crossvalidationCP
cran
R
Package ‘crossvalidationCP’ May 22, 2023

Title Cross-Validation for Change-Point Regression
Version 1.1
Depends R (>= 3.3.0)
Imports changepoint (>= 2.0), fpopw (>= 1.1), wbs (>= 1.4), stats
Suggests testthat (>= 2.0.0)
Description Implements the cross-validation methodology from Pein and Shah (2021) <arXiv:2112.03220>. Can be customised by providing different cross-validation criteria, estimators for the change-point locations and local parameters, and freely chosen folds. Pre-implemented estimators and criteria are available. It also includes our own implementation of the COPPS procedure <doi:10.1214/19-AOS1814>.
License GPL-3
NeedsCompilation no
Author <NAME> [aut, cre]
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2023-05-22 18:30:02 UTC

R topics documented: crossvalidationCP-package, convertSingleParam, COPPS, criteria, crossvalidationCP, estimators, VfoldCV

crossvalidationCP-package: Cross-validation for change-point regression

Description
Implements the cross-validation methodology from Pein and Shah (2021). The approach can be customised by providing cross-validation criteria, estimators for the change-point locations and local parameters, and freely chosen folds. Pre-implemented estimators and criteria are available. It also includes our own implementation of the COPPS procedure (Zou et al., 2020). By default, 5-fold cross-validation with ordered folds, absolute error loss, and least squares estimation for estimating the change-point locations is used.

Details
The main function is crossvalidationCP. It selects among a list of parameters the one with the smallest cross-validation criterion for a given method. The user can freely choose the folds, the local estimator and the criterion. Several pre-implemented estimators and criteria are available. Estimators have to allow a list of parameters at the same time. One can use convertSingleParam to convert a function allowing only a single parameter into a function that allows a list of parameters.

A simpler, but more limited access is given by the functions VfoldCV, COPPS, CV1 and CVmod. VfoldCV performs V-fold cross-validation, where the tuning parameter is directly the number of change-points. COPPS implements the COPPS procedure (Zou et al., 2020), i.e. 2-fold cross-validation with Order-Preserved Sample-Splitting and the tuning parameter being again the number of change-points. CV1 and CVmod do the same, but with absolute error loss and the modified quadratic error loss, see (15) and (16) in Pein and Shah (2021), instead of quadratic error loss.

Note that COPPS can be problematic when larger changes occur at odd locations. For a detailed discussion of why standard quadratic error loss can lead to misestimation, see Section 2 in Pein and Shah (2021). By default, we recommend using absolute error loss and 5-fold cross-validation as offered by VfoldCV. So far only univariate data is supported, but support for multivariate data is planned.

References
Pein, F., and Shah, R. D. (2021) Cross-validation for change-point regression: pitfalls and solutions. arXiv:2112.03220.
Zou, C., Wang, G., and Li, R. (2020) Consistent selection of the number of change-points via sample-splitting. The Annals of Statistics, 48(1), 413–439.
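To make the two fold constructions used throughout the package concrete, here is a minimal R sketch of the splitting schemes described above (the values are illustrative; the package constructs these folds internally):

n <- 10L
# ordered folds used by default (here V = 5): fold i is seq(i, n, V)
V <- 5L
lapply(seq_len(V), function(i) seq(i, n, V))
# Order-Preserved Sample-Splitting used by COPPS, CV1 and CVmod:
# the two folds are the odd and the even indexed observations
list(odd = seq(1L, n, 2L), even = seq(2L, n, 2L))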
See Also
crossvalidationCP, estimators, criteria, convertSingleParam, VfoldCV, COPPS, CV1, CVmod

Examples
# call with default parameters:
# 5-fold cross-validation with absolute error loss, least squares estimation,
# and possible parameters being 0 to 5 change-points
Y <- rnorm(100)
(ret <- crossvalidationCP(Y = Y))
# a simpler, but more limited access to it is offered by VfoldCV()
identical(VfoldCV(Y = Y), ret)

# more interesting data and more detailed output
set.seed(1L)
Y <- c(rnorm(50), rnorm(50, 5), rnorm(50), rnorm(50, 5))
VfoldCV(Y = Y, output = "detailed")
# finds the correct change-points at 50, 100, 150
# (plus the start and end points 0 and 200)

# reducing the maximal number of change-points to 2
VfoldCV(Y = Y, Kmax = 2)

# crossvalidationCP is more flexible and allows a list of parameters
# here only 1 or 2 change-points are allowed
crossvalidationCP(Y = Y, param = as.list(1:2))

# reducing the number of folds to 3
ret <- VfoldCV(Y = Y, V = 3L, output = "detailed")
# the same but with explicitly specified folds
identical(crossvalidationCP(Y = Y, folds = list(seq(1, 200, 3), seq(2, 200, 3), seq(3, 200, 3)),
                            output = "detailed"), ret)

# 2-fold cross-validation with Order-Preserved Sample-Splitting
ret <- crossvalidationCP(Y = Y, folds = "COPPS", output = "detailed")
# a simpler access to it is offered by CV1()
identical(CV1(Y = Y, output = "detailed"), ret)

# different criterion: quadratic error loss
ret <- crossvalidationCP(Y = Y, folds = "COPPS", output = "detailed", criterion = criterionL2loss)
# same as COPPS procedure; as offered by COPPS()
identical(COPPS(Y = Y, output = "detailed"), ret)

# COPPS potentially fails to provide a good selection when large changes occur at odd locations
# Example 1 in (Pein and Shah, 2021), see Section 2.2 in this paper for more details
set.seed(1)
exampleY <- rnorm(102, c(rep(10, 46), rep(0, 5), rep(30, 51)))
# misses one change-point
crossvalidationCP(Y = exampleY, folds = "COPPS", criterion = criterionL2loss)
# correct number of change-points when the modified criterion (or absolute error loss) is used
(ret <- crossvalidationCP(Y = exampleY, folds = "COPPS", criterion = criterionMod))
# a simpler access to it is offered by CVmod()
identical(CVmod(Y = exampleY), ret)

# manually given criterion; identical to criterionL1loss()
testCriterion <- function(testset, estset, value = NULL, ...) {
  if (!is.null(value)) {
    return(sum(abs(testset - value)))
  }
  sum(abs(testset - mean(estset)))
}
identical(crossvalidationCP(Y = Y, criterion = testCriterion, output = "detailed"),
          crossvalidationCP(Y = Y, output = "detailed"))

# PELT as a local estimator instead of least squares estimation
# param must contain parameters that are acceptable for the given estimator
crossvalidationCP(Y = Y, estimator = pelt, output = "detailed",
                  param = list("SIC", "MBIC", 3 * log(length(Y))))
# argument minseglen of pelt specified in ...
crossvalidationCP(Y = Y, estimator = pelt, output = "detailed",
                  param = list("SIC", "MBIC", 3 * log(length(Y))), minseglen = 60)

convertSingleParam: Provides estimators that allow a list of parameters

Description
Converts estimators allowing single parameters to estimators allowing a list of parameters. The resulting function can be passed to the argument estimator in the cross-validation functions, see See Also.

Usage
convertSingleParam(estimator)

Arguments
estimator: the function to be converted, i.e. a function providing a local estimate.
The function must have the arguments Y, param and ..., where Y will be the observations, and param a single parameter of arbitrary type. Hence lists can be used when multiple parameters of different types are needed. It has to return either a vector with the estimated change-points or a list containing the named entries cps and value. In this case cps has to be a numeric vector with the estimated change-points as before and value has to be a list of length one entry longer than cps giving the locally estimated values. An example is given below.

Value
a function that can be passed to the argument estimator in the cross-validation functions, see the functions listed in See Also

References
Pein, F., and Shah, R. D. (2021) Cross-validation for change-point regression: pitfalls and solutions. arXiv:2112.03220.

See Also
crossvalidationCP, VfoldCV, COPPS, CV1, CVmod

Examples
# wrapper around pelt to demonstrate an estimator that allows a single parameter only
singleParamEstimator <- function(Y, param, minseglen = 1, ...) {
  if (is.numeric(param)) {
    ret <- changepoint::cpt.mean(data = Y, penalty = "Manual", pen.value = param,
                                 method = "PELT", minseglen = minseglen)
  } else {
    ret <- changepoint::cpt.mean(data = Y, penalty = param, method = "PELT",
                                 minseglen = minseglen)
  }
  list(cps = ret@cpts[-length(ret@cpts)], value = as.list(ret@param.est$mean))
}

# conversion to an estimator that is suitable for crossvalidationCP() etc.
estimatorMultiParam <- convertSingleParam(singleParamEstimator)
crossvalidationCP(rnorm(100), estimator = estimatorMultiParam, param = list("SIC", "MBIC"))

COPPS: Cross-validation with Order-Preserved Sample-Splitting

Description
Tuning parameters are selected by a generalised COPPS procedure. All functions use Order-Preserved Sample-Splitting, meaning that the folds will be the odd and even indexed observations. The three functions differ in which cross-validation criterion they are using. COPPS is the original COPPS procedure (Zou et al., 2020), i.e. uses quadratic error loss. CV1 and CVmod use absolute error loss and the modified quadratic error loss, respectively.

Usage
COPPS(Y, param = 5L, estimator = leastSquares, output = c("param", "fit", "detailed"), ...)
CV1(Y, param = 5L, estimator = leastSquares, output = c("param", "fit", "detailed"), ...)
CVmod(Y, param = 5L, estimator = leastSquares, output = c("param", "fit", "detailed"), ...)

Arguments
Y: the observations, can be any data type that supports the function length and the operator [] and can be passed to estimator and the cross-validation criterion, e.g. a numeric vector or a list. Support for matrices, i.e. for multivariate data, is planned but not implemented so far
param: a list giving the possible tuning parameters. Alternatively, a single integer which will be interpreted as the maximal number of change-points and converted to as.list(0:param)
estimator: a function providing a local estimate. For pre-implemented estimators see estimators. The function must have the arguments Y, param and ..., where Y will be a subset of the observations, and param and ... will be the corresponding arguments of the called function. Note that ... will be passed to estimator and the cross-validation criterion. The return value must be either a list of length length(param) with each entry containing the estimated change-point locations for the given entry in param or a list containing the named entries cps and value.
In this case cps has to be a list of the estimated change-points as before and value has to be a list of the locally estimated values for each entry in param, i.e. each list entry has to be a list itself of length one entry longer than the corresponding entry in cps. The function convertSingleParam offers the conversion of an estimator allowing a single parameter into an estimator allowing multiple parameters
output: a string specifying the output, either "param", "fit" or "detailed". For details what they mean see Value
...: additional parameters that are passed to estimator and the cross-validation criterion

Value
if output == "param", the selected tuning parameter, i.e. an entry from param. If output == "fit", a list with the entries param, giving the selected tuning parameter, and fit. The named entry fit is a list giving the returned fit obtained by applying estimator to the whole data Y with the selected tuning parameter. The returned value is transformed to a list with an entry cps giving the estimated change-points and, if provided by estimator, an entry value giving the estimated local values. If output == "detailed", the same as for output == "fit", but additionally the entries CP, CVodd, and CVeven giving the calculated cross-validation criteria for all parameter entries. CVodd and CVeven are the criteria when the odd / even observations are in the test set, respectively. CP is the sum of those two.

References
Pein, F., and Shah, R. D. (2021) Cross-validation for change-point regression: pitfalls and solutions. arXiv:2112.03220.
Zou, C., Wang, G., and Li, R. (2020) Consistent selection of the number of change-points via sample-splitting. The Annals of Statistics, 48(1), 413–439.

See Also
estimators, criteria, convertSingleParam

Examples
# call with default parameters:
# 2-fold cross-validation with ordered folds, absolute error loss,
# least squares estimation, and possible parameters being 0 to 5 change-points
CV1(Y = rnorm(100))
# the same, but with modified error loss
CVmod(Y = rnorm(100))
# the same, but with quadratic error loss, identical to the COPPS procedure
COPPS(Y = rnorm(100))

# more interesting data and more detailed output
set.seed(1L)
Y <- c(rnorm(50), rnorm(50, 5), rnorm(50), rnorm(50, 5))
CV1(Y = Y, output = "detailed")
# finds the correct change-points at 50, 100, 150
# (plus the start and end points 0 and 200)

# list of parameters, only allowing 1 or 2 change-points
CVmod(Y = Y, param = as.list(1:2))

# COPPS potentially fails to provide a good selection when large changes occur at odd locations
# Example 1 in (Pein and Shah, 2021), see Section 2.2 in this paper for more details
set.seed(1)
exampleY <- rnorm(102, c(rep(10, 46), rep(0, 5), rep(30, 51)))
# misses one change-point
COPPS(Y = exampleY)
# correct number of change-points when the modified criterion (or absolute error loss) is used
CVmod(Y = exampleY)

# PELT as a local estimator instead of least squares estimation
# param must contain parameters that are acceptable for the given estimator
CV1(Y = Y, estimator = pelt, output = "detailed", param = list("SIC", "MBIC", 3 * log(length(Y))))
# argument minseglen of pelt specified in ...
CVmod(Y = Y, estimator = pelt, output = "detailed",
      param = list("SIC", "MBIC", 3 * log(length(Y))), minseglen = 30)

criteria: Pre-implemented cross-validation criteria

Description
criterionL1loss, criterionMod and criterionL2loss compute the cross-validation criterion with L1-loss, the modified criterion and the criterion with L2-loss for univariate data, see (15), (16), and (6) in Pein and Shah (2021), respectively. If value is given (i.e. value != NULL), then value replaces the empirical means. All criteria can be passed to the argument criterion in the cross-validation functions, see the functions listed in See Also.

Usage
criterionL1loss(testset, estset, value = NULL, ...)
criterionMod(testset, estset, value = NULL, ...)
criterionL2loss(testset, estset, value = NULL, ...)

Arguments
testset: a numeric vector giving the observations in the test set / fold. For criterionMod, if length(testset) == 1L, NaN will be returned, see Details
estset: a numeric vector giving the observations in the estimation set
value: a single numeric giving the local value on the segment or NULL. If NULL the value will be mean(estset)
...: unused

Details
criterionMod requires that the minimal segment length is at least 2. So far the only pre-implemented estimators that allow for such an option are pelt and binseg, where one can specify minseglen in ....

Value
a single numeric

References
Pein, F., and Shah, R. D. (2021) Cross-validation for change-point regression: pitfalls and solutions. arXiv:2112.03220.

See Also
crossvalidationCP, VfoldCV, COPPS, CV1, CVmod

Examples
# all functions can be called directly, e.g.
Y <- rnorm(100)
criterionL1loss(testset = Y[seq(1, 100, 2)], estset = Y[seq(2, 100, 2)])
# but their main purpose is to serve as the criterion in the cross-validation functions, e.g.
crossvalidationCP(rnorm(100), criterion = criterionL1loss)

crossvalidationCP: Cross-validation in change-point regression

Description
Generic function for cross-validation to select tuning parameters in change-point regression. It selects among a list of parameters the one with the smallest cross-validation criterion for a given method. The cross-validation criterion, the estimator, and the folds can be specified by the user.

Usage
crossvalidationCP(Y, param = 5L, folds = 5L, estimator = leastSquares,
                  criterion = criterionL1loss, output = c("param", "fit", "detailed"), ...)

Arguments
Y: the observations, can be any data type that supports the function length and the operator [] and can be passed to estimator and criterion, e.g. a numeric vector or a list. Support for matrices, i.e. for multivariate data, is planned but not implemented so far
param: a list giving the possible tuning parameters. Alternatively, a single integer which will be interpreted as the maximal number of change-points and converted to as.list(0:param). All values have to be acceptable values for the specified estimator
folds: either a list, a single integer or the string "COPPS" specifying the folds. If a list, each entry should be an integer vector with values between 1 and length(Y) giving the indices of the observations in the fold. A single integer specifies the number of folds and ordered folds are automatically created, i.e. fold i will be seq(i, length(Y), folds). "COPPS" means that a generalised COPPS procedure (Zou et al., 2020) will be used, i.e. 2-fold cross-validation with Order-Preserved Sample-Splitting, meaning that the folds will be the odd and even indexed observations.
Note that observations will be given in reverse order to the cross-validation criterion when the odd-indexed observations are in the test set. This allows criteria such as the modified criterion, where for the odd-indexed the first and for the even-indexed the last observation is removed
estimator: a function providing a local estimate. For pre-implemented estimators see estimators. The function must have the arguments Y, param and ..., where Y will be a subset of the observations, and param and ... will be the corresponding arguments of the called function. Note that ... will be passed to estimator and criterion. The return value must be either a list of length length(param) with each entry containing the estimated change-point locations for the given entry in param or a list containing the named entries cps and value. In this case cps has to be a list of the estimated change-points as before and value has to be a list of the locally estimated values for each entry in param, i.e. each list entry has to be a list itself of length one entry longer than the corresponding entry in cps. The function convertSingleParam offers the conversion of an estimator allowing a single parameter into an estimator allowing multiple parameters
criterion: a function providing the cross-validation criterion. For pre-implemented criteria see criteria. The function must have the arguments testset, estset and value. testset and estset are the observations of one segment that are in the test and estimation set, respectively. value is the local parameter on the segment if provided by estimator, otherwise NULL. Additionally, ... is possible and potentially necessary to absorb arguments, since the argument ... of crossvalidationCP will be passed to estimator and criterion. It must return a single numeric. All return values will be summed accordingly and which.min will be called on the vector to determine the parameter with the smallest criterion, hence some NaN values etc. are allowed
output: a string specifying the output, either "param", "fit" or "detailed". For details what they mean see Value
...: additional parameters that are passed to estimator and criterion

Value
if output == "param", the selected tuning parameter, i.e. an entry from param. If output == "fit", a list with the entries param, giving the selected tuning parameter, and fit. The named entry fit is a list giving the returned fit obtained by applying estimator to the whole data Y with the selected tuning parameter. The returned value is transformed to a list with an entry cps giving the estimated change-points and, if provided by estimator, an entry value giving the estimated local values. If output == "detailed", the same as for output == "fit", but additionally an entry CP giving all calculated cross-validation criteria. Those values are summed over all folds

References
Pein, F., and Shah, R. D. (2021) Cross-validation for change-point regression: pitfalls and solutions. arXiv:2112.03220.
Zou, C., Wang, G., and Li, R. (2020) Consistent selection of the number of change-points via sample-splitting. The Annals of Statistics, 48(1), 413–439.
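As an illustration of the folds argument described above, here is a minimal sketch (assuming the package is attached; the contiguous folds in the second call are an illustrative choice, not a package default):

Y <- c(rnorm(50), rnorm(50, 5))
# default behaviour: an integer creates ordered (interleaved) folds,
# here equivalent to folds = list(seq(1, 100, 2), seq(2, 100, 2))
crossvalidationCP(Y = Y, folds = 2L)
# freely chosen folds: two contiguous halves of the observations
crossvalidationCP(Y = Y, folds = list(1:50, 51:100))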
See Also
estimators, criteria, convertSingleParam, VfoldCV, COPPS, CV1, CVmod

Examples
# call with default parameters:
# 5-fold cross-validation with absolute error loss, least squares estimation,
# and possible parameters being 0 to 5 change-points
# a simpler access to it is offered by VfoldCV()
crossvalidationCP(Y = rnorm(100))

# more interesting data and more detailed output
set.seed(1L)
Y <- c(rnorm(50), rnorm(50, 5), rnorm(50), rnorm(50, 5))
crossvalidationCP(Y = Y, output = "detailed")
# finds the correct change-points at 50, 100, 150
# (plus the start and end points 0 and 200)

# list of parameters, only allowing 1 or 2 change-points
crossvalidationCP(Y = Y, param = as.list(1:2))

# reducing the number of folds to 3
ret <- crossvalidationCP(Y = Y, folds = 3L, output = "detailed")
# the same but with explicitly specified folds
identical(crossvalidationCP(Y = Y, folds = list(seq(1, 200, 3), seq(2, 200, 3), seq(3, 200, 3)),
                            output = "detailed"), ret)

# 2-fold cross-validation with Order-Preserved Sample-Splitting
ret <- crossvalidationCP(Y = Y, folds = "COPPS", output = "detailed")
# a simpler access to it is offered by CV1()
identical(CV1(Y = Y, output = "detailed"), ret)

# different criterion: quadratic error loss
ret <- crossvalidationCP(Y = Y, folds = "COPPS", output = "detailed", criterion = criterionL2loss)
# same as COPPS procedure; as offered by COPPS()
identical(COPPS(Y = Y, output = "detailed"), ret)

# COPPS potentially fails to provide a good selection when large changes occur at odd locations
# Example 1 in (Pein and Shah, 2021), see Section 2.2 in this paper for more details
set.seed(1)
exampleY <- rnorm(102, c(rep(10, 46), rep(0, 5), rep(30, 51)))
# misses one change-point
crossvalidationCP(Y = exampleY, folds = "COPPS", criterion = criterionL2loss)
# correct number of change-points when the modified criterion (or absolute error loss) is used
(ret <- crossvalidationCP(Y = exampleY, folds = "COPPS", criterion = criterionMod))
# a simpler access to it is offered by CVmod()
identical(CVmod(Y = exampleY), ret)

# manually given criterion; identical to criterionL1loss()
testCriterion <- function(testset, estset, value = NULL, ...) {
  if (!is.null(value)) {
    return(sum(abs(testset - value)))
  }
  sum(abs(testset - mean(estset)))
}
identical(crossvalidationCP(Y = Y, criterion = testCriterion, output = "detailed"),
          crossvalidationCP(Y = Y, output = "detailed"))

# PELT as a local estimator instead of least squares estimation
# param must contain parameters that are acceptable for the given estimator
crossvalidationCP(Y = Y, estimator = pelt, output = "detailed",
                  param = list("SIC", "MBIC", 3 * log(length(Y))))
# argument minseglen of pelt specified in ...
crossvalidationCP(Y = Y, estimator = pelt, output = "detailed",
                  param = list("SIC", "MBIC", 3 * log(length(Y))), minseglen = 60)

estimators: Pre-implemented estimators

Description
Pre-implemented change-point estimators that can be passed to the argument estimator in the cross-validation functions, see the functions listed in See Also.

Usage
leastSquares(Y, param, ...)
pelt(Y, param, ...)
binseg(Y, param, ...)
wbs(Y, param, ...)

Arguments
Y: a numeric vector giving the observations
param: a list giving the possible tuning parameters. See Details to see which tuning parameters are allowed for which function
...: additional arguments, see Details to see which arguments are allowed for which function

Details
leastSquares implements least squares estimation by using the segment neighbourhoods algorithm with functional pruning from Rigaill (2015), see also Auger and Lawrence (1989) for the original segment neighbourhoods algorithm. It calls Fpsn. Each list entry in param has to be a single integer giving the number of change-points.

optimalPartitioning is outdated. It will give the same results as leastSquares, but is slower. It is part of the package for backwards compatibility only.

pelt implements PELT (Killick et al., 2012), i.e. penalised maximum likelihood estimation computed by a pruned dynamic program. For each list entry in param it calls cpt.mean with method = "PELT" and penalty = param[[i]], or, when param[[i]] is a numeric, with penalty = "Manual" and pen.value = param[[i]]. Hence, each entry in param must be a single numeric or an argument that can be passed to penalty. Additionally, minseglen can be specified in ...; by default minseglen = 1.

binseg implements binary segmentation (Vostrikova, 1981). The call is the same as for pelt, but with method = "BinSeg". Additionally, the maximal number of change-points Q can be specified in ...; by default Q = 5. Alternatively, each list entry of param can be a list itself containing the named entries penalty and Q. Note that this estimator differs from binary segmentation in Zou et al. (2020): it requires a penalty instead of a given number of change-points. Warnings that Q is chosen too small are suppressed when Q is given in param, but not when it is a global parameter specified in ... or Q = 5 by default.

wbs implements wild binary segmentation (Fryzlewicz, 2014). It calls changepoints with th.const = param, hence param has to be a list of positive scalars. Additionally, ... will be passed.

Value
For leastSquares and wbs, a list of length length(param) with each entry containing the estimated change-point locations for the given entry in param. For the other functions, a list containing the named entries cps and value, with cps a list of the estimated change-points as before and value a list of the locally estimated values for each entry in param, i.e. each list entry is a list itself of length one entry longer than the corresponding entry in cps.

References
Pein, F., and Shah, R. D. (2021) Cross-validation for change-point regression: pitfalls and solutions. arXiv:2112.03220.
Rigaill, G. (2015) A pruned dynamic programming algorithm to recover the best segmentations with 1 to Kmax change-points. Journal de la Societe Francaise de Statistique, 156(4), 180–205.
Auger, I. E., and Lawrence, C. E. (1989) Algorithms for the Optimal Identification of Segment Neighborhoods. Bulletin of Mathematical Biology, 51(1), 39–54.
Killick, R., Fearnhead, P., and Eckley, I. A. (2012) Optimal detection of changepoints with a linear computational cost. Journal of the American Statistical Association, 107(500), 1590–1598.
Vostrikova, L. (1981) Detecting 'disorder' in multidimensional random processes. Soviet Mathematics Doklady, 24, 55–59.
Fryzlewicz, P. (2014) Wild binary segmentation for multiple change-point detection. The Annals of Statistics, 42(6), 2243–2281.
Zou, C., Wang, G., and Li, R. (2020) Consistent selection of the number of change-points via sample-splitting. The Annals of Statistics, 48(1), 413–439.

See Also
crossvalidationCP, VfoldCV, COPPS, CV1, CVmod

Examples
# all functions can be called directly, e.g.
leastSquares(Y = rnorm(100), param = 2)
# but their main purpose is to serve as a local estimator in the cross-validation functions, e.g.
crossvalidationCP(rnorm(100), estimator = leastSquares)
# param must contain values that are suitable for the given estimator
crossvalidationCP(rnorm(100), estimator = pelt, param = list("SIC", "MBIC"))

VfoldCV: V-fold cross-validation

Description
Selects the number of change-points by minimizing a V-fold cross-validation criterion. The criterion, the estimator, and the number of folds can be specified by the user.

Usage
VfoldCV(Y, V = 5L, Kmax = 8L, adaptiveKmax = TRUE, tolKmax = 3L, estimator = leastSquares,
        criterion = criterionL1loss, output = c("param", "fit", "detailed"), ...)

Arguments
Y: the observations, can be any data type that supports the function length and the operator [] and can be passed to estimator and criterion, e.g. a numeric vector or a list. Support for matrices, i.e. for multivariate data, is planned but not implemented so far
V: a single integer giving the number of folds. Ordered folds will automatically be created, i.e. fold i will be seq(i, length(Y), V)
Kmax: a single integer giving the maximal number of change-points
adaptiveKmax: a single logical indicating whether Kmax should be chosen adaptively. If true, Kmax will be doubled if the estimated number of change-points is not at least tolKmax smaller than Kmax
tolKmax: a single integer specifying by how much the estimated number of change-points has to be smaller than Kmax
estimator: a function providing a local estimate. For pre-implemented estimators see estimators. The function must have the arguments Y, param and ..., where Y will be a subset of the observations, param will be list(0:Kmax), and ... will be the argument ... of VfoldCV. Note that ... will be passed to estimator and criterion. The return value must be either a list of length length(param) with each entry containing the estimated change-point locations for the given entry in param or a list containing the named entries cps and value. In this case cps has to be a list of the estimated change-points as before and value has to be a list of the locally estimated values for each entry in param, i.e. each list entry has to be a list itself of length one entry longer than the corresponding entry in cps. The function convertSingleParam offers the conversion of an estimator allowing a single parameter into an estimator allowing multiple parameters. Of the currently pre-implemented estimators only leastSquares accepts param == list(0:Kmax). Estimators that allow param to differ from list(0:Kmax) can be used in crossvalidationCP
criterion: a function providing the cross-validation criterion. For pre-implemented criteria see criteria. The function must have the arguments testset, estset and value. testset and estset are the observations of one segment that are in the test and estimation set, respectively. value is the local parameter on the segment if provided by estimator, otherwise NULL. Additionally, ... is possible and potentially necessary to absorb arguments, since the argument ... of VfoldCV will be passed to estimator and criterion. It must return a single numeric. All return values will be summed accordingly and which.min will be called on the vector to determine the parameter with the smallest criterion. Hence some NaN values etc. are allowed
output: a string specifying the output, either "param", "fit" or "detailed". For details what they mean see Value
...: additional parameters that are passed to estimator and criterion

Value
if output == "param", the selected number of change-points, i.e. an integer between 0 and Kmax. If output == "fit", a list with the entries param, giving the selected number of change-points, and fit. The named entry fit is a list giving the returned fit obtained by applying estimator to the whole data Y with the selected tuning parameter. The returned value is transformed to a list with an entry cps giving the estimated change-points and, if provided by estimator, an entry value giving the estimated local values. If output == "detailed", the same as for output == "fit", but additionally an entry CP giving all calculated cross-validation criteria. Those values are summed over all folds

References
Pein, F., and Shah, R. D. (2021) Cross-validation for change-point regression: pitfalls and solutions. arXiv:2112.03220.

See Also
estimators, criteria, convertSingleParam

Examples
# call with default parameters:
# 5-fold cross-validation with absolute error loss, least squares estimation,
# and 0 to 5 change-points
VfoldCV(Y = rnorm(100))

# more interesting data and more detailed output
set.seed(1L)
Y <- c(rnorm(50), rnorm(50, 5), rnorm(50), rnorm(50, 5))
VfoldCV(Y = Y, output = "detailed")
# finds the correct change-points at 50, 100, 150
# (plus the start and end points 0 and 200)

# reducing the number of folds to 3
VfoldCV(Y = Y, V = 3L, output = "detailed")

# reducing the maximal number of change-points to 2
VfoldCV(Y = Y, Kmax = 2)

# different criterion: modified error loss
VfoldCV(Y = Y, output = "detailed", criterion = criterionMod)

# manually given criterion; identical to criterionL1loss()
testCriterion <- function(testset, estset, value = NULL, ...) {
  if (!is.null(value)) {
    return(sum(abs(testset - value)))
  }
  sum(abs(testset - mean(estset)))
}
identical(VfoldCV(Y = Y, criterion = testCriterion, output = "detailed"),
          VfoldCV(Y = Y, output = "detailed"))
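As a complement to the examples above, the following sketch recomputes a V-fold criterion by hand using the package's leastSquares estimator and criterionL1loss. It illustrates the principle described in the Description and assumes the package is attached; it is a simplified sketch, not a reimplementation of the package's exact internal computation (in particular the mapping of estimated change-points to test-set segments is a simplification):

set.seed(1L)
Y <- c(rnorm(50), rnorm(50, 5))
n <- length(Y)
V <- 5L
cvCriterion <- function(K) {  # K = candidate number of change-points
  total <- 0
  for (v in seq_len(V)) {
    testIdx <- seq(v, n, V)                 # ordered fold, as described above
    estIdx  <- setdiff(seq_len(n), testIdx)
    # estimate change-point locations on the estimation set
    cps <- leastSquares(Y = Y[estIdx], param = list(K))[[1]]
    # map the estimated locations back to boundaries on the full index set
    bnd <- c(0, estIdx[cps], n)
    for (s in seq_len(length(bnd) - 1L)) {
      segTest <- testIdx[testIdx > bnd[s] & testIdx <= bnd[s + 1L]]
      segEst  <- estIdx[estIdx > bnd[s] & estIdx <= bnd[s + 1L]]
      if (length(segTest) > 0 && length(segEst) > 0) {
        total <- total + criterionL1loss(testset = Y[segTest], estset = Y[segEst])
      }
    }
  }
  total
}
sapply(0:3, cvCriterion)  # the minimiser should typically be K = 1 here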
pipefittr
cran
R
Package ‘pipefittr’ October 14, 2022

Type Package
Title Convert Nested Functions to Pipes
Version 0.1.2
Author <NAME>, <NAME>, <NAME>, <NAME>, <NAME>
Maintainer <NAME> <<EMAIL>>
Description To take nested function calls and convert them to a more readable form using pipes from package 'magrittr'.
Depends R (>= 3.0.0)
Imports magrittr, miniUI (>= 0.1.1), rstudioapi (>= 0.4), shiny (>= 0.13), stringr
Suggests testthat
License MIT + file LICENSE
LazyData TRUE
RoxygenNote 5.0.1
NeedsCompilation no
Repository CRAN
Date/Publication 2016-09-14 21:44:10

R topics documented: make_list, make_output, pipefittr, splitlisttodf, splitmultistrtolist

make_list: make_list

Description
make_list

Usage
make_list(string)

Arguments
string: a string to be converted into a list

make_output: make_output

Description
make_output

Usage
make_output(funclist)

Arguments
funclist: a list of functions

pipefittr: Convert nested calls to magrittr's pipes

Description
To take nested function calls and convert them to a more readable form using magrittr's pipes.

Usage
pipefittr(string, pretty = F)

Arguments
string: a string, to be converted into magrittr's pipe syntax
pretty: create a multiline output, which is prettier. Try this.

Examples
teststring = "jump_on(bop_on( scoop_up( hop_through(foo_foo, forest), field_mouse ), head))"
pipefittr(teststring, pretty = TRUE)

splitlisttodf: Splits a list into a data.frame

Description
Splits a list into a data.frame

Usage
splitlisttodf(listtosplit)

Arguments
listtosplit: a list to be converted into a data.frame

splitmultistrtolist: Splits a string into a list

Description
Splits a string into a list

Usage
splitmultistrtolist(stringtosplit)

Arguments
stringtosplit: a string to be split
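For intuition, the nested teststring in the example above corresponds to the magrittr pipeline sketched below; this is the kind of output pipefittr is expected to produce (shown as comments for illustration, since the exact formatting of the returned string may differ):

# the nested teststring above is expected to convert to a pipeline like:
# foo_foo %>%
#   hop_through(forest) %>%
#   scoop_up(field_mouse) %>%
#   bop_on(head) %>%
#   jump_on()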
github.com/Azure/azure-sdk-for-go/sdk/storage/azfile
go
Go
### Azure File Storage SDK for Go

> Service Version: 2022-11-02

Azure File Shares offers fully managed file shares in the cloud that are accessible via the industry standard [Server Message Block (SMB) protocol](https://docs.microsoft.com/windows/desktop/FileIO/microsoft-smb-protocol-and-cifs-protocol-overview). Azure file shares can be mounted concurrently by cloud or on-premises deployments of Windows, Linux, and macOS. Additionally, Azure file shares can be cached on Windows Servers with Azure File Sync for fast access near where the data is being used.

[Source code](https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/storage/azfile) | [API reference documentation](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azfile) | [REST API documentation](https://docs.microsoft.com/rest/api/storageservices/file-service-rest-api) | [Product documentation](https://docs.microsoft.com/azure/storage/files/storage-files-introduction)

#### Getting started

##### Install the package

Install the Azure File Storage SDK for Go with [go get](https://pkg.go.dev/cmd/go#hdr-Add_dependencies_to_current_module_and_install_them):

```
go get github.com/Azure/azure-sdk-for-go/sdk/storage/azfile
```

If you plan to authenticate with Azure Active Directory (recommended), also install the [azidentity](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity) module.

```
go get github.com/Azure/azure-sdk-for-go/sdk/azidentity
```

##### Prerequisites

A supported [Go](https://go.dev/dl/) version (the Azure SDK supports the two most recent Go releases). You need an [Azure subscription](https://azure.microsoft.com/free/) and a [Storage Account](https://docs.microsoft.com/azure/storage/common/storage-account-overview) to use this package. To create a new Storage Account, you can use the [Azure Portal](https://docs.microsoft.com/azure/storage/common/storage-quickstart-create-account?tabs=azure-portal), [Azure PowerShell](https://docs.microsoft.com/azure/storage/common/storage-quickstart-create-account?tabs=azure-powershell), or the [Azure CLI](https://docs.microsoft.com/azure/storage/common/storage-quickstart-create-account?tabs=azure-cli). Here's an example using the Azure CLI:

```
az storage account create --name MyStorageAccount --resource-group MyResourceGroup --location westus --sku Standard_LRS
```

##### Authenticate the client

The Azure File Storage SDK for Go allows you to interact with four types of resources: the storage account itself, file shares, directories, and files. Interaction with these resources starts with an instance of a client. To create a client object, you will need the storage account's file service URL and a credential that allows you to access the storage account. The [azidentity](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity) module makes it easy to add Azure Active Directory support for authenticating Azure SDK clients with their corresponding Azure services.
```
// create a credential for authenticating with Azure Active Directory
cred, err := azidentity.NewDefaultAzureCredential(nil)
// TODO: handle err

// create service.Client for the specified storage account that uses the above credential
client, err := service.NewClient(
	"https://<my-storage-account-name>.file.core.windows.net/", cred,
	&service.ClientOptions{FileRequestIntent: to.Ptr(service.ShareTokenIntentBackup)})
// TODO: handle err
```

Learn more about enabling Azure Active Directory for authentication with Azure Storage: [Authorize access to blobs using Azure Active Directory](https://learn.microsoft.com/azure/storage/common/storage-auth-aad)

Other options for authentication include connection strings, shared key, and shared access signatures (SAS). Use the appropriate client constructor function for the authentication mechanism you wish to use.

#### Key concepts

Azure file shares can be used to:

* Completely replace or supplement traditional on-premises file servers or NAS devices.
* "Lift and shift" applications to the cloud that expect a file share to store file application or user data.
* Simplify new cloud development projects with shared application settings, diagnostic shares, and Dev/Test/Debug tool file shares.

##### Goroutine safety

We guarantee that all client instance methods are goroutine-safe and independent of each other ([guideline](https://azure.github.io/azure-sdk/golang_introduction.html#thread-safety)). This ensures that the recommendation of reusing client instances is always safe, even across goroutines.

##### Additional concepts

[Client options](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azcore/policy#ClientOptions) | [Accessing the response](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime#WithCaptureResponse) | [Handling failures](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azcore#ResponseError) | [Logging](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azcore/log)

#### Examples

##### Create a share and upload a file

```
const (
	shareName = "sample-share"
	dirName   = "sample-dir"
	fileName  = "sample-file"
)

// Get a connection string to our Azure Storage account. You can
// obtain your connection string from the Azure Portal (click
// Access Keys under Settings in the Portal Storage account blade)
// or using the Azure CLI with:
//
//   az storage account show-connection-string --name <account_name> --resource-group <resource_group>
//
// And you can provide the connection string to your application
// using an environment variable.
connectionString := "<connection_string>"

// Path to the local file to upload
localFilePath := "<path_to_local_file>"

// Get reference to a share and create it
shareClient, err := share.NewClientFromConnectionString(connectionString, shareName, nil)
// TODO: handle error

_, err = shareClient.Create(context.TODO(), nil)
// TODO: handle error

// Get reference to a directory and create it
dirClient := shareClient.NewDirectoryClient(dirName)
_, err = dirClient.Create(context.TODO(), nil)
// TODO: handle error

// open the file for reading
file, err := os.OpenFile(localFilePath, os.O_RDONLY, 0)
// TODO: handle error
defer file.Close()

// get the size of the file
fInfo, err := file.Stat()
// TODO: handle error
fSize := fInfo.Size()

// create the file
fClient := dirClient.NewFileClient(fileName)
_, err = fClient.Create(context.TODO(), fSize, nil)
// TODO: handle error

// upload the file
err = fClient.UploadFile(context.TODO(), file, nil)
// TODO: handle error
```

##### Download a file

```
const (
	shareName = "sample-share"
	dirName   = "sample-dir"
	fileName  = "sample-file"
)

connectionString := "<connection_string>"

// Path to save the downloaded file
localFilePath := "<path_to_local_file>"

// Get reference to the share
shareClient, err := share.NewClientFromConnectionString(connectionString, shareName, nil)
// TODO: handle error

// Get reference to the directory
dirClient := shareClient.NewDirectoryClient(dirName)

// Get reference to the file
fClient := dirClient.NewFileClient(fileName)

// create or open a local file where we can download the Azure File
file, err := os.Create(localFilePath)
// TODO: handle error
defer file.Close()

// Download the file
_, err = fClient.DownloadFile(context.TODO(), file, nil)
// TODO: handle error
```

##### Traverse a share

```
const shareName = "sample-share"

connectionString := "<connection_string>"

// Get reference to the share
shareClient, err := share.NewClientFromConnectionString(connectionString, shareName, nil)
// TODO: handle error

// Track the remaining directories to walk, starting from the root
var dirs []*directory.Client
dirs = append(dirs, shareClient.NewRootDirectoryClient())

for len(dirs) > 0 {
	dirClient := dirs[0]
	dirs = dirs[1:]

	// Get all the next directory's files and subdirectories
	pager := dirClient.NewListFilesAndDirectoriesPager(nil)
	for pager.More() {
		resp, err := pager.NextPage(context.TODO())
		// TODO: handle error

		for _, d := range resp.Segment.Directories {
			fmt.Println(*d.Name)
			// Keep walking down directories
			dirs = append(dirs, dirClient.NewSubdirectoryClient(*d.Name))
		}

		for _, f := range resp.Segment.Files {
			fmt.Println(*f.Name)
		}
	}
}
```

#### Troubleshooting

All File service operations will return an [*azcore.ResponseError](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azcore#ResponseError) on failure with a populated `ErrorCode` field. Many of these errors are recoverable. The [fileerror](https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/storage/azfile/fileerror/error_codes.go) package provides the possible Storage error codes along with various helper facilities for error handling.
```
const (
	connectionString = "<connection_string>"
	shareName        = "sample-share"
)

// create a client with the provided connection string
client, err := service.NewClientFromConnectionString(connectionString, nil)
// TODO: handle error

// try to delete the share, avoiding any potential race conditions with an in-progress or completed deletion
_, err = client.DeleteShare(context.TODO(), shareName, nil)
if fileerror.HasCode(err, fileerror.ShareBeingDeleted, fileerror.ShareNotFound) {
	// ignore any errors if the share is being deleted or already has been deleted
} else if err != nil {
	// TODO: some other error
}
```

#### Next steps

Get started with our [File samples](https://github.com/Azure/azure-sdk-for-go/raw/main/sdk/storage/azfile/file/examples_test.go). They contain complete examples of the above snippets and more.

#### Contributing

See the [Storage CONTRIBUTING.md](https://github.com/Azure/azure-sdk-for-go/blob/main/CONTRIBUTING.md) for details on building, testing, and contributing to this library.

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit [cla.microsoft.com](https://cla.microsoft.com).

This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or contact [<EMAIL>](mailto:<EMAIL>) with any additional questions or comments.

### Documentation

#### Constants

```
const (
	// EventUpload is used for logging events related to upload operation.
	EventUpload = exported.EventUpload
)
```
NRejections
cran
R
Package ‘NRejections’ October 12, 2022

Type Package
Title Metrics for Multiple Testing with Correlated Outcomes
Version 1.2.0
Author <NAME>, <NAME>
Maintainer <NAME> <<EMAIL>>
Description Implements methods in Mathur and VanderWeele (in preparation) to characterize global evidence strength across W correlated ordinary least squares (OLS) hypothesis tests. Specifically, uses resampling to estimate a null interval for the total number of rejections in, for example, 95% of samples generated with no associations (the global null), the excess hits (the difference between the observed number of rejections and the upper limit of the null interval), and a test of the global null based on the number of rejections.
LazyData true
License GPL-2
Imports stats, doParallel, matrixcalc, StepwiseTest, foreach, mvtnorm
RoxygenNote 7.1.1
Suggests testthat
NeedsCompilation no
Repository CRAN
Date/Publication 2020-07-09 13:50:02 UTC

R topics documented: adj_minP, adj_Wstep, cell_corr, corr_tests, dataset_result, fit_model, fix_input, get_crit, make_corr_mat, resample_resid, sim_data

adj_minP: Adjust p-values using minP

Description
Returns minP-adjusted p-values (single-step). See Westfall & Young (1993), pg. 48.

Usage
adj_minP(p, p.bt)

Arguments
p: Original dataset p-values (W-vector)
p.bt: Bootstrapped p-values (a W X B matrix)

References
Westfall, P. H., & Young, S. S. (1993). Resampling-based multiple testing: Examples and methods for p-value adjustment. Taylor & Francis Group.

Examples
# observed p-values for 3 tests
pvals = c(0.00233103655078803, 0.470366742594242, 0.00290278216035089)

# bootstrapped p-values for 5 resamples
p.bt = t(structure(c(0.308528665936264, 0.517319402377912, 0.686518314693482,
                     0.637306248855186, 0.106805510862352, 0.116705315041494,
                     0.0732076817175753, 0.770308936364482, 0.384405349738909,
                     0.0434358213611965, 0.41497067850141, 0.513471489744384,
                     0.571213377144122, 0.628054979652722, 0.490196884985226),
                   .Dim = c(5L, 3L)))

# adjust the p-values
adj_minP(p = pvals, p.bt = p.bt)

adj_Wstep: Return Wstep-adjusted p-values

Description
Returns p-values adjusted based on Westfall & Young (1993)'s step-down algorithm (see pg. 66-67).

Usage
adj_Wstep(p, p.bt)

Arguments
p: Original dataset p-values (W-vector)
p.bt: Bootstrapped p-values (a W X B matrix)

References
Westfall, P. H., & Young, S. S. (1993). Resampling-based multiple testing: Examples and methods for p-value adjustment. Taylor & Francis Group.

Examples
# observed p-values for 3 tests
pvals = c(0.00233103655078803, 0.470366742594242, 0.00290278216035089)

# bootstrapped p-values for 5 resamples
p.bt = t(structure(c(0.308528665936264, 0.517319402377912, 0.686518314693482,
                     0.637306248855186, 0.106805510862352, 0.116705315041494,
                     0.0732076817175753, 0.770308936364482, 0.384405349738909,
                     0.0434358213611965, 0.41497067850141, 0.513471489744384,
                     0.571213377144122, 0.628054979652722, 0.490196884985226),
                   .Dim = c(5L, 3L)))

# adjust the p-values
adj_Wstep(p = pvals, p.bt = p.bt)

cell_corr: Cell correlation for simulating data

Description
The user does not need to call this function. This internal function is called by make_corr_mat and populates a single cell. Assumes X1 is the covariate of interest.
Usage
cell_corr(vname.1, vname.2, rho.XX, rho.YY, rho.XY, nY, prop.corr = 1)

Arguments
vname.1: Quoted name of first variable
vname.2: Quoted name of second variable
rho.XX: Correlation between pairs of Xs
rho.YY: Correlation between all pairs of Ys
rho.XY: Correlation between pairs of X-Y (of non-null ones)
nY: Number of outcomes
prop.corr: Proportion of X-Y pairs that are non-null (non-nulls will be first prop.corr * nY pairs)

corr_tests: Global evidence strength across correlated tests

Description
This is the main wrapper function for the user to call. For an arbitrary number of outcome variables, regresses the outcome on an exposure of interest (X) and adjusted covariates (C). Returns the results of the original sample (statistics and inference corresponding to X for each model, along with the observed number of rejections), a 100*(1 - alpha.fam) percent null interval for the number of rejections in samples generated under the global null, the excess hits (the difference between the observed number of rejections and the upper null interval limit), and results of a test of the global null hypothesis at alpha.fam. The global test can be conducted based on the number of rejections or based on various FWER-control methods (see References).

Usage
corr_tests(
  d,
  X,
  C = NA,
  Ys,
  B = 2000,
  cores,
  alpha = 0.05,
  alpha.fam = 0.05,
  method = "nreject"
)

Arguments
d: Dataframe
X: Single quoted name of covariate of interest
C: Vector of quoted covariate names
Ys: Vector of quoted outcome names
B: Number of resamples to generate
cores: Number of cores to use for parallelization. Defaults to number available.
alpha: Alpha level for individual hypothesis tests
alpha.fam: Alpha level for global test and null interval
method: Which methods to report (ours, Westfall's two methods, Bonferroni, Holm, Romano)

Value
samp.res is a list containing the number of observed rejections (rej), the coefficient estimates of interest for each outcome model (bhats), their t-values (tvals), their uncorrected p-values at level alpha (pvals), and an N X W matrix of residuals for each model (resid).
nrej.bt contains the number of rejections in each bootstrap resample.
tvals.bt is a W X B matrix containing t-values for the resamples.
pvals.bt is a W X B matrix containing p-values for the resamples.
null.int contains the lower and upper limits of a 100*(1 - alpha.fam) percent null interval.
excess.hits is the difference between the observed rejections and the upper limit of the null interval.
global.test is a dataframe containing global test results for each user-specified method, including an indicator for whether the test rejects the global null at alpha.fam (reject), the p-value of the global test where possible (pval), and the critical value of the global test based on the number of rejections (crit).

References
Mathur, M. B., & VanderWeele, T. J. (in preparation). New metrics for multiple testing with correlated outcomes.
Romano, J. P., & Wolf, M. (2007). Control of generalized error rates in multiple testing. The Annals of Statistics, 1378-1408.
Westfall, P. H., & Young, S. S. (1993). Resampling-based multiple testing: Examples and methods for p-value adjustment. Taylor & Francis Group.
Examples
##### Example 1 #####
data(rock)
res = corr_tests(d = rock,
                 X = c("area"),
                 C = NA,
                 Ys = c("perm", "peri", "shape"),
                 method = "nreject")

# mean rejections in resamples
# should be close to 0.05 * 3 = 0.15
mean(as.numeric(res$nrej.bt))

##### Example 2 #####
cor = make_corr_mat(nX = 10,
                    nY = 20,
                    rho.XX = 0.10,
                    rho.YY = 0.5,
                    rho.XY = 0.1,
                    prop.corr = .4)
d = sim_data(n = 300, cor = cor)

# X1 is the covariate of interest, and all other X variables are adjusted
all.covars = names(d)[grep("X", names(d))]
C = all.covars[!all.covars == "X1"]

# may take 10 min to run
res = corr_tests(d,
                 X = "X1",
                 C = C,
                 Ys = names(d)[grep("Y", names(d))],
                 method = "nreject")

# look at the main results
res$null.int
res$excess.hits
res$global.test

dataset_result: Fit all models for a single dataset

Description
The user does not need to call this function. For a single dataset, fits separate OLS models for W outcomes with or without centering the test statistics to enforce the global null.

Usage
dataset_result(
  d,
  X,
  C = NA,
  Ys,
  alpha = 0.05,
  center.stats = TRUE,
  bhat.orig = NA
)

Arguments
d: Dataframe
X: Single quoted name of covariate of interest
C: Vector of quoted covariate names
Ys: W-vector of quoted outcome names
alpha: Alpha level for individual tests
center.stats: Should test statistics be centered by original-sample estimates to enforce the global null?
bhat.orig: Estimated coefficients for covariate of interest in original sample (W-vector). Can be left NA for non-centered stats.

Value
Returns a list containing the number of observed rejections (rej), the coefficient estimates of interest for each outcome model (bhats), their t-values (tvals), their uncorrected p-values at level alpha (pvals), and a matrix of residuals from each model (resid). The latter is used for residual resampling under the global null.

Examples
samp.res = dataset_result(X = "complaints",
                          C = c("privileges", "learning"),
                          Ys = c("rating", "raises"),
                          d = attitude,
                          center.stats = FALSE,
                          bhat.orig = NA,  # bhat.orig is a single value now for just the correct Y
                          alpha = 0.05)

fit_model: Fit OLS model for a single outcome

Description
The user does not need to call this function. Fits an OLS model for a single outcome with or without centering the test statistics to enforce the global null.

Usage
fit_model(
  X,
  C = NA,
  Y,
  Ys,
  d,
  center.stats = FALSE,
  bhat.orig = NA,
  alpha = 0.05
)

Arguments
X: Single quoted name of covariate of interest
C: Vector of quoted covariate names
Y: Quoted name of single outcome for which model should be fit
Ys: Vector of all quoted outcome names
d: Dataframe
center.stats: Should test statistics be centered by original-sample estimates to enforce the global null?
bhat.orig: Estimated coefficients for covariate of interest in original sample (W-vector). Can be left NA for non-centered stats.
alpha: Alpha level for individual tests

Examples
data(attitude)
fit_model(X = "complaints",
          C = c("privileges", "learning"),
          Y = "rating",
          Ys = c("rating", "raises"),
          d = attitude,
          center.stats = FALSE,
          bhat.orig = NA,
          alpha = 0.05)

fix_input: Fix bad user input

Description
The user does not need to call this function. Warns about and fixes bad user input: missing data on analysis variables or datasets containing extraneous variables.

Usage
fix_input(X, C, Ys, d)

Arguments
X: Single quoted name of covariate of interest
C: Vector of quoted covariate names
Ys: Vector of quoted outcome names
d: Dataframe

get_crit: Return ordered critical values for Wstep

Description
The user does not need to call this function.
This is an internal function for use by adj_minP and adj_Wstep.

Usage
get_crit(p.dat, col.p)

Arguments
p.dat: p-values from dataset (W-vector)
col.p: Column of resampled p-values (for the single p-value for which we're computing the critical values)

make_corr_mat: Makes correlation matrix to simulate data

Description
Simulates a dataset with a specified number of standard MVN covariates and outcomes with a specified correlation structure. If the function returns an error stating that the correlation matrix is not positive definite, try reducing the correlation magnitudes.

Usage
make_corr_mat(nX, nY, rho.XX, rho.YY, rho.XY, prop.corr = 1)

Arguments
nX: Number of covariates, including the one of interest
nY: Number of outcomes
rho.XX: Correlation between all pairs of Xs
rho.YY: Correlation between all pairs of Ys
rho.XY: Correlation between pairs of X-Y that are not null (see below)
prop.corr: Proportion of X-Y pairs that are non-null (non-nulls will be first prop.corr * nY pairs)

Examples
make_corr_mat(nX = 1,
              nY = 4,
              rho.XX = 0,
              rho.YY = 0.25,
              rho.XY = 0,
              prop.corr = 0.8)

resample_resid: Resample residuals for OLS

Description
Implements the residual resampling OLS algorithm described in Mathur & VanderWeele (in preparation). Specifically, the design matrix is fixed while the resampled outcomes are set equal to the original fitted values plus a vector of residuals sampled with replacement.

Usage
resample_resid(
  d,
  X,
  C = NA,
  Ys,
  alpha,
  resid,
  bhat.orig,
  B = 2000,
  cores = NULL
)

Arguments
d: Dataframe
X: Single quoted name of covariate of interest
C: Vector of quoted covariate names
Ys: Vector of quoted outcome names
alpha: Alpha level for individual tests
resid: Residuals from original sample (W X B matrix)
bhat.orig: Estimated coefficients for covariate of interest in original sample (W-vector)
B: Number of resamples to generate
cores: Number of cores available for parallelization

Value
Returns a list containing the number of rejections in each resample, a matrix of p-values in the resamples, and a matrix of t-statistics in the resamples.

References
Mathur, M. B., & VanderWeele, T. J. (in preparation). New metrics for multiple testing with correlated outcomes.

Examples
samp.res = dataset_result(X = "complaints",
                          C = c("privileges", "learning"),
                          Ys = c("rating", "raises"),
                          d = attitude,
                          center.stats = FALSE,
                          bhat.orig = NA,  # bhat.orig is a single value now for just the correct Y
                          alpha = 0.05)

resamps = resample_resid(X = "complaints",
                         C = c("privileges", "learning"),
                         Ys = c("rating", "raises"),
                         d = attitude,
                         alpha = 0.05,
                         resid = samp.res$resid,
                         bhat.orig = samp.res$b,
                         B = 20,
                         cores = 2)

sim_data: Simulate MVN data

Description
Simulates one dataset with standard MVN correlated covariates and outcomes.

Usage
sim_data(n, cor)

Arguments
n: Number of rows to simulate
cor: Correlation matrix (e.g., from make_corr_mat)

Examples
cor = make_corr_mat(nX = 5,
                    nY = 2,
                    rho.XX = -0.06,
                    rho.YY = 0.1,
                    rho.XY = -0.1,
                    prop.corr = 8/40)
d = sim_data(n = 50, cor = cor)
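As a cross-check on adj_minP described above, the single-step minP adjustment of Westfall & Young (1993) can be sketched by hand: each adjusted p-value is the proportion of resamples whose minimum bootstrapped p-value falls at or below the observed p-value. A minimal sketch with made-up numbers (not the package's internal implementation; results may differ from adj_minP in details such as tie handling):

# observed p-values for W = 3 tests (illustrative values)
p <- c(0.0023, 0.4704, 0.0029)
set.seed(1)
# a W x B matrix of bootstrapped p-values, as expected by adj_minP
p.bt <- matrix(runif(3 * 5), nrow = 3)
# minimum p-value within each resample
minP <- apply(p.bt, 2, min)
# single-step minP adjustment: proportion of resample minima at or below each observed p-value
adj <- sapply(p, function(pi) mean(minP <= pi))
adj
# compare with the package function
adj_minP(p = p, p.bt = p.bt)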
# Stock Analysis Engine¶

Build and tune investment algorithms for use with artificial intelligence (deep neural networks) on a distributed stack for running backtests using live pricing data on publicly traded companies, with automated datafeeds from IEX Cloud, Tradier and FinViz (includes: pricing, options, news, dividends, daily, intraday, screeners, statistics, financials, earnings, and more).

Kubernetes users, please refer to the Helm guide to get started and Metalnetes for running multiple Analysis Engines at the same time on a bare-metal server.

# Fetch the Latest Pricing Data¶

Supported fetch methods for getting pricing data:

* Command line using the `fetch` command
* IEX Cloud Fetch API
* Tradier Fetch API
* Docker-compose using the automated dataset collection job (see below)
* Kubernetes jobs: Fetch Intraday, Fetch Daily, Fetch Weekly, or Fetch from only Tradier

## Fetch using the Command Line¶

Here is a video showing how to fetch the latest pricing data for a ticker using the command line:

* Clone to `/opt/sa`

> git clone https://github.com/AlgoTraders/stock-analysis-engine.git /opt/sa
cd /opt/sa

* Create Docker Mounts and Start Redis and Minio

This will pull Redis and Minio docker images.

> ./compose/start.sh -a

* Fetch All Pricing Data

Fetch pricing data from IEX Cloud (requires an account and uses on-demand usage pricing) and Tradier (requires an account):

* Set the IEX_TOKEN environment variable to fetch from the IEX Cloud datafeeds:

> export IEX_TOKEN=YOUR_IEX_TOKEN

* Set the TD_TOKEN environment variable to fetch from the Tradier datafeeds:

> export TD_TOKEN=YOUR_TRADIER_TOKEN

* Fetch with:

> fetch -t SPY

* Fetch only from IEX with -g iex:

> fetch -t SPY -g iex
# and fetch from just Tradier with:
# fetch -t SPY -g td

* Fetch the previous 30 calendar days of intraday minute pricing data from IEX Cloud

> backfill-minute-data.sh TICKER
# backfill-minute-data.sh SPY

* Please refer to the documentation for more examples on controlling your pricing request usage (including how to run fetches for intraday, daily and weekly use cases)

* View the Compressed Pricing Data in Redis

> redis-cli keys "SPY_*"
redis-cli get "<key like SPY_2019-01-08_minute>"

# Run Backtests with the Algorithm Runner API¶

Run a backtest with the latest pricing data:

```
import analysis_engine.algo_runner as algo_runner
import analysis_engine.plot_trading_history as plot

runner = algo_runner.AlgoRunner('SPY')
# run the algorithm with the latest 200 minutes:
df = runner.latest()
print(df[['minute', 'close']].tail(5))
plot.plot_trading_history(
    title=(
        f'SPY - ${df["close"].iloc[-1]} at: '
        f'{df["minute"].iloc[-1]}'),
    df=df)
# start a full backtest with:
# runner.start()
```

Check out the backtest_with_runner.py script for a command line example of using the Algorithm Runner API to run and plot from an Algorithm backtest config file.

# Extract from Redis API¶

Once fetched, you can extract datasets from the redis cache with:

# Extract Latest Minute Pricing for Stocks and Options¶

## Extract Historical Data¶

Extract historical data with the `date` argument formatted `YYYY-MM-DD`.
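As a sketch of historical extraction, the `build_dataset_node` helper documented later under the Algo Runner API section can pull cached datasets for a given date; the shape of the returned dictionary is assumed here to match the dataset node structure shown in the backtest example:

```
# hedged sketch: extract cached SPY datasets from redis for a
# historical date; assumes the data was already fetched and that
# build_dataset_node returns the dataset node structure shown in
# the "Backtest an Algorithm" section later in this guide
import analysis_engine.build_dataset_node as build_node

node = build_node.build_dataset_node(
    ticker='SPY',
    datasets=['minute', 'daily'],
    date='2019-02-15')  # optional - YYYY-MM-DD historical date
for key in node.get('data', {}):
    print(f'extracted dataset: {key}')
```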
# Additional Extraction APIs¶

# Backups¶

Pricing data is automatically compressed in redis, and there is an example Kubernetes job for backing up all stored pricing data to AWS S3.

# Running the Full Stack Locally for Backtesting and Live Trading Analysis¶

While not required for backtesting, running the full stack is required for running algorithms during a live trading session.

Here is a video on how to deploy the full stack locally using docker compose and the commands from the video.

* Start Workers, Backtester, Pricing Data Collection, Jupyter, Redis and Minio

Now start the rest of the stack with the command below. This will pull the ~3.0 GB stock-analysis-engine docker image and start the workers, backtester, dataset collection and Jupyter image. It will start Redis and Minio if they are not running already.

> ./compose/start.sh

Mac OS X users, please note there is a known docker compose issue with network_mode: "host", so you may have issues trying to connect to your services.

* Check the Docker Containers

> docker ps -a

* View the dataset collection logs

> logs-dataset-collection.sh

* Tail the pricing engine worker logs (stop with `ctrl+c`)

> logs-workers.sh

* Verify Pricing Data is in Redis

> redis-cli keys "*"

* Optional - Automate pricing data collection with the automation-dataset-collection.yml docker compose file:

Depending on how fast you want to run intraday algorithms, you can use this docker compose job, the Kubernetes job, or the Fetch from Only Tradier Kubernetes job to collect the most recent pricing information.

> ./compose/start.sh -c

# Run a Custom Minute-by-Minute Intraday Algorithm Backtest and Plot the Trading History¶

With pricing data in redis, you can start running backtests a few ways:

* Comparing 3 Deep Neural Networks Trained to Predict a Stock's Closing Price in a Jupyter Notebook
* Build, run and tune within a Jupyter Notebook and plot the balance vs the stock's closing price while running
* Analyze and replay algorithm trading histories stored in s3 with this Jupyter Notebook
* Run with the command line backtest tool
* Advanced - building a standalone algorithm as a class for running trading analysis

# Running an Algorithm with Live Intraday Pricing Data¶

Here is a video showing how to run it:

The backtest command line tool uses an algorithm config dictionary to build multiple Williams %R indicators into an algorithm with a 10,000.00 USD starting balance. Once configured, the backtest iterates through each trading dataset and evaluates whether it should buy or sell based off the pricing data. After it finishes, the tool will display a chart showing the algorithm's balance and the stock's close price per minute using matplotlib and seaborn.

```
# this can take a few minutes to evaluate
# as more data is collected
# because each day has 390 rows to process
bt -t SPY -f /tmp/history.json
```

Note

The algorithm's trading history dataset provides many additional columns to review for tuning indicators and custom buy/sell rules. To reduce the time spent waiting on an algorithm to finish processing, you can save the entire trading history to disk with the `-f <save_to_file>` argument.
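To peek at those extra columns without re-running the backtest, here is a quick, hedged sketch; it assumes the file saved by `bt -f` is plain, uncompressed JSON (if it is not, use the `plot-history` tool shown below instead):

```
# hedged sketch: list the fields stored in a saved trading history;
# assumes /tmp/history.json was written by `bt -t SPY -f ...` and is
# plain JSON (the engine may wrap or compress the payload)
import json

with open('/tmp/history.json') as f:
    history = json.load(f)
if isinstance(history, list):
    print(sorted(history[0].keys()))  # columns of the first record
else:
    print(sorted(history.keys()))     # top-level keys of the payload
```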
This is helpful for debugging indicators, algorithms, dataset issues, and buy/sell rules as well.

# Run a Backtest using an External Algorithm Module and Config File¶

Run an algorithm backtest with a standalone algorithm class contained in a single python module file (which can even be outside the repository) using a config file on disk:

```
ticker=SPY
algo_config=<CUSTOM_ALGO_CONFIG_DIR>/minute_algo.json
algo_mod=<CUSTOM_ALGO_MODULE_DIR>/minute_algo.py
bt -t ${ticker} -c ${algo_config} -g ${algo_mod}
```

Or the config can use

```
"algo_path": "<PATH_TO_FILE>"
```

to set the path to an external algorithm module file, as sketched below.

```
bt -t ${ticker} -c ${algo_config}
```

Note

A standalone algorithm class must derive from the analysis_engine.algo.BaseAlgo class.
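Here is a hedged sketch of writing such a config file with `algo_path`; the key names follow the example algorithm config shown later in the "Backtest an Algorithm and Plot the Trading History" section, and the values are placeholders:

```
# hedged sketch: write a minimal algorithm config that points at an
# external algorithm module via "algo_path"; key names follow the
# example config later in this guide and the values are placeholders
import json

config = {
    'name': 'min-runner',
    'timeseries': 'minute',
    'ticker': 'SPY',
    'balance': 10000.0,
    'algo_path': '<CUSTOM_ALGO_MODULE_DIR>/minute_algo.py',
    'buy_rules': {'confidence': 75, 'min_indicators': 3},
    'sell_rules': {'confidence': 75, 'min_indicators': 3},
    'indicators': [],
}
with open('/tmp/minute_algo.json', 'w') as f:
    json.dump(config, f, indent=4)
# then run: bt -t SPY -c /tmp/minute_algo.json
```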
# Building Your Own Trading Algorithms¶

Beyond running backtests, the included engine supports running many algorithms and fetching data for both live trading and backtesting at the same time. As you start to use this approach, you will be generating lots of algorithm pricing datasets, history datasets and (coming soon) performance datasets for AI training.

Because algorithms utilize the same dataset structure, you can share ready-to-go datasets with a team and publish them to S3 for kicking off backtests using lambda functions, or just for archival and disaster recovery.

Backtests can use ready-to-go datasets out of S3, redis or a file.

The next section looks at how to build an algorithm-ready dataset from cached pricing data in redis.

# Run a Local Backtest and Publish Algorithm Trading History to S3¶

Run distributed across the engine workers with `-w`

Use this command to start a local backtest with the included algorithm config. This backtest will also generate a local algorithm-ready dataset saved to a file once it finishes.

* Define common values

> ticker=SPY
algo_config=tests/algo_configs/test_5_days_ahead.json
extract_loc=file:/tmp/algoready-SPY-latest.json
history_loc=file:/tmp/history-SPY-latest.json
load_loc=${extract_loc}

## Run Algo with Extraction and History Publishing¶

```
run-algo-history-to-file.sh -t ${ticker} -c ${algo_config} -e ${extract_loc} -p ${history_loc}
```

The pip package includes vprof for profiling an algorithm's performance (cpu, memory, profiler and heat map - not money-related), which was used to generate the project's cpu flame graph. Use it to profile your algorithm's code performance.

After generating the local algorithm-ready dataset (which can take some time), use this command to run another backtest using the file on disk:

```
dev_history_loc=file:/tmp/dev-history-${ticker}-latest.json
run-algo-history-to-file.sh -t ${ticker} -c ${algo_config} -l ${load_loc} -p ${dev_history_loc}
```

## View Buy and Sell Transactions¶

```
run-algo-history-to-file.sh -t ${ticker} -c ${algo_config} -l ${load_loc} -p ${dev_history_loc} | grep "TRADE"
```

# Plot Trading History Tools¶

## Plot Timeseries Trading History with High + Low + Open + Close¶

```
sa -t SPY -H ${dev_history_loc}
```

# Run and Publish Trading Performance Report for a Custom Algorithm¶

This will run a backtest over the past 60 days in order and run the standalone algorithm as a class example. Once done, it will publish the trading performance report to a file or minio (s3).

## Write the Trading Performance Report to a Local File¶

```
run-algo-report-to-file.sh -t SPY -b 60 -a /opt/sa/analysis_engine/mocks/example_algo_minute.py
# run-algo-report-to-file.sh -t <TICKER> -b <NUM_DAYS_BACK> -a <CUSTOM_ALGO_MODULE>
# run on specific date ranges with:
# -s <start date YYYY-MM-DD> -n <end date YYYY-MM-DD>
```

## Write the Trading Performance Report to Minio (s3)¶

```
run-algo-report-to-s3.sh -t SPY -b 60 -a /opt/sa/analysis_engine/mocks/example_algo_minute.py
```

# Run and Publish Trading History for a Custom Algorithm¶

This will run a full backtest across the past 60 days in order and run the example algorithm. Once done, it will publish the trading history to a file or minio (s3).

## Write the Trading History to a Local File¶

```
run-algo-history-to-file.sh -t SPY -b 60 -a /opt/sa/analysis_engine/mocks/example_algo_minute.py
```

## Write the Trading History to Minio (s3)¶

```
run-algo-history-to-s3.sh -t SPY -b 60 -a /opt/sa/analysis_engine/mocks/example_algo_minute.py
```

# Developing on AWS¶

If you are comfortable with AWS S3 usage charges, then you can run with just a redis server to develop and tune algorithms. This works for teams and for archiving datasets for disaster recovery.

## Environment Variables¶

Export these based off your AWS IAM credentials and S3 endpoint.

```
export AWS_ACCESS_KEY_ID="ACCESS"
export AWS_SECRET_ACCESS_KEY="SECRET"
export S3_ADDRESS=s3.us-east-1.amazonaws.com
```

# Extract and Publish to AWS S3¶

```
./tools/backup-datasets-on-s3.sh -t TICKER -q YOUR_BUCKET -k ${S3_ADDRESS} -r localhost:6379
```

# Publish to Custom AWS S3 Bucket and Key¶

```
extract_loc=s3://YOUR_BUCKET/TICKER-latest.json
./tools/backup-datasets-on-s3.sh -t TICKER -e ${extract_loc} -r localhost:6379
```

# Backtest a Custom Algorithm with a Dataset on AWS S3¶

```
backtest_loc=s3://YOUR_BUCKET/TICKER-latest.json
custom_algo_module=/opt/sa/analysis_engine/mocks/example_algo_minute.py
sa -t TICKER -a ${S3_ADDRESS} -r localhost:6379 -b ${backtest_loc} -g ${custom_algo_module}
```

# Fetching New Tradier Pricing Every Minute with Kubernetes¶

If you want to fetch and append new option pricing data from Tradier, you can use the included kubernetes job with a cron to pull new data every minute:

```
kubectl apply -f /opt/sa/k8/datasets/pull_tradier_per_minute.yml
```

# Run a Distributed 60-day Backtest on SPY and Publish the Trading Report, Trading History and Algorithm-Ready Dataset to S3¶

Publish backtests and live trading algorithms to the engine's workers for running many algorithms at the same time. Once done, the algorithm will publish results to s3, redis or a local file. By default, the included example below publishes all datasets into minio (s3) where they can be downloaded for offline backtests or restored back into redis.
Running distributed algorithmic workloads requires redis, minio, and the engine running.

# Run a Local 60-day Backtest on SPY and Publish Trading Report, Trading History and Algorithm-Ready Dataset to S3¶

Or manually with:

```
ticker=SPY
num_days_back=60
use_date=$(date +"%Y-%m-%d")
ds_id=$(uuidgen | sed -e 's/-//g')
ticker_dataset="${ticker}-${use_date}_${ds_id}.json"
echo "creating ${ticker} dataset: ${ticker_dataset}"
extract_loc="s3://algoready/${ticker_dataset}"
history_loc="s3://algohistory/${ticker_dataset}"
report_loc="s3://algoreport/${ticker_dataset}"
backtest_loc="s3://algoready/${ticker_dataset}"  # same as the extract_loc
processed_loc="s3://algoprocessed/${ticker_dataset}"  # archive it when done
start_date=$(date --date="${num_days_back} day ago" +"%Y-%m-%d")
echo ""
echo "extracting algorithm-ready dataset: ${extract_loc}"
echo "sa -t SPY -e ${extract_loc} -s ${start_date} -n ${use_date}"
sa -t SPY -e ${extract_loc} -s ${start_date} -n ${use_date}
echo ""
echo "running algo with: ${backtest_loc}"
echo "sa -t SPY -p ${history_loc} -o ${report_loc} -b ${backtest_loc} -e ${processed_loc} -s ${start_date} -n ${use_date}"
sa -t SPY -p ${history_loc} -o ${report_loc} -b ${backtest_loc} -e ${processed_loc} -s ${start_date} -n ${use_date}
```

# Jupyter on Kubernetes¶

This command runs Jupyter on an AntiNex Kubernetes cluster:

```
./k8/jupyter/run.sh ceph dev
```

# Kubernetes - Analyze and Tune Algorithms from a Trading History¶

With the Analysis Engine's Jupyter instance deployed, you can tune algorithms from a trading history using this notebook.

# Kubernetes Job - Export SPY Datasets and Publish to Minio¶

Manually run with the `ssheng` helper:

```
function ssheng() {
    pod_name=$(kubectl get po | grep ae-engine | grep Running | tail -1 | awk '{print $1}')
    echo "logging into ${pod_name}"
    kubectl exec -it ${pod_name} bash
}
ssheng

# once inside the container on kubernetes
source /opt/venv/bin/activate
sa -a minio-service:9000 -r redis-master:6379 -e s3://backups/SPY-$(date +"%Y-%m-%d") -t SPY
```

## View Algorithm-Ready Datasets¶

```
aws --endpoint-url http://localhost:9000 s3 ls s3://algoready
```

```
aws --endpoint-url http://localhost:9000 s3 ls s3://algohistory
```

```
aws --endpoint-url http://localhost:9000 s3 ls s3://algoreport
```

# Advanced - Running Algorithm Backtests Offline¶

With extracted Algorithm-Ready datasets in minio (s3), redis or a file, you can develop and tune your own algorithms offline without having redis, minio, the analysis engine, or jupyter running locally.

## Run an Offline Custom Algorithm Backtest with an Algorithm-Ready File¶

```
# extract with:
sa -t SPY -e file:/tmp/SPY-latest.json
sa -t SPY -b file:/tmp/SPY-latest.json -g /opt/sa/analysis_engine/mocks/example_algo_minute.py
```

## Run the Intraday Minute-by-Minute Algorithm and Publish the Algorithm-Ready Dataset to S3¶

To run the included standalone algorithm with the latest pricing datasets use:

```
sa -t SPY -g /opt/sa/analysis_engine/mocks/example_algo_minute.py -e s3://algoready/SPY-$(date +"%Y-%m-%d").json
```

And to debug an algorithm's historical trading performance, add the `-d` debug flag:

```
sa -d -t SPY -g /opt/sa/analysis_engine/mocks/example_algo_minute.py -e s3://algoready/SPY-$(date +"%Y-%m-%d").json
```

# Extract Algorithm-Ready Datasets¶

With pricing data cached in redis, you can extract algorithm-ready datasets and save them to a local file for offline historical backtesting analysis. This also serves as a local backup where all cached data for a single ticker is in a single local file.
## Extract an Algorithm-Ready Dataset from Redis and Save it to a File¶

```
sa -t SPY -e ~/SPY-latest.json
```

## Create a Daily Backup¶

```
sa -t SPY -e ~/SPY-$(date +"%Y-%m-%d").json
```

## Restore Backup to Redis¶

Use this command to cache missing pricing datasets so algorithms have the correct data ready-to-go before making buy and sell predictions.

By default, this command will not overwrite existing datasets in redis. It was built as a tool for merging redis pricing datasets after a VM restarted and pricing data was missing from the past few days (gaps in pricing data are bad for algorithms).

```
sa -t SPY -L ~/SPY-$(date +"%Y-%m-%d").json
```

## Fetch¶

With redis and minio running ( `./compose/start.sh` ), you can fetch, cache, archive and return all of the newest datasets for tickers:

```
from analysis_engine.fetch import fetch
d = fetch(ticker='SPY')
for k in d['SPY']:
    print(f'dataset key: {k}\nvalue {d["SPY"][k]}\n')
```

# Backfill Historical Minute Data from IEX Cloud¶

```
fetch -t TICKER -F PAST_DATE -g iex_min
# example:
# fetch -t SPY -F 2019-02-07 -g iex_min
```

Please refer to the Stock Analysis Intro Extracting Datasets Jupyter Notebook for the latest usage examples.

# Build¶

This section outlines how to get the Stock Analysis stack running locally with:

* Redis
* Minio (S3)
* Stock Analysis engine
* Jupyter

For background, the stack provides a data pipeline that automatically archives pricing data in minio (s3) and caches pricing data in redis. Once cached or archived, custom algorithms can use the pricing information to determine buy or sell conditions and track internal trading performance across historical backtests.

From a technical perspective, the engine uses Celery workers to process heavyweight, asynchronous tasks and scales horizontally with support for many transports and backends depending on where you need to run it. The stack deploys with Kubernetes or docker compose and supports publishing trading alerts to Slack.

With the stack already running, please refer to the Intro Stock Analysis using Jupyter Notebook for more getting started examples.

# Setting up Your Tradier Account with Docker Compose¶

Please set your Tradier account token in the docker environment files before starting the stack:

```
grep -r SETYOURTRADIERTOKENHERE compose/*
compose/envs/backtester.env:TD_TOKEN=SETYOURTRADIERTOKENHERE
compose/envs/workers.env:TD_TOKEN=SETYOURTRADIERTOKENHERE
```

Please export the variable for developing locally:

```
export TD_TOKEN=<TRADIER_ACCOUNT_TOKEN>
```

Note

Please restart the stack with `./compose/stop.sh` then `./compose/start.sh` after setting the Tradier token environment variable.

* Start Redis and Minio

The Redis and Minio containers are set up to save data to `/data` so files can survive a restart/reboot. On Mac OS X, please make sure to add `/data` (and `/data/sa/notebooks` for Jupyter notebooks) on the Docker Preferences -> File Sharing tab and let the docker daemon restart before trying to start the containers.
If not, you will likely see errors like:

> ERROR: for minio Cannot start service minio: b'Mounts denied: \r\nThe path /data/minio/data\r\nis not shared from OS X

Here is the command to manually create the shared volume directories:

> sudo mkdir -p -m 777 /data/redis/data /data/minio/data /data/sa/notebooks/dev /data/registry/auth /data/registry/data

> ./compose/start.sh

* Verify Redis and Minio are Running

> docker ps | grep -E "redis|minio"

# Running on Ubuntu and CentOS¶

* Install Packages

Ubuntu

> sudo apt-get install make cmake gcc python3-distutils python3-tk python3 python3-apport python3-certifi python3-dev python3-pip python3-venv python3.6 redis-tools virtualenv libcurl4-openssl-dev libssl-dev

CentOS 7

> sudo yum install cmake gcc gcc-c++ make tkinter curl-devel make cmake python-devel python-setuptools python-pip python-virtualenv redis python36u-libs python36u-devel python36u-pip python36u-tkinter python36u-setuptools python36u openssl-devel

* Install TA-Lib

Follow the TA-Lib install guide or use the included install tool as root:

> sudo su
/opt/sa/tools/linux-install-talib.sh
exit

* Create and Load Python 3 Virtual Environment

> virtualenv -p python3 /opt/venv
source /opt/venv/bin/activate
pip install --upgrade pip setuptools

# Running on Mac OS X¶

* Download Python 3.6

Python 3.7 is not supported by celery, so please ensure it is python 3.6.

* Install Packages

> brew install openssl pyenv-virtualenv redis freetype pkg-config gcc ta-lib

Mac OS X users, please note that the `keras`, `tensorflow` and `h5py` installs have not been debugged yet. Please let us know if you have issues setting up your environment; we likely have not hit the issue yet.

* Create and Load Python 3 Virtual Environment

> python3 -m venv /opt/venv
source /opt/venv/bin/activate
pip install --upgrade pip setuptools

* Install Certs

After hitting ssl verify errors, I found this stack overflow answer which shows there's an additional step for setting up python 3.6:

> /Applications/Python\ 3.6/Install\ Certificates.command

* Install PyCurl with OpenSSL

> PYCURL_SSL_LIBRARY=openssl LDFLAGS="-L/usr/local/opt/openssl/lib" CPPFLAGS="-I/usr/local/opt/openssl/include" pip install --no-cache-dir pycurl

# Start Workers¶

`./start-workers.sh`

# Get and Publish Pricing data¶

Please refer to the latest API docs in the repo: https://github.com/AlgoTraders/stock-analysis-engine/blob/master/analysis_engine/api_requests.py

# Fetch New Stock Datasets¶

Run the ticker analysis using the ./analysis_engine/scripts/fetch_new_stock_datasets.py:

## Collect all datasets for a Ticker or Symbol¶

Collect all datasets for the ticker SPY:

`fetch -t SPY`

## View the Engine Worker Logs¶

```
docker logs ae-workers
```

## Running Inside Docker Containers¶

If you are using an engine that is running inside a docker container, then `localhost` is probably not the correct network hostname for finding `redis` and `minio`. Please set these values as needed to publish and archive the dataset artifacts if you are using the integration or notebook integration docker compose files for deploying the analysis engine stack:

```
fetch -t SPY -a 0.0.0.0:9000 -r 0.0.0.0:6379
```

Warning

It is not recommended to share the same Redis server with multiple engine workers from inside docker containers and outside docker. This is because the `REDIS_ADDRESS` and `S3_ADDRESS` can only be one string value at the moment.
So if a job is picked up by the wrong engine (one which cannot connect to the correct Redis and Minio), it can lead to data not being cached or archived correctly and show up as connectivity failures.

## Detailed Usage Example¶

The fetch_new_stock_datasets.py script supports many parameters. Here is how to set it up if you have custom `redis` and `minio` deployments, like minio-service:9000 and redis-master:6379 on kubernetes:

* S3 authentication ( `-k` and `-s` )
* S3 endpoint ( `-a` )
* Redis endpoint ( `-r` )
* Custom S3 Key and Redis Key Name ( `-n` )

```
fetch -t SPY -g all -u pricing -k trexaccesskey -s trex123321 -a localhost:9000 -r localhost:6379 -m 0 -n SPY_demo -P 1 -N 1 -O 1 -U 1 -R 1
```

## Usage¶

Please refer to the fetch_new_stock_datasets.py script for the latest supported usage if some of these are out of date:

```
fetch -h
2019-02-11 01:55:33,791 - fetch - INFO - start - fetch_new_stock_datasets
usage: fetch_new_stock_datasets.py [-h] [-t TICKER] [-g FETCH_MODE]
                                   [-i TICKER_ID] [-e EXP_DATE_STR]
                                   [-l LOG_CONFIG_PATH] [-b BROKER_URL]
                                   [-B BACKEND_URL] [-k S3_ACCESS_KEY]
                                   [-s S3_SECRET_KEY] [-a S3_ADDRESS]
                                   [-S S3_SECURE] [-u S3_BUCKET_NAME]
                                   [-G S3_REGION_NAME] [-p REDIS_PASSWORD]
                                   [-r REDIS_ADDRESS] [-n KEYNAME]
                                   [-m REDIS_DB] [-x REDIS_EXPIRE]
                                   [-z STRIKE] [-c CONTRACT_TYPE]
                                   [-P GET_PRICING] [-N GET_NEWS]
                                   [-O GET_OPTIONS] [-U S3_ENABLED]
                                   [-R REDIS_ENABLED] [-A ANALYSIS_TYPE]
                                   [-L URLS] [-Z] [-d]

Download and store the latest stock pricing, news, and options chain data and
store it in Minio (S3) and Redis. Also includes support for getting FinViz
screener tickers

optional arguments:
  -h, --help          show this help message and exit
  -t TICKER           ticker
  -g FETCH_MODE       optional - fetch mode:
                      initial = default fetch from initial data feeds (IEX and Tradier),
                      intra = fetch intraday from IEX and Tradier,
                      daily = fetch daily from IEX,
                      weekly = fetch weekly from IEX,
                      all = fetch from all data feeds,
                      td = fetch from Tradier feeds only,
                      iex = fetch from IEX Cloud feeds only,
                      iex_min = fetch IEX Cloud intraday per-minute feed https://iexcloud.io/docs/api/#historical-prices
                      iex_day = fetch IEX Cloud daily feed https://iexcloud.io/docs/api/#historical-prices
                      iex_quote = fetch IEX Cloud quotes feed https://iexcloud.io/docs/api/#quote
                      iex_stats = fetch IEX Cloud key stats feed https://iexcloud.io/docs/api/#key-stats
                      iex_peers = fetch from just IEX Cloud peers feed https://iexcloud.io/docs/api/#peers
                      iex_news = fetch IEX Cloud news feed https://iexcloud.io/docs/api/#news
                      iex_fin = fetch IEX Cloud financials feed https://iexcloud.io/docs/api/#financials
                      iex_earn = fetch from just IEX Cloud earnings feed https://iexcloud.io/docs/api/#earnings
                      iex_div = fetch from just IEX Cloud dividends feed https://iexcloud.io/docs/api/#dividends
                      iex_comp = fetch from just IEX Cloud company feed https://iexcloud.io/docs/api/#company
  -i TICKER_ID        optional - ticker id not used without a database
  -e EXP_DATE_STR     optional - options expiration date
  -l LOG_CONFIG_PATH  optional - path to the log config file
  -b BROKER_URL       optional - broker url for Celery
  -B BACKEND_URL      optional - backend url for Celery
  -k S3_ACCESS_KEY    optional - s3 access key
  -s S3_SECRET_KEY    optional - s3 secret key
  -a S3_ADDRESS       optional - s3 address format: <host:port>
  -S S3_SECURE        optional - s3 ssl or not
  -u S3_BUCKET_NAME   optional - s3 bucket name
  -G S3_REGION_NAME   optional - s3 region name
  -p REDIS_PASSWORD   optional - redis_password
  -r REDIS_ADDRESS    optional - redis_address format: <host:port>
  -n KEYNAME          optional - redis and s3 key name
  -m REDIS_DB         optional - redis database number (0 by default)
  -x REDIS_EXPIRE     optional - redis expiration in seconds
  -z STRIKE           optional - strike price
  -c CONTRACT_TYPE    optional - contract type "C" for calls "P" for puts
  -P GET_PRICING      optional - get pricing data if "1", "0" disabled
  -N GET_NEWS         optional - get news data if "1", "0" disabled
  -O GET_OPTIONS      optional - get options data if "1", "0" disabled
  -U S3_ENABLED       optional - s3 enabled for publishing if "1", "0" disabled
  -R REDIS_ENABLED    optional - redis enabled for publishing if "1", "0" disabled
  -A ANALYSIS_TYPE    optional - run an analysis - supported modes: scn
  -L URLS             optional - screener urls to pull tickers for analysis
  -Z                  disable run without an engine for local testing and demos
  -d                  debug
```

# Run FinViz Screener-driven Analysis¶

This is a work in progress, but the screener-driven workflow is:

* Convert FinViz screeners into a list of tickers and a `pandas.DataFrame` from each ticker's html row
* Build a unique list of tickers
* Pull datasets for each ticker
* Run sale-side processing - coming soon
* Run buy-side processing - coming soon
* Issue alerts to slack - coming soon

Here is how to run an analysis on all unique tickers found in two FinViz screener urls: https://finviz.com/screener.ashx?v=111&f=cap_midunder,exch_nyse,fa_div_o6,idx_sp500&ft=4 and https://finviz.com/screener.ashx?v=111&f=cap_midunder,exch_nyse,fa_div_o8,idx_sp500&ft=4

```
fetch -A scn -L 'https://finviz.com/screener.ashx?v=111&f=cap_midunder,exch_nyse,fa_div_o6,idx_sp500&ft=4|https://finviz.com/screener.ashx?v=111&f=cap_midunder,exch_nyse,fa_div_o8,idx_sp500&ft=4'
```

# Run Publish from an Existing S3 Key to Redis¶

* Upload Integration Test Key to S3

> export INT_TESTS=1
python -m unittest tests.test_publish_pricing_update.TestPublishPricingData.test_integration_s3_upload

* Confirm the Integration Test Key is in S3

* Run an analysis with an existing S3 key using ./analysis_engine/scripts/publish_from_s3_to_redis.py

> publish_from_s3_to_redis.py -t SPY -u integration-tests -k trexaccesskey -s trex123321 -a localhost:9000 -r localhost:6379 -m 0 -n integration-test-v1

* Confirm the Key is now in Redis

> ./tools/redis-cli.sh
127.0.0.1:6379> keys *
 1) "SPY_demo_daily"
 2) "SPY_demo_minute"
 3) "SPY_demo_company"
 4) "integration-test-v1"
 5) "SPY_demo_stats"
 6) "SPY_demo"
 7) "SPY_demo_quote"
 8) "SPY_demo_peers"
 9) "SPY_demo_dividends"
10) "SPY_demo_news1"
11) "SPY_demo_news"
12) "SPY_demo_options"
13) "SPY_demo_pricing"
127.0.0.1:6379>

* Run an analysis with an existing S3 key using ./analysis_engine/scripts/publish_ticker_aggregate_from_s3.py

> publish_ticker_aggregate_from_s3.py -t SPY -k trexaccesskey -s trex123321 -a localhost:9000 -r localhost:6379 -m 0 -u pricing -c compileddatasets

* Confirm the aggregated Ticker is now in Redis

> ./tools/redis-cli.sh
127.0.0.1:6379> keys *latest*
1) "SPY_latest"
127.0.0.1:6379>

# View Archives in S3 - Minio¶

Here's a screenshot showing the stock market dataset archives created while running on the 3-node Kubernetes cluster for distributed AI predictions: http://localhost:9000/minio/pricing/

Login

* username: `trexaccesskey`
* password: `trex123321`

Using the AWS CLI to List the Pricing Bucket

Please refer to the official steps for using the `awscli` pip with minio: https://docs.minio.io/docs/aws-cli-with-minio.html

* Export Credentials

> export AWS_SECRET_ACCESS_KEY=trex123321
export AWS_ACCESS_KEY_ID=trexaccesskey

* List Buckets

> aws --endpoint-url http://localhost:9000 s3 ls
2018-10-02 22:24:06 company
2018-10-02 22:24:02 daily
2018-10-02 22:24:06 dividends
2018-10-02 22:33:15 integration-tests
2018-10-02 22:24:03 minute
2018-10-02 22:24:05 news
2018-10-02 22:24:04 peers
2018-10-02 22:24:06 pricing
2018-10-02 22:24:04 stats
2018-10-02 22:24:04 quote

* List Pricing Bucket Contents

> aws --endpoint-url http://localhost:9000 s3 ls s3://pricing

* Get the Latest SPY Pricing Key

> aws --endpoint-url http://localhost:9000 s3 ls s3://pricing | grep -i spy_demo
SPY_demo

# View Caches in Redis¶

```
./tools/redis-cli.sh
127.0.0.1:6379> keys *
1) "SPY_demo"
```

# Jupyter¶

You can run the Jupyter notebooks by starting the notebook-integration.yml stack with the command:

On Mac OS X, Jupyter does not work with the Analysis Engine at the moment. PRs are welcome, but we have not figured out how to share the notebooks and access redis and minio due to the known docker compose issue with network_mode: "host" on Mac OS X.

For Linux users, the Jupyter container hosts the Stock Analysis Intro notebook at the url (default login password is `admin`):

# Jupyter Presentations with RISE¶

The docker container comes with RISE installed for running notebook presentations from a browser. Here's the button on the notebook for starting the web presentation:

# Distributed Automation with Docker¶

Automation requires the integration stack running (redis + minio + engine) and docker-compose.

# Dataset Collection¶

Start automated dataset collection with docker compose:

# Datasets in Redis¶

After running the dataset collection container, the datasets should be auto-cached in Minio (http://localhost:9000/minio/pricing/) and Redis:

```
./tools/redis-cli.sh
127.0.0.1:6379> keys *
```

# Publishing to Slack¶

Please refer to the Publish Stock Alerts to Slack Jupyter Notebook for the latest usage examples.

## Publish FinViz Screener Tickers to Slack¶

Here is sample code for trying out the Slack integration.

```
import analysis_engine.finviz.fetch_api as fv
from analysis_engine.send_to_slack import post_df

# simple NYSE Dow Jones Index Financials with a P/E above 5 screener url
url = 'https://finviz.com/screener.ashx?v=111&f=exch_nyse,fa_pe_o5,idx_dji,sec_financial&ft=4'
res = fv.fetch_tickers_from_screener(url=url)
df = res['rec']['data']

# SLACK_FINVIZ_COLUMNS is a list of the FinViz screener column names
# to include in the Slack post - define or import it before running
# please make sure the SLACK_WEBHOOK environment variable is set correctly:
post_df(
    df=df[SLACK_FINVIZ_COLUMNS],
    columns=SLACK_FINVIZ_COLUMNS)
```

# Running on Kubernetes¶

## Kubernetes Deployments - Engine¶

Deploy the engine with:

```
kubectl apply -f ./k8/engine/deployment.yml
```

## Kubernetes Job - Dataset Collection¶

Start the dataset collection job with:

```
kubectl apply -f ./k8/datasets/job.yml
```

## Kubernetes Deployments - Jupyter¶

Deploy Jupyter to a Kubernetes cluster with:

`./k8/jupyter/run.sh`

# Kubernetes with a Private Docker Registry¶

You can deploy a private docker registry that can be used to pull images from outside a kubernetes cluster with the following steps:

* Deploy Docker Registry

> ./compose/start.sh -r

* Configure Kubernetes hosts and other docker daemons for insecure registries

> cat /etc/docker/daemon.json
{
    "insecure-registries": [
        "<public ip address/fqdn for host running the registry container>:5000"
    ]
}

* Restart all Docker daemons

> sudo systemctl restart docker

* Login to the Docker Registry from all Kubernetes hosts and other daemons that need access to the registry

Change the default registry password by either changing the `./compose/start.sh` file that uses `trex` and `123321` as the credentials, or by editing the volume mounted file `/data/registry/auth/htpasswd`.
Here is how to find the registry's default login set up:

> grep docker compose/start.sh | grep htpass

> docker login <public ip address/fqdn for host running the registry container>:5000

* Setup Kubernetes Secrets for All Credentials

Set each of the fields according to your own buckets, docker registry and Tradier account token:

> cat /opt/sa/k8/secrets/secrets.yml | grep SETYOUR
aws_access_key_id: SETYOURENCODEDAWSACCESSKEYID
aws_secret_access_key: SETYOURENCODEDAWSSECRETACCESSKEY
.dockerconfigjson: SETYOURDOCKERCREDS
td_token: SETYOURTDTOKEN

* Deploy Kubernetes Secrets

> kubectl apply -f /opt/sa/k8/secrets/secrets.yml

* Confirm Kubernetes Secrets are Deployed

> kubectl get secrets ae.docker.creds
NAME              TYPE                             DATA   AGE
ae.docker.creds   kubernetes.io/dockerconfigjson   1      4d1h

> kubectl get secrets | grep "ae\."
ae.docker.creds   kubernetes.io/dockerconfigjson   1      4d1h
ae.k8.aws.s3      Opaque                           3      4d1h
ae.k8.minio.s3    Opaque                           3      4d1h
ae.k8.tradier     Opaque                           4      4d1h

* Configure Kubernetes Deployments for using an External Private Docker Registry

Add these lines to a Kubernetes deployment yaml file based off your set up:

> imagePullSecrets:
- name: ae.docker.creds
containers:
- image: <public ip address/fqdn for host running the registry container>:5000/my-own-stock-ae:latest
  imagePullPolicy: Always

Tip

After spending a sad amount of time debugging, please make sure to delete pods before applying new ones that are pulling docker images from an external registry. After running `kubectl delete pod <name>`, you can apply/create the pod to get the latest image running.

# Testing¶

To show debug and trace logging, please export `SHARED_LOG_CFG` to a debug logger json file. To turn on debugging for this library, you can export this variable to the repo's included file with the command:

```
export SHARED_LOG_CFG=/opt/sa/analysis_engine/log/debug-logging.json
```

Run all tests with `py.test --maxfail=1`

Run a single test case, for example `py.test tests/test_base_algo.py::TestBaseAlgo::test_sample_algo_code_in_docstring` (pytest accepts `file::Class::test` node ids for the unittest cases listed below)

## Test Publishing¶

```
python -m unittest tests.test_publish_pricing_update.TestPublishPricingData.test_success_s3_upload
```

```
python -m unittest tests.test_publish_from_s3_to_redis.TestPublishFromS3ToRedis.test_success_publish_from_s3_to_redis
```

## Redis Cache Set¶

```
python -m unittest tests.test_publish_pricing_update.TestPublishPricingData.test_success_redis_set
```

## Prepare Dataset¶

## Test Algo Saving All Input Datasets to File¶

```
python -m unittest tests.test_base_algo.TestBaseAlgo.test_algo_can_save_all_input_datasets_to_file
```

# End-to-End Integration Testing¶

Start all the containers for full end-to-end integration testing with real docker containers with the script:

```
./compose/start.sh -a
```

Verify Containers are running:

```
docker ps | grep -E "stock-analysis|redis|minio"
```

Stop the End-to-End Stack:

```
./compose/stop.sh
./compose/stop.sh -s
```

# Integration UnitTests¶

Please start redis and minio before running these tests.
Please enable integration tests with `export INT_TESTS=1`

```
python -m unittest tests.test_publish_pricing_update.TestPublishPricingData.test_integration_s3_upload
```

## IEX Test - Fetching All Datasets¶

## IEX Test - Fetch Minute¶

```
python -m unittest tests.test_iex_fetch_data.TestIEXFetchData.test_integration_fetch_minute
```

## IEX Test - Fetch Stats¶

```
python -m unittest tests.test_iex_fetch_data.TestIEXFetchData.test_integration_fetch_stats
```

## IEX Test - Fetch Peers¶

```
python -m unittest tests.test_iex_fetch_data.TestIEXFetchData.test_integration_fetch_peers
```

## IEX Test - Fetch News¶

## IEX Test - Fetch Earnings¶

## IEX Test - Fetch Dividends¶

## IEX Test - Fetch Company¶

```
python -m unittest tests.test_iex_fetch_data.TestIEXFetchData.test_integration_fetch_company
```

## IEX Test - Fetch Financials Helper¶

## IEX Test - Extract Daily Dataset¶

```
python -m unittest tests.test_iex_dataset_extraction.TestIEXDatasetExtraction.test_integration_extract_daily_dataset
```

## IEX Test - Extract Minute Dataset¶

## IEX Test - Extract Quote Dataset¶

```
python -m unittest tests.test_iex_dataset_extraction.TestIEXDatasetExtraction.test_integration_extract_quote_dataset
```

## IEX Test - Extract Stats Dataset¶

```
python -m unittest tests.test_iex_dataset_extraction.TestIEXDatasetExtraction.test_integration_extract_stats_dataset
```

## IEX Test - Extract Peers Dataset¶

```
python -m unittest tests.test_iex_dataset_extraction.TestIEXDatasetExtraction.test_integration_extract_peers_dataset
```

## IEX Test - Extract News Dataset¶

```
python -m unittest tests.test_iex_dataset_extraction.TestIEXDatasetExtraction.test_integration_extract_news_dataset
```

## IEX Test - Extract Earnings Dataset¶

```
python -m unittest tests.test_iex_dataset_extraction.TestIEXDatasetExtraction.test_integration_extract_earnings_dataset
```

## IEX Test - Extract Dividends Dataset¶

```
python -m unittest tests.test_iex_dataset_extraction.TestIEXDatasetExtraction.test_integration_extract_dividends_dataset
```

## IEX Test - Extract Company Dataset¶

```
python -m unittest tests.test_iex_dataset_extraction.TestIEXDatasetExtraction.test_integration_extract_company_dataset
```

## FinViz Test - Fetch Tickers from Screener URL¶

```
python -m unittest tests.test_finviz_fetch_api.TestFinVizFetchAPI.test_integration_test_fetch_tickers_from_screener
```

or with code:

```
import analysis_engine.finviz.fetch_api as fv

url = 'https://finviz.com/screener.ashx?v=111&f=exch_nyse&ft=4&r=41'
res = fv.fetch_tickers_from_screener(url=url)
print(res)
```

# Algorithm Testing¶

## Algorithm Test - Publish Input Dataset to a File¶

```
python -m unittest tests.test_base_algo.TestBaseAlgo.test_integration_algo_publish_input_dataset_to_file
```

## Algorithm Test - Load Dataset From a File¶

```
python -m unittest tests.test_base_algo.TestBaseAlgo.test_integration_algo_load_from_file
```

## Algorithm Test - Publish Input Dataset to S3 and Load¶

```
python -m unittest tests.test_base_algo.TestBaseAlgo.test_integration_algo_publish_input_s3_and_load
```

## Algorithm Test - Extract Algorithm-Ready Dataset from Redis DB 0 and Load into Redis DB 1¶

Copying datasets between redis databases is part of the integration tests.
Run it with:

```
python -m unittest tests.test_base_algo.TestBaseAlgo.test_integration_algo_restore_ready_back_to_redis
```

## Algorithm Test - Test the Docs Example¶

```
python -m unittest tests.test_base_algo.TestBaseAlgo.test_sample_algo_code_in_docstring
```

# Prepare a Dataset¶

```
ticker=SPY
sa -t ${ticker} -f -o ${ticker}_latest_v1 -j prepared -u pricing -k trexaccesskey -s trex123321 -a localhost:9000 -r localhost:6379 -m 0 -n ${ticker}_demo
```

# Debugging¶

## Test Algos¶

The fastest way to run algos is to specify a 1-day range:

```
sa -t SPY -s $(date +"%Y-%m-%d") -n $(date +"%Y-%m-%d")
```

## Test Tasks¶

Most of the scripts support running without Celery workers. To run without workers in a synchronous mode use the commands:

```
ticker=SPY
publish_from_s3_to_redis.py -t ${ticker} -u integration-tests -k trexaccesskey -s trex123321 -a localhost:9000 -r localhost:6379 -m 0 -n integration-test-v1
sa -t ${ticker} -f -o ${ticker}_latest_v1 -j prepared -u pricing -k trexaccesskey -s trex123321 -a localhost:9000 -r localhost:6379 -m 0 -n ${ticker}_demo
fetch -t ${ticker} -g all -e 2018-10-19 -u pricing -k trexaccesskey -s trex123321 -a localhost:9000 -r localhost:6379 -m 0 -n ${ticker}_demo -P 1 -N 1 -O 1 -U 1 -R 1
fetch -A scn -L 'https://finviz.com/screener.ashx?v=111&f=cap_midunder,exch_nyse,fa_div_o6,idx_sp500&ft=4|https://finviz.com/screener.ashx?v=111&f=cap_midunder,exch_nyse,fa_div_o8,idx_sp500&ft=4'
```

## Linting and Other Tools¶

* Linting

> flake8 .
pycodestyle .

* Sphinx Docs

> cd docs
make html

* Docker Admin - Pull Latest

> docker pull jayjohnson/stock-analysis-jupyter && docker pull jayjohnson/stock-analysis-engine

* Back up the Docker Redis Database

> /opt/sa/tools/backup-redis.sh

View local redis backups with:

> ls -hlrt /opt/sa/tests/datasets/redis/redis-0-backup-*.rdb

* Export the Kubernetes Redis Cluster's Database to the Local Redis Container

* Stop the redis docker container:

> ./compose/stop.sh

* Archive the previous redis database

> cp /data/redis/data/dump.rdb /data/redis/data/archive.rdb

* Save the Redis database in the Cluster

> kubectl exec -it redis-master-0 redis-cli save

* Export the saved redis database file inside the pod to the default docker redis container's local file

> kubectl cp redis-master-0:/bitnami/redis/data/dump.rdb /data/redis/data/dump.rdb

* Restart the stack

Redis takes a few seconds to load all the data into memory, so this can take a few seconds.

> ./compose/start.sh

# Deploy Fork Feature Branch to Running Containers¶

When developing features that impact multiple containers, you can deploy your own feature branch without redownloading or manually building docker images. With the containers running, you can deploy your own fork's branch as a new image (which is automatically saved as a new docker container image).
## Deploy a public or private fork into running containers¶

```
./tools/update-stack.sh <git fork https uri> <optional - branch name (master by default)> <optional - fork repo name>
```

Example:

```
./tools/update-stack.sh https://github.com/jay-johnson/stock-analysis-engine.git timeseries-charts jay
```

## Restore the containers back to the Master¶

Restore the container builds back to the `master` branch from https://github.com/AlgoTraders/stock-analysis-engine with:

```
./tools/update-stack.sh https://github.com/AlgoTraders/stock-analysis-engine.git master upstream
```

## Deploy Fork Alias¶

Here's a bashrc alias for quickly building containers from a fork's feature branch:

```
alias bd='pushd /opt/sa >> /dev/null && source /opt/venv/bin/activate && /opt/sa/tools/update-stack.sh https://github.com/jay-johnson/stock-analysis-engine.git timeseries-charts jay && popd >> /dev/null'
```

## Debug Fetching IEX Data¶

```
ticker="SPY"
use_date=$(date +"%Y-%m-%d")
source /opt/venv/bin/activate
exp_date=$(/opt/sa/analysis_engine/scripts/print_next_expiration_date.py)
fetch -t ${ticker} -g iex -n ${ticker}_${use_date} -e ${exp_date} -Z
```

## Failed Fetching Tradier Data¶

Please export a valid `TD_TOKEN` in your `compose/envs/*.env` docker compose files if you see the following errors trying to pull pricing data from Tradier:

```
2019-01-09 00:16:47,148 - analysis_engine.td.fetch_api - INFO - failed to get put with response=<Response [401]> code=401 text=Invalid Access Token
2019-01-09 00:16:47,151 - analysis_engine.td.get_data - CRITICAL - ticker=TSLA-tdputs - ticker=TSLA field=10001 failed fetch_data with ex='date'
2019-01-09 00:16:47,151 - analysis_engine.work_tasks.get_new_pricing_data - CRITICAL - ticker=TSLA failed TD ticker=TSLA field=tdputs status=ERR err=ticker=TSLA-tdputs - ticker=TSLA field=10001 failed fetch_data with ex='date'
```

# FAQ¶

## Can I live trade with my algorithms?¶

Not yet. Please reach out for help on how to do this or if you have a platform you like.

## Can I publish algorithm trade notifications?¶

Right now, algorithms only support publishing to a private Slack channel for sharing with a group when an algorithm finds a buy/sell trade to execute. Reach out if you have a custom chat client app or service you think should be supported.

# Terms of Service¶

# Data Attribution¶

This repository currently uses Tradier and IEX for pricing data. Usage of these feeds requires the following agreements in the terms of service.

# IEX Cloud¶

* Link to IEX's Terms of Use
* IEX Real-Time Price is used with this repository
* IEX Cloud is a data source with the additional data attribution instructions available on https://iextrading.com/developer/docs/#attribution

# Adding Celery Tasks¶

If you want to add a new Celery task, add the file path to WORKER_TASKS at these locations:

* compose/envs/local.env
* compose/envs/.env
* analysis_engine/work_tasks/consts.py

# Algo Runner API¶

A class for running backtests or the latest pricing data, with automated publishing of the `Trading History` to S3.
`analysis_engine.algo_runner.AlgoRunner` (ticker, algo_config=None, start_date=None, end_date=None, history_loc=None, predictions_loc=None, run_on_engine=False, verbose_algo=False, verbose_processor=False, verbose_indicators=False, **kwargs)[source]¶

Run an algorithm backtest, or run with the latest pricing data, and publish the compressed trading history to s3 where it can be used to train AI.

Full Backtest

> import analysis_engine.algo_runner as algo_runner
runner = algo_runner.AlgoRunner('SPY')
runner.start()

Run Algorithm with Latest Pricing Data

> import analysis_engine.algo_runner as algo_runner
import analysis_engine.plot_trading_history as plot
ticker = 'SPY'
runner = algo_runner.AlgoRunner(ticker)
# run the algorithm with the latest 200 minutes:
df = runner.latest()
print(df[['minute', 'close']].tail(5))
plot.plot_trading_history(
    title=(
        f'{ticker} - ${df["close"].iloc[-1]} '
        f'at: {df["minute"].iloc[-1]}'),
    df=df)

`determine_latest_times_in_history`¶

Determine the latest minute or day in the pricing dataset and convert the `date` and `minute` columns to `datetime` objects.

`latest` (date_str=None, start_row=-200, extract_iex=True, extract_yahoo=False, extract_td=True, verbose=False, **kwargs)[source]¶

Run the algorithm with the latest pricing data. Also supports running a backtest for a historical date in the pricing history (format `YYYY-MM-DD` ).

Parameters:

* date_str – optional - string start date `YYYY-MM-DD`; default is the latest close date
* start_row – negative number of rows back from the end of the list in the data; the default of `-200` means the algorithm will process the latest 200 rows in the minute dataset
* extract_iex – bool flag for extracting from `IEX`
* extract_yahoo – bool flag for extracting from `Yahoo`, which is disabled as of 1/2019
* extract_td – bool flag for extracting from `Tradier`
* verbose – bool flag for logs
* kwargs – keyword arg dict

`load_trading_history` (s3_access_key=None, s3_secret_key=None, s3_address=None, s3_region=None, s3_bucket=None, s3_key=None, s3_secure=7, **kwargs)[source]¶

Helper for loading an algorithm `Trading History` from S3

Parameters:

* s3_access_key – access key
* s3_secret_key – secret
* s3_address – address
* s3_region – region
* s3_bucket – bucket
* s3_key – key
* s3_secure – secure flag
* kwargs – support for keyword arg dict

`publish_trading_history` (records_for_history, pt_s3_access_key=None, pt_s3_secret_key=None, pt_s3_address=None, pt_s3_region=None, pt_s3_bucket=None, pt_s3_key=None, pt_s3_secure=7, **kwargs)[source]¶

Helper for publishing a trading history to another S3 service like AWS

Parameters:

* records_for_history – list of dictionaries for the history file
* pt_s3_access_key – access key
* pt_s3_secret_key – secret
* pt_s3_address – address
* pt_s3_region – region
* pt_s3_bucket – bucket
* pt_s3_key – key
* pt_s3_secure – secure flag
* kwargs – support for keyword arg dict
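Putting the documented `latest` and `publish_trading_history` calls together, here is a hedged sketch of pushing a finished run's records to another S3 service; the bucket and key names are placeholders, and the `pt_s3_*` access-key arguments may need to be set explicitly depending on your environment:

```
# hedged sketch: run the latest minutes and publish the records to
# another S3 service; 'algohistory' and the key name are placeholders
import analysis_engine.algo_runner as algo_runner

runner = algo_runner.AlgoRunner('SPY')
df = runner.latest()
runner.publish_trading_history(
    records_for_history=df.to_dict('records'),  # list of dictionaries
    pt_s3_bucket='algohistory',
    pt_s3_key='SPY-latest-history.json')
```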
## Build an Algorithm Backtest Dictionary¶

Build a dictionary by extracting all required pricing datasets for the algorithm's indicators out of Redis. This dictionary should be passed to an algorithm's `handle_data` method like:

```
algo.handle_data(build_dataset_node())
```

`analysis_engine.build_dataset_node.build_dataset_node` (ticker, datasets, date=None, service_dict=None, log_label=None, redis_enabled=True, redis_address=None, redis_db=None, redis_password=None, redis_expire=None, redis_key=None, s3_enabled=True, s3_address=None, s3_bucket=None, s3_access_key=None, s3_secret_key=None, s3_region_name=None, s3_secure=False, s3_key=None, verbose=False)[source]¶

Helper for building a dictionary of cached datasets from redis. The datasets should be built from the algorithm config's indicator `uses_data` fields, which default to `minute` data if not set.

Parameters:

* ticker – string ticker
* datasets – list of string dataset names to extract from redis
* date – optional - string datetime formatted `YYYY-MM-DD` (default is last trading close date)
* service_dict – optional - dictionary for all service connectivity to Redis and Minio; if not set, the arguments for all `s3_*` and `redis_*` will be used to look up data in Redis and Minio

(Optional) Redis and Minio connectivity parameters:

* redis_enabled – bool - toggle for auto-caching all datasets in Redis (default is `True` )
* redis_address – Redis connection string format is `host:port` (default is `localhost:6379` )
* redis_db – Redis db to use (default is `0` )
* redis_password – optional - Redis password (default is `None` )
* redis_expire – optional - Redis expire value (default is `None` )
* redis_key – optional - redis key not used (default is `None` )
* s3_enabled – bool - toggle for turning on/off Minio or AWS S3 (default is `True` )
* s3_address – Minio S3 connection string address format is `host:port` (default is `localhost:9000` )
* s3_bucket – S3 Bucket for storing the artifacts (default is `dev` ) which should be viewable on a browser: http://localhost:9000/minio/dev/
* s3_access_key – S3 Access key (default is `trexaccesskey` )
* s3_secret_key – S3 Secret key (default is `trex123321` )
* s3_region_name – S3 region name (default is `us-east-1` )
* s3_secure – Transmit using tls encryption (default is `False` )
* s3_key – optional s3 key not used (default is `None` )

Debugging parameters:

* log_label – optional - log label string
* verbose – optional - flag for debugging (default is `False` )
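To make the `handle_data` pseudocall above concrete, here is a hedged sketch with real arguments; the minimal `config_dict` is an assumption, and a real algorithm would normally subclass `BaseAlgo` as shown in the backtest example below:

```
# hedged sketch: build a dataset node and hand it to an algorithm;
# assumes redis holds fetched SPY data and that a bare BaseAlgo with
# this minimal config is enough for a smoke test
import analysis_engine.algo as base_algo
import analysis_engine.build_dataset_node as build_node

algo = base_algo.BaseAlgo(
    ticker='SPY',
    config_dict={'name': 'sketch', 'timeseries': 'minute'})
node = build_node.build_dataset_node(
    ticker='SPY',
    datasets=['minute'])  # date defaults to the last trading close
algo.handle_data(node)
```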
# Run an Algorithm Backtest with the Runner API¶

Algorithm Runner API Example Script:

* Run Full Backtest
* Run Algorithm with Latest Pricing Data
* Debug by adding `-d` as an argument

Get the latest pricing from cached IEX pricing data:

`analysis_engine.iex.get_pricing_on_date.get_pricing_on_date` (ticker, date_str=None, label=None)[source]¶

Get the latest pricing data from the cached IEX data in redis. Use this to keep costs down!

> import analysis_engine.iex.get_pricing_on_date as iex_cache
print(iex_cache.get_pricing_on_date('SPY'))
print(iex_cache.get_pricing_on_date(
    ticker='SPY',
    date_str='2019-02-07'))

Parameters:

* ticker – ticker string
* date_str – optional - string date to pull data from redis; if `None`, use today's date. Format is `YYYY-MM-DD`
* label – log label for tracking

# Inspect Datasets¶

Tool for inspecting cached pricing data to find common errors. This tool uses the Extraction API to look for dates that are not in sync with the redis cached date.

This tool requires redis to be running with fetched datasets already stored in supported keys.

Inspect Minute Datasets for a Ticker

```
inspect_datasets.py -t SPY
```

Inspect Daily Datasets for a Ticker

```
inspect_datasets.py -t AAPL -g daily
# or
# inspect_datasets.py -t AAPL -g day
```

Usage

```
inspect_datasets.py -h
usage: inspect_datasets.py [-h] [-t TICKER] [-g DATASETS] [-s START_DATE]

Inspect datasets looking for dates in redis that look incorrect

optional arguments:
  -h, --help     show this help message and exit
  -t TICKER      ticker
  -g DATASETS    optional - datasets:
                 minute or min = examine IEX Cloud intraday minute data,
                 daily or day = examine IEX Cloud daily data,
                 quote = examine IEX Cloud quotes data,
                 stats = examine IEX Cloud key stats data,
                 peers = examine IEX Cloud peers data,
                 news = examine IEX Cloud news data,
                 fin = examine IEX Cloud financials data,
                 earn = examine IEX Cloud earnings data,
                 div = examine IEX Cloud dividends data,
                 comp = examine IEX Cloud company data,
                 calls = examine Tradier calls data,
                 puts = examine Tradier puts data,
                 and comma delimited is supported as well
  -s START_DATE  start date format YYYY-MM-DD (default is 2019-01-01)
```

`analysis_engine.scripts.inspect_datasets.inspect_datasets` (ticker=None, start_date=None, datasets=None)[source]¶

Loop over all cached data in redis by going sequentially per date and examine the latest `date` value in the cache to check if it matches the redis key's date. For IEX Cloud minute data errors, running this function will print out commands to fix any issues (if possible):

> fetch -t TICKER -g iex_min -F DATE_TO_FIX

Parameters:

* ticker – optional - string ticker
* start_date – optional - datetime start date for the loop (default is `2019-01-01` )
* datasets – optional - list of strings to extract specific, supported datasets (default is `['minute']` )
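Since the function signature is documented above, here is a short sketch calling `inspect_datasets` directly from Python instead of the CLI wrapper; it assumes redis is already holding fetched SPY minute data:

```
# sketch: call the documented inspect_datasets function directly;
# assumes redis is running with fetched SPY minute data
import analysis_engine.scripts.inspect_datasets as inspect_tool

inspect_tool.inspect_datasets(
    ticker='SPY',
    start_date='2019-01-01',  # may also accept a datetime
    datasets=['minute'])
```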
# Scripts¶

# Fetch Pricing Datasets from IEX Cloud and Tradier¶

Fetch new pricing datasets for one or many tickers at once, or pull screeners from IEX Cloud (https://iexcloud.io), Tradier (https://tradier.com/) and FinViz (https://finviz.com/):

* Fetch pricing data
* Publish pricing data to Redis and Minio

Fetch Intraday Minute Pricing Data

`fetch -t QQQ -g min`

Fetch Intraday Option Chains for Calls and Puts

`fetch -t QQQ -g td`

Fetch Intraday News, Minute and Options

```
fetch -t QQQ -g news,min,td
```

Debugging

Turn on verbose debugging with the `-d` argument:

```
fetch -t QQQ -g min -d
```

`analysis_engine.scripts.fetch_new_stock_datasets.fetch_new_stock_datasets`¶

Collect datasets for a ticker from IEX Cloud or Tradier

Setup

> export IEX_TOKEN=YOUR_IEX_CLOUD_TOKEN
export TD_TOKEN=YOUR_TRADIER_TOKEN

Pull Data for a Ticker from IEX and Tradier

> fetch -t TICKER

Pull from All Supported IEX Feeds

> fetch -t TICKER -g iex-all

Pull from All Supported Tradier Feeds

> fetch -t TICKER -g td

Intraday IEX and Tradier Feeds (only minute and news to reduce costs)

> fetch -t TICKER -g intra
# or manually:
# fetch -t TICKER -g td,iex_min,iex_news

Daily IEX Feeds (daily and news)

> fetch -t TICKER -g daily
# or manually:
# fetch -t TICKER -g iex_day,iex_news

Weekly IEX Feeds (company, financials, earnings, dividends, and peers)

> fetch -t TICKER -g weekly
# or manually:
# fetch -t TICKER -g iex_fin,iex_earn,iex_div,iex_peers,iex_news,
# iex_comp

IEX Minute

> fetch -t TICKER -g iex_min

IEX News

> fetch -t TICKER -g iex_news

IEX Daily

> fetch -t TICKER -g iex_day

IEX Stats

> fetch -t TICKER -g iex_stats

IEX Peers

> fetch -t TICKER -g iex_peers

IEX Financials

> fetch -t TICKER -g iex_fin

IEX Earnings

> fetch -t TICKER -g iex_earn

IEX Dividends

> fetch -t TICKER -g iex_div

IEX Quote

> fetch -t TICKER -g iex_quote

IEX Company

> fetch -t TICKER -g iex_comp

# Backtest an Algorithm and Plot the Trading History¶

A tool for showing how to build an algorithm and run a backtest with an algorithm config dictionary:

```
import analysis_engine.consts as ae_consts
import analysis_engine.algo as base_algo
import analysis_engine.run_algo as run_algo

ticker = 'SPY'

willr_close_path = (
    'analysis_engine/mocks/example_indicator_williamsr.py')
willr_open_path = (
    'analysis_engine/mocks/example_indicator_williamsr_open.py')

algo_config_dict = {
    'name': 'min-runner',
    'timeseries': 'minute',
    'trade_horizon': 5,
    'num_owned': 10,
    'buy_shares': 10,
    'balance': 10000.0,
    'commission': 6.0,
    'ticker': ticker,
    'algo_module_path': None,
    'algo_version': 1,
    'verbose': False,             # log in the algorithm
    'verbose_processor': False,   # log in the indicator processor
    'verbose_indicators': False,  # log all indicators
    'verbose_trading': True,      # log in the algo trading methods
    'positions': {
        ticker: {
            'shares': 10,
            'buys': [],
            'sells': []
        }
    },
    'buy_rules': {
        'confidence': 75,
        'min_indicators': 3
    },
    'sell_rules': {
        'confidence': 75,
        'min_indicators': 3
    },
    'indicators': [
        {
            'name': 'willr_-70_-30',
            'module_path': willr_close_path,
            'category': 'technical',
            'type': 'momentum',
            'uses_data': 'minute',
            'high': 0,
            'low': 0,
            'close': 0,
            'open': 0,
            'willr_value': 0,
            'num_points': 80,
            'buy_below': -70,
            'sell_above': -30,
            'is_buy': False,
            'is_sell': False,
            'verbose': False  # log in just this indicator
        },
        {
            'name': 'willr_-80_-20',
            'module_path': willr_close_path,
            'category': 'technical',
            'type': 'momentum',
            'uses_data': 'minute',
            'high': 0,
            'low': 0,
            'close': 0,
            'open': 0,
            'willr_value': 0,
            'num_points': 30,
            'buy_below': -80,
            'sell_above': -20,
            'is_buy': False,
            'is_sell': False
        },
        {
            'name': 'willr_-90_-10',
            'module_path': willr_close_path,
            'category': 'technical',
            'type': 'momentum',
            'uses_data': 'minute',
            'high': 0,
            'low': 0,
            'close': 0,
            'open': 0,
            'willr_value': 0,
            'num_points': 60,
            'buy_below': -90,
            'sell_above': -10,
            'is_buy': False,
            'is_sell': False
        },
        {
            'name': 'willr_open_-80_-20',
            'module_path': willr_open_path,
            'category': 'technical',
            'type': 'momentum',
            'uses_data': 'minute',
            'high': 0,
            'low': 0,
            'close': 0,
            'open': 0,
            'willr_open_value': 0,
            'num_points': 80,
            'buy_below': -80,
            'sell_above': -20,
            'is_buy': False,
            'is_sell': False
        }
    ],
    'slack': {
        'webhook': None
    }
}


class ExampleCustomAlgo(base_algo.BaseAlgo):
class ExampleCustomAlgo(base_algo.BaseAlgo):
    def process(self, algo_id, ticker, dataset):
        if self.verbose:
            print(
                f'process start - {self.name} '
                f'date={self.backtest_date} minute={self.latest_min} '
                f'close={self.latest_close} high={self.latest_high} '
                f'low={self.latest_low} open={self.latest_open} '
                f'volume={self.latest_volume}')
    # end of process
# end of ExampleCustomAlgo

algo_obj = ExampleCustomAlgo(
    ticker=algo_config_dict['ticker'],
    config_dict=algo_config_dict)

algo_res = run_algo.run_algo(
    ticker=algo_config_dict['ticker'],
    algo=algo_obj,
    raise_on_err=True)

if algo_res['status'] != ae_consts.SUCCESS:
    # if not successful
    print(
        'failed running algo backtest '
        f'{algo_obj.get_name()} hit status: '
        f'{ae_consts.get_status(status=algo_res["status"])} '
        f'error: {algo_res["err"]}')
else:
    print(
        f'backtest: {algo_obj.get_name()} '
        f'{ae_consts.get_status(status=algo_res["status"])} - '
        'plotting history')
```

```
build_example_algo_config(ticker, timeseries='minute')
```

Helper for building an algorithm config dictionary.

Returns:

algorithm config dictionary

Run a custom algorithm after all the indicators from the `algo_config_dict` have been processed and all the number crunching is done. This allows the algorithm class to focus on the high-level trade execution problems like bid-ask spreads and opening the buy/sell trade orders.

How does it work?

The engine provides a data stream from the latest pricing updates stored in redis. Once new data is stored in redis, algorithms are able to use each `dataset` as a chance to evaluate buy and sell decisions. These are your own custom logic for trading, based off what the indicators find and any non-indicator data provided within the `dataset` dictionary.

Dataset Dictionary Structure

Here is what the `dataset` variable looks like when your algorithm's `process` method is called (assuming you have redis running with actual pricing data too):

> dataset = {
>     'id': dataset_id,
>     'date': date,
>     'data': {
>         'daily': pd.DataFrame([]),
>         'minute': pd.DataFrame([]),
>         'quote': pd.DataFrame([]),
>         'stats': pd.DataFrame([]),
>         'peers': pd.DataFrame([]),
>         'news1': pd.DataFrame([]),
>         'financials': pd.DataFrame([]),
>         'earnings': pd.DataFrame([]),
>         'dividends': pd.DataFrame([]),
>         'calls': pd.DataFrame([]),
>         'puts': pd.DataFrame([]),
>         'pricing': pd.DataFrame([]),
>         'news': pd.DataFrame([])
>     }
> }

You can also inspect these datasets by setting the algorithm's config dictionary key:

```
"inspect_datasets": True
```
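Because every missing key in `dataset['data']` arrives as an empty `pd.DataFrame([])`, a derived algorithm can guard before using a frame. A minimal sketch under that assumption (not a verbatim excerpt from the engine; only the documented names `BaseAlgo`, `process`, `self.verbose` and `self.latest_close` are used):

```python
import analysis_engine.algo as base_algo

class SketchAlgo(base_algo.BaseAlgo):
    """minimal sketch - only reads the documented dataset structure"""
    def process(self, algo_id, ticker, dataset):
        minute_df = dataset['data']['minute']
        # a missing key is an empty pd.DataFrame([]), so check first
        if minute_df.empty:
            return
        # self.latest_close is provided by the base class
        if self.verbose:
            print(f'{ticker} rows={len(minute_df)} close={self.latest_close}')
```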
# Plot the Trading History from a File on Disk

A tool for plotting an algorithm's `Trading History` from a locally saved file, created by running the backtester with the save-to-file option enabled:

```
run_backtest_and_plot_history.py -t SPY -f <SAVE_HISTORY_TO_THIS_FILE>
```

Publish the contents of an S3 key to a Redis key

Publish the aggregated S3 contents of a ticker to a Redis key and back to S3.

## Steps:

* Parse arguments
* Download and aggregate ticker data from S3 as a Celery task
* Publish aggregated data to S3 as a Celery task
* Publish aggregated data to Redis as a Celery task

# Stock Analysis Command Line Tool

This tool is for preparing, analyzing and using datasets to run predictions using tensorflow and keras.

Stock Analysis Command Line Tool

* Get an algorithm-ready dataset
* Fetch and extract algorithm-ready datasets
* Optional - prepare a dataset from s3 or redis; a prepared dataset can be used for analysis
* Run an algorithm using the cached datasets
* Coming Soon - Analyze datasets and store output (generated csvs) in s3 and redis
* Coming Soon - Make predictions using an analyzed dataset

Supported Actions

Algorithm-Ready Datasets

Algo-ready datasets are created by the Algorithm Extraction API. You can tune algorithm performance by deriving your own algorithm from `analysis_engine.algo.BaseAlgo` and then loading the dataset from s3, redis or a file by passing the correct arguments.

Command line actions:

* Extract algorithm-ready datasets out of redis to a file

> sa -t SPY -e ~/SPY-$(date +"%Y-%m-%d").json

* View algorithm-ready datasets in a file

> sa -t SPY -l ~/SPY-$(date +"%Y-%m-%d").json

* Restore algorithm-ready datasets from a file to redis

This also works as a backup tool for archiving an entire single-ticker dataset from redis to a single file. (zlib compression is code-complete but has not been debugged end-to-end.)

> sa -t SPY -L ~/SPY-$(date +"%Y-%m-%d").json

If the output redis key or s3 key already exists, this process will overwrite the previously stored values.

* Run an Algorithm

Please refer to the included Minute Algorithm for an up-to-date reference.

> sa -t SPY -g /opt/sa/analysis_engine/mocks/example_algo_minute.py

```
restore_missing_dataset_values_from_algo_ready_file(ticker, path_to_file, redis_address, redis_password, redis_db=0, output_redis_db=None, compress=True, encoding='utf-8', dataset_type=20000, serialize_datasets=['daily', 'minute', 'quote', 'stats', 'peers', 'news1', 'financials', 'earnings', 'dividends', 'company', 'news', 'calls', 'puts', 'pricing', 'tdcalls', 'tdputs'], show_summary=True)
```

Restore missing dataset nodes in redis from an algorithm-ready dataset file on disk - use this to restore redis from scratch.

Parameters:

* ticker – string ticker
* path_to_file – string path to file on disk
* redis_address – redis server endpoint address with format `host:port`
* redis_password – optional - string password for redis
* redis_db – redis db (default is `REDIS_DB`)
* output_redis_db – optional - integer for a different redis database (default is `None`)
* compress – contents in the algorithm-ready file are compressed (default is `True`)
* encoding – byte encoding of the algorithm-ready file (default is `utf-8`)
* dataset_type – optional - dataset type (default is `20000`)
* serialize_datasets – optional - list of dataset names to deserialize in the dataset
* show_summary – optional - show a summary of the algorithm-ready dataset (default is `True`)

```
examine_dataset_in_file(path_to_file, compress=False, encoding='utf-8', ticker=None, dataset_type=20000, serialize_datasets=['daily', 'minute', 'quote', 'stats', 'peers', 'news1', 'financials', 'earnings', 'dividends', 'company', 'news', 'calls', 'puts', 'pricing', 'tdcalls', 'tdputs'])
```

Show the internal dataset dictionary structure in a dataset file.

# Set S3 Environment Variables

Set these as needed for your S3 deployment:

```
export ENABLED_S3_UPLOAD=<'0' disabled which is the default, '1' enabled>
export S3_ACCESS_KEY=<access key>
export S3_SECRET_KEY=<secret key>
export S3_REGION_NAME=<region name: us-east-1>
export S3_ADDRESS=<S3 endpoint address host:port like: localhost:9000>
export S3_UPLOAD_FILE=<path to file to upload>
export S3_BUCKET=<bucket name - pricing default>
export S3_COMPILED_BUCKET=<compiled bucket name - compileddatasets default>
export S3_KEY=<key name - SPY_demo default>
export S3_SECURE=<use ssl '1', disable with '0' which is the default>
export PREPARE_S3_BUCKET_NAME=<prepared dataset bucket name>
export ANALYZE_S3_BUCKET_NAME=<analyzed dataset bucket name>
```
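A hedged sketch of how a task could read these settings, using only the defaults annotated in the export block above; the engine's own consts module may resolve them differently:

```python
import os

# defaults taken from the annotations above ('0' disabled, pricing
# bucket, SPY_demo key, ssl disabled); endpoint host:port is an example
s3_enabled = os.environ.get('ENABLED_S3_UPLOAD', '0') == '1'
s3_address = os.environ.get('S3_ADDRESS', 'localhost:9000')
s3_bucket = os.environ.get('S3_BUCKET', 'pricing')
s3_key = os.environ.get('S3_KEY', 'SPY_demo')
s3_secure = os.environ.get('S3_SECURE', '0') == '1'

print(f'upload enabled={s3_enabled} endpoint={s3_address} '
      f'bucket={s3_bucket} key={s3_key} ssl={s3_secure}')
```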
# Set Redis Environment Variables

Set these as needed for your Redis deployment:

```
export ENABLED_REDIS_PUBLISH=<'0' disabled which is the default, '1' enabled>
export REDIS_ADDRESS=<redis endpoint address host:port like: localhost:6379>
export REDIS_KEY=<key to cache values in redis>
export REDIS_PASSWORD=<optional - redis password>
export REDIS_DB=<optional - redis database - 0 by default>
export REDIS_EXPIRE=<optional - redis expiration for data in seconds>
```

This guide outlines how to use helm to deploy and manage the Analysis Engine (AE) on kubernetes (tested on `1.13.3`). It requires the following steps be done before getting started:

* Access to a running Kubernetes cluster
* Helm is installed
* A valid account for IEX Cloud
* A valid account for Tradier
* Optional - Install a Ceph Cluster for Persistent Storage Support
* Optional - Install the Stock Analysis Engine for Local Development Outside of Kubernetes

AE builds multiple helm charts that are hosted on a local helm repository, and everything runs within the `ae` kubernetes namespace. Please change to the `./helm` directory:

`cd helm`

## Build Charts

This will build all the AE charts, download stable/redis and stable/minio, and ensure the local helm server is running:

`./build.sh`

# Configuration

Each AE chart supports attributes for connecting to these services. Depending on your environment, they may require you to edit the associated helm chart's values.yaml file(s) before starting everything with the start.sh script to deploy AE. Below are some of the common integration questions on how to configure each one (hopefully) for your environment:

## Configure Redis

The `start.sh` script installs the stable/redis chart with the included ./redis/values.yaml; configure it as needed before the start script boots up the included Bitnami Redis cluster.

## Configure Minio

The `start.sh` script installs the stable/minio chart with the included ./minio/values.yaml; configure it as needed before the start script boots up the included Minio.

## Configure AE Stack

Each of the AE charts can be configured prior to running the stack's core AE chart.

## Configure the AE Backup to AWS S3 Job

Please set your AWS credentials (which will be installed as kubernetes secrets) in the chart's values.yaml file.

## Configure Data Collection Jobs

Data collection is broken up into three categories of jobs: intraday, daily and weekly. Intraday data collection is built to be fast and to pull data that changes often, vs weekly data that is mostly static and expensive for `IEX Cloud` users. These chart jobs are intended to be used with cron jobs that fire work into the AE workers, which compress + cache the pricing data for algorithms and backtesting.

Set your `IEX Cloud` account up in each chart:

Supported IEX Cloud Attributes

> # IEX Cloud
> # https://iexcloud.io/docs/api/
> iex:
>   addToSecrets: true
>   secretName: ae.k8.iex.<intraday|daily|weekly>
>   # Publishable Token:
>   token: ""
>   # Secret Token:
>   secretToken: ""
>   apiVersion: beta

Set your `Tradier` account up in each chart:

Supported Tradier Attributes

> # Tradier
> # https://developer.tradier.com/documentation
> tradier:
>   addToSecrets: true
>   secretName: ae.k8.tradier.<intraday|daily|weekly>
>   token: ""
>   apiFQDN: api.tradier.com
>   dataFQDN: sandbox.tradier.com
>   streamFQDN: sandbox.tradier.com

* Set the intraday.tickers to a comma-delimited list of tickers to pull per minute.
* Set the daily.tickers to a comma-delimited list of tickers to pull at the end of each trading day.
* Set the weekly.tickers to a comma-delimited list of tickers for the weekly datasets (like `IEX Financials or Earnings`).

## Set Jupyter Login Credentials

Please set your Jupyter login password that works with a browser:

```
jupyter:
  password: admin
```

## View Jupyter

Default login password is:

* password: `admin`

## View Minio

Default login credentials are:

* Access Key: `trexaccesskey`
* Secret Key: `trex123321`

## Optional - Set Default Storage Class

The AE pods use a Distributed Ceph Cluster for persisting data outside kubernetes, with `~300 GB` of disk space. To set your kubernetes cluster StorageClass to ceph-rbd, use the included script.

## Optional - Set the Charts to Pull from a Private Docker Registry

By default the AE charts use the Stock Analysis Engine container. Here is how to set up each AE component chart to use a private docker image in a private docker registry (for building your own algos in-house). Each of the AE charts' values.yaml files contains two required sections for deploying from a private docker registry.

Set the Private Docker Registry Authentication values in each chart. The `imagePullSecrets` attribute uses a naming convention format: `<base key>.<component name>`. The base is `ae.docker.creds.`, and this approach allows different docker images for each component (for testing), like intraday data collection vs running a backup job or even hosting jupyter.

Supported Private Docker Registry Authentication Attributes

> registry:
>   addToSecrets: true
>   address: <FQDN to docker registry>:<PORT registry uses a default port 5000>
>   imagePullSecrets: ae.docker.creds.<core|backtester|backup|intraday|daily|weekly|jupyter>
>   dockerConfigJSON: '{"auths":{"<FQDN>:<PORT>":{"Username":"username","Password":"password","Email":""}}}'

Set the AE Component's docker image name, tag, pullPolicy and private flag.

Supported Private Docker Image Attributes per AE Component

> image:
>   private: true
>   name: YOUR_IMAGE_NAME_HERE
>   tag: latest
>   pullPolicy: Always

# Start Stack

This command can take a few minutes to download and start all the components:

`./start.sh`

# Manually Starting Components With Helm

If you do not want to use `start.sh`, you can start the charts with helm using:

## Start the AE Stack

```
helm install \
    --name=ae \
    ./ae \
    --namespace=ae \
    -f ./ae/values.yaml
```

## Start Redis

```
helm install \
    --name=ae-redis \
    stable/redis \
    --namespace=ae \
    -f ./redis/values.yaml
```

## Start Minio

```
helm install \
    --name=ae-minio \
    stable/minio \
    --namespace=ae \
    -f ./minio/values.yaml
```

## Start Jupyter

```
helm install \
    --name=ae-jupyter \
    ./ae-jupyter \
    --namespace=ae \
    -f ./ae-jupyter/values.yaml
```

## Start Backup Job

```
helm install \
    --name=ae-backup \
    ./ae-backup \
    --namespace=ae \
    -f ./ae-backup/values.yaml
```

## Start Intraday Data Collection Job

```
helm install \
    --name=ae-intraday \
    ./ae-intraday \
    --namespace=ae \
    -f ./ae-intraday/values.yaml
```

## Start Daily Data Collection Job

```
helm install \
    --name=ae-daily \
    ./ae-daily \
    --namespace=ae \
    -f ./ae-daily/values.yaml
```

## Start Weekly Data Collection Job

```
helm install \
    --name=ae-weekly \
    ./ae-weekly \
    --namespace=ae \
    -f ./ae-weekly/values.yaml
```

# Verify Pods are Running

```
./show-pods.sh
------------------------------------
getting pods in ae: kubectl get pods -n ae
NAME                              READY   STATUS    RESTARTS   AGE
ae-minio-55d56cf646-87znm         1/1     Running   0          3h30m
ae-redis-master-0                 1/1     Running   0          3h30m
ae-redis-slave-68fd99b688-sn875   1/1     Running   0          3h30m
backtester-5c9687c645-n6mmr       1/1     Running   0          4m22s
engine-6bc677fc8f-8c65v           1/1     Running   0          4m22s
engine-6bc677fc8f-mdmcw           1/1     Running   0          4m22s
jupyter-64cf988d59-7s7hs          1/1     Running   0          4m21s
```
# Run Intraday Pricing Data Collection

Once your `ae-intraday/values.yaml` is ready, start the helm release with the helper script:

```
./run-intraday-job.sh <PATH_TO_VALUES_YAML>
```

Recreate it with:

```
./run-intraday-job.sh -r <PATH_TO_VALUES_YAML>
```

# View Collected Pricing Data in Redis

After data collection, you can view compressed data for a ticker within the redis cluster with:

```
./view-ticker-data-in-redis.sh TICKER
```

# Run Daily Pricing Data Collection

Once your `ae-daily/values.yaml` is ready, you can automate daily data collection by using the helper script to start the helm release for `ae-daily`:

```
./run-daily-job.sh <PATH_TO_VALUES_YAML>
```

```
./run-daily-job.sh -r <PATH_TO_VALUES_YAML>
```

# Run Weekly Pricing Data Collection

Once your `ae-weekly/values.yaml` is ready, start the helm release for `ae-weekly` with:

```
./run-weekly-job.sh <PATH_TO_VALUES_YAML>
```

```
./run-weekly-job.sh -r <PATH_TO_VALUES_YAML>
```

# Run Backup Collected Pricing Data to AWS

Once your `ae-backup/values.yaml` is ready, you can automate backing up your collected + compressed pricing data from within the redis cluster and publish it to AWS S3 with the helper script:

Warning

Please remember AWS S3 has usage costs. Please set only the tickers you need to back up before running the ae-backup job.

```
./run-backup-job.sh <PATH_TO_VALUES_YAML>
```

```
./run-backup-job.sh -r <PATH_TO_VALUES_YAML>
```

# Cron Automation with Helm

Add the lines below to your cron with `crontab -e` for automating pricing data collection. All cron jobs using `run-job.sh` log to: `/tmp/cron-ae.log`.

Note

This will pull data on holidays or closed trading days too, but PRs are welcomed!

## Minute

Pull pricing data every minute `M-F` between 9 AM and 5 PM (assuming system time is `EST`):

```
# intraday job:
# min hour day month dayofweek job-script-path job KUBECONFIG
* 9-17 * * 1,2,3,4,5 /opt/sa/helm/cron/run-job.sh intra /opt/k8/config
```

## Daily

Pull each weekday at 6:01 PM (assuming system time is `EST`); note the `1,2,3,4,5` day-of-week field runs this Monday through Friday:

```
# daily job:
# min hour day month dayofweek job-script-path job KUBECONFIG
1 18 * * 1,2,3,4,5 /opt/sa/helm/cron/run-job.sh daily /opt/k8/config
```

## Weekly

Pull only on Friday at 7:01 PM (assuming system time is `EST`):

```
# weekly job:
# min hour day month dayofweek job-script-path job KUBECONFIG
1 19 * * 5 /opt/sa/helm/cron/run-job.sh weekly /opt/k8/config
```

## Backup

Run each weekday at 8:01 PM (assuming system time is `EST`):

```
# backup job:
# min hour day month dayofweek job-script-path job KUBECONFIG
1 20 * * 1,2,3,4,5 /opt/sa/helm/cron/run-job.sh backup /opt/k8/config
```

## Restore on Reboot

Restore the latest backup from S3 to Redis on a server reboot:

```
# restore job:
# on a server reboot (assuming your k8 cluster is running on just 1 host)
@reboot /opt/sa/helm/cron/run-job.sh restore /opt/k8/config
```

# Monitoring Kubernetes with Prometheus and Grafana using Helm

Deploy Prometheus and Grafana to monitor your kubernetes cluster, with support for granular monitoring like the total number of Redis keys, with the command:

`./monitor-start.sh`

Recreate Prometheus and Grafana:

```
./monitor-start.sh -r
```

# Prometheus

Access Prometheus with this link. Monitor Redis keys in the pricing Redis database 0 with this link.

# Grafana

Access Grafana with this link; the default credentials are:

* username: `trex`
* password: `123321`

# Included Grafana Dashboards

The ./grafana/values.yaml defines many dashboards to automatically import on startup using the dashboards.default section.
Quickly change between dashboards with this url: https://grafana.example.com/dashboards

## Redis Grafana Dashboard

## Ceph Grafana Dashboard

## Minio Grafana Dashboard

## Kubernetes Grafana Dashboards

* Kubernetes Pods on grafana.com
* Kubernetes Cluster Monitoring on grafana.com
* Kubernetes Capacity Planning on grafana.com
* Kubernetes Capacity on grafana.com
* Kubernetes Deployment Statefulset Daemonset Metrics on grafana.com
* Kubernetes Cluster Monitoring 2 on grafana.com
* Kubernetes Cluster Monitoring 3 on grafana.com

# Debugging Helm Deployed Components

## Cron Jobs

The `engine` pods handle pulling pricing data for the cron jobs. Please review `./logs-engine.sh` for any authentication errors from a missing `IEX Cloud Token` or `Tradier Token`, with messages like:

Missing IEX Token log

```
2019-03-01 06:03:58,836 - analysis_engine.work_tasks.get_new_pricing_data - WARNING - ticker=SPY - please set a valid IEX Cloud Account token (https://iexcloud.io/cloud-login/#/register) to fetch data from IEX Cloud. It must be set as an environment variable like: export IEX_TOKEN=<token>
```

Missing Tradier Token log

```
2019-03-01 06:03:59,721 - analysis_engine.td.fetch_api - CRITICAL - Please check the TD_TOKEN is correct received 401 during fetch for: puts
```

If there is an `IEX Cloud` or `Tradier` authentication issue, please check out the Configure Data Collection Jobs section and then rerun the job with the updated `values.yaml` file.

## Helm - Incompatible Versions Client Error

If you see an error like this when trying to deploy:

```
Error: incompatible versions client[v2.13.0] server[v2.12.3]
```

then please upgrade your helm with:

`helm init --upgrade`

Note

This will recreate the `tiller` pod in the `kube-system` namespace and can take about 30 seconds to restart correctly. You can view the pod with the command:

```
kubectl -n kube-system get po | grep tiller
```

## Engine

Describe: `./describe-engine.sh`

View Logs: `./logs-engine.sh`

## Intraday Data Collection

Describe: `./describe-intraday.sh`

View Logs: `./logs-job-intraday.sh`

## Daily Data Collection

Describe: `./describe-daily.sh`

View Logs: `./logs-job-daily.sh`

## Weekly Data Collection

Describe: `./describe-weekly.sh`

View Logs: `./logs-job-weekly.sh`

## Jupyter

Describe Pod: `./describe-jupyter.sh`

View Logs: `./logs-jupyter.sh`

View Service: `./describe-service-jupyter.sh`

## Backtester

Jupyter uses the backtester pod to perform asynchronous processing like running an algo backtest. To debug this run:

`./describe-backtester.sh`

View Logs: `./logs-backtester.sh`

## Minio

Describe: `./describe-minio.sh`

Describe Service: `./describe-service-minio.sh`

Describe Ingress: `./describe-ingress-minio.sh`

## Redis

Describe: `./describe-redis.sh`

Example Minute Algorithm for showing how to run an algorithm on intraday minute timeseries datasets.

What does the base class provide?

Algorithms automatically provide the following member variables to any custom algorithm that derives them:

* `self.balance`
* `self.prev_bal`

Note

If a key is not in the dataset, the algorithm's member variable will be an empty pandas DataFrame created with: `pd.DataFrame([])`, except `self.pricing` which is just a dictionary. Please ensure the engine successfully fetched and cached the dataset in redis using a tool like `redis-cli` and a query of `keys *` or `keys <TICKER>_*` on large deployments.
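A hedged sketch of loading the `ExampleMinuteAlgo` class documented just below; the python module path is an assumption inferred from the file path `/opt/sa/analysis_engine/mocks/example_algo_minute.py`, and the constructor kwargs mirror the `BaseAlgo` demo later in these docs:

```python
from analysis_engine.mocks.example_algo_minute import ExampleMinuteAlgo

# assumed kwargs - the class accepts **kwargs and derives BaseAlgo
algo = ExampleMinuteAlgo(
    ticker='SPY',
    balance=1000.00,
    name='example-minute-algo')
# algo.handle_data(data=...) expects the multi-ticker structure shown above
```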
Supported environment variables

```
ExampleMinuteAlgo(**kwargs)
```

Its `process` method receives a dictionary of identifiers (for debugging) and multiple pandas `pd.DataFrame` objects, where the dictionary keys represent a label from one of the data sources (`IEX`, `Yahoo`, `FinViz` or other). The supported dataset structure for the `process` method is the same `dataset` dictionary shown earlier.

```
get_algo(**kwargs)
```

Analyze trading histories stored in S3 with the included Jupyter Notebook.

Plot a `Trading History` dataset using seaborn and matplotlib:

```
analysis_engine.plot_trading_history.plot_trading_history(title, df, red=None, red_color=None, red_label=None, blue=None, blue_color=None, blue_label=None, green=None, green_color=None, green_label=None, orange=None, orange_color=None, orange_label=None, date_col='minute', xlabel='Minutes', ylabel='Algo Trading History Values', linestyle='-', width=9.0, height=9.0, date_format='%d\n%b', df_filter=None, start_date=None, footnote_text=None, footnote_xpos=0.7, footnote_ypos=0.01, footnote_color='#888888', footnote_fontsize=8, scale_y=False, show_plot=True, dropna_for_all=False, verbose=False, send_plots_to_slack=False)
```

Plot up to 4 lines of columns from the `Trading History` dataset.
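A hedged sketch of calling the plotting helper above with a tiny stand-in DataFrame; the column names (`minute`, `close`) and the assumption that the `red` line argument takes a column name are not confirmed by the docs and are illustrative only:

```python
import pandas as pd
import analysis_engine.plot_trading_history as plot_history

# stand-in for a real Trading History dataset; 'minute' matches the
# documented date_col default
df = pd.DataFrame({
    'minute': pd.date_range('2019-02-15 09:30:00', periods=3, freq='1min'),
    'close': [272.02, 273.02, 274.02]})

plot_history.plot_trading_history(
    title='SPY minute close - sketch',
    df=df,
    red='close',          # assumption: line args are column names
    red_color='#cc0000',
    red_label='close',
    date_col='minute',
    xlabel='Minutes',
    ylabel='Algo Trading History Values',
    show_plot=True)
```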
Running Distributed Algorithms Across Many Celery Workers

Use this module to handle algorithm backtesting (tuning) or for live trading. Under the hood, this is a Celery task handler that processes jobs from a broker's messaging queue. This allows the Analysis Engine to process many algorithmic workloads concurrently using Celery's horizontally-scalable worker pool architecture.

```
analysis_engine.work_tasks.task_run_algo.task_run_algo(algo_req)
```

Process an Algorithm.

Parameters:

* algo_req – dictionary of key/values for running an algorithm using Celery workers

This is a wrapper for running your own custom algorithms. Please refer to sa.py for the latest usage examples.

Example with the command line tool:

```
bt -t SPY -g /opt/sa/analysis_engine/mocks/example_algo_minute.py
```

```
analysis_engine.run_custom_algo.run_custom_algo(mod_path, ticker='SPY', balance=50000, commission=6.0, start_date=None, end_date=None, name='myalgo', auto_fill=True, config_file=None, config_dict=None, load_from_s3_bucket=None, load_from_s3_key=None, load_from_redis_key=None, load_from_file=None, load_compress=False, load_publish=True, load_config=None, report_redis_key=None, report_s3_bucket=None, report_s3_key=None, report_file=None, report_compress=False, report_publish=True, report_config=None, history_redis_key=None, history_s3_bucket=None, history_s3_key=None, history_file=None, history_compress=False, history_publish=True, history_config=None, extract_redis_key=None, extract_s3_bucket=None, extract_s3_key=None, extract_file=None, extract_save_dir=None, extract_compress=False, extract_publish=True, extract_config=None, publish_to_s3=True, publish_to_redis=True, publish_to_slack=True, dataset_type=20000, serialize_datasets=['daily', 'minute', 'quote', 'stats', 'peers', 'news1', 'financials', 'earnings', 'dividends', 'company', 'news', 'calls', 'puts', 'pricing', 'tdcalls', 'tdputs'], compress=False, encoding='utf-8', redis_enabled=True, redis_key=None, redis_address=None, redis_db=None, redis_password=None, redis_expire=None, redis_serializer='json', redis_encoding='utf-8', s3_enabled=True, s3_key=None, s3_address=None, s3_bucket=None, s3_access_key=None, s3_secret_key=None, s3_region_name=None, s3_secure=False, slack_enabled=False, slack_code_block=False, slack_full_width=False, timeseries=None, trade_strategy=None, verbose=False, debug=False, dataset_publish_extract=False, dataset_publish_history=False, dataset_publish_report=False, run_on_engine=False, auth_url='redis://localhost:6379/13', backend_url='redis://localhost:6379/14', include_tasks=['analysis_engine.work_tasks.task_run_algo', 'analysis_engine.work_tasks.get_new_pricing_data', 'analysis_engine.work_tasks.handle_pricing_update_task', 'analysis_engine.work_tasks.prepare_pricing_dataset', 'analysis_engine.work_tasks.publish_from_s3_to_redis', 'analysis_engine.work_tasks.publish_pricing_update', 'analysis_engine.work_tasks.task_screener_analysis', 'analysis_engine.work_tasks.publish_ticker_aggregate_from_s3'], ssl_options={}, transport_options={}, path_to_config_module='analysis_engine.work_tasks.celery_config', raise_on_err=True)
```

Run a custom algorithm that derives the `analysis_engine.algo.BaseAlgo` class.

Note

Make sure to only have 1 class defined in an algo module. Imports from other modules should work just fine.
Algorithm arguments

Parameters:

* mod_path – file path to the custom algorithm class module
* ticker – ticker symbol
* balance – float - starting balance capital for creating buys and sells
* commission – float - cost per buy or sell
* name – string - name for tracking the algorithm in the logs
* start_date – string - start date for the backtest with format `YYYY-MM-DD HH:MM:SS`
* end_date – end date for the backtest with format `YYYY-MM-DD HH:MM:SS`
* auto_fill – optional - boolean for auto filling buy and sell orders for backtesting (default is `True`)
* config_file – path to a json file containing custom algorithm object member values (like indicator configuration and predict future date units ahead for a backtest)
* config_dict – optional - dictionary that can be passed to derived class implementations

Timeseries

Parameters:

* timeseries – optional - string to set `day` or `minute` backtesting or live trading (default is `minute`)

Trading Strategy

Parameters:

* trade_strategy – optional - string to set the type of `Trading Strategy` for backtesting or live trading (default is `count`)

Running Distributed Algorithms on the Engine Workers

Parameters:

* run_on_engine – optional - boolean flag for publishing custom algorithms to Celery ae workers for distributing algorithm workloads (default is `False`, which runs algos locally); this is required for distributing algorithms
* auth_url – Celery broker address (default is `redis://localhost:6379/13`)
* backend_url – Celery backend address (default is `redis://localhost:6379/14`)
* include_tasks – list of task modules to include (default is `analysis_engine.consts.INCLUDE_TASKS`)
* ssl_options – security options dictionary (default is `{}`)
* transport_options – transport options dictionary (default is `{}`)

Load Algorithm-Ready Dataset From Source

Use these arguments to load algorithm-ready datasets from supported sources (file, s3 or redis):

Parameters:

* load_from_s3_bucket – optional - string; load the algo from a previously-created s3 bucket holding an s3 key with an algorithm-ready dataset for use with: `handle_data`
* load_from_s3_key – optional - string; load the algo from a previously-created s3 key holding an algorithm-ready dataset for use with: `handle_data`
* load_from_redis_key – optional - string; load the algo from a previously-created redis key holding an algorithm-ready dataset for use with: `handle_data`
* load_from_file – optional - string path to a previously-created local file holding an algorithm-ready dataset for use with: `handle_data`
* load_compress – optional - boolean flag for toggling decompression when loading an algorithm-ready dataset (`True` means the dataset must be decompressed to load correctly inside an algorithm to run a backtest)
* load_publish – boolean - toggle publishing the load progress to slack, s3, redis or a file (default is `True`)
* load_config – optional - dictionary for setting member variables to load an algorithm-ready dataset from a file, s3 or redis

Publishing Control Bool Flags

Parameters:

* publish_to_s3 – optional - boolean for toggling publishing to s3 on/off (default is `True`)
* publish_to_redis – optional - boolean for publishing to redis on/off (default is `True`)
* publish_to_slack – optional - boolean for publishing to slack (default is `True`)

Algorithm Trade History Arguments

Parameters:

* history_redis_key – optional - string where the algorithm trading history will be stored in a redis key
* history_s3_bucket – optional - string where the algorithm trading history will be stored in an s3 bucket
* history_s3_key – optional - string where the algorithm trading history will be stored in an s3 key
* history_file – optional - string key where the algorithm trading history will be stored in a file serialized as a json-string
* history_compress – optional - boolean flag for compression on publish (`True` means the dataset will be compressed on publish)
* history_publish – boolean - toggle publishing the history to s3, redis or a file (default is `True`)
* history_config – optional - dictionary for setting member variables to publish an algo `trade history` to s3, redis, a file or slack

Algorithm Trade Performance Report Arguments (Output Dataset)

Parameters:

* report_redis_key – optional - string where the algorithm report will be stored in a redis key
* report_s3_bucket – optional - string where the algorithm report will be stored in an s3 bucket
* report_s3_key – optional - string where the algorithm report will be stored in an s3 key
* report_file – optional - string key where the algorithm report will be stored in a file serialized as a json-string
* report_compress – optional - boolean flag for compression on publish (`True` means the dataset will be compressed on publish)
* report_publish – boolean - toggle publishing the report to s3, redis, a file or slack

Extract an Algorithm-Ready Dataset Arguments

Parameters:

* extract_redis_key – optional - string where the extracted algorithm-ready dataset will be stored in a redis key
* extract_s3_bucket – optional - string where the extracted algorithm-ready dataset will be stored in an s3 bucket
* extract_s3_key – optional - string where the extracted algorithm-ready dataset will be stored in an s3 key
* extract_file – optional - string key where the extracted algorithm-ready dataset will be stored in a file serialized as a json-string
* extract_save_dir – optional - string path for auto-generated files from the algo
* extract_compress – optional - boolean flag for compression on publish (`True` means the dataset will be compressed on publish)
* extract_publish – boolean - toggle publishing the used `algorithm-ready dataset` to s3, redis, a file or slack

Dataset Arguments

Parameters:

* dataset_type – optional - dataset type (default is `20000`)
* serialize_datasets – optional - list of dataset names to serialize (default is `DEFAULT_SERIALIZED_DATASETS`)
* encoding – optional - string for data encoding

Publish Algorithm Datasets to S3, Redis or a File

Parameters:

* dataset_publish_extract – optional - bool for publishing the algorithm's `algorithm-ready` dataset to: s3, redis or file
* dataset_publish_history – optional - bool for publishing the algorithm's `trading history` dataset to: s3, redis or file
* dataset_publish_report – optional - bool for publishing the algorithm's report dataset to: s3, redis or file

Redis connectivity arguments

Parameters: the `redis_*` arguments in the signature above.

Debugging arguments

Parameters:

* debug – optional - bool for debug tracking
* verbose – optional - bool for increasing logging
* raise_on_err – boolean - set this to `False` on prod to ensure exceptions do not interrupt services. With the default (`True`), any exceptions from the library and your own algorithm are sent back out immediately, exiting the backtest.

Run an Algo

```
export REDIS_ADDRESS="localhost:6379"
export REDIS_DB="0"
export S3_ADDRESS="localhost:9000"
export S3_BUCKET="dev"
export AWS_ACCESS_KEY_ID="trexaccesskey"
export AWS_SECRET_ACCESS_KEY="trex123321"
export AWS_DEFAULT_REGION="us-east-1"
export S3_SECURE="0"
export WORKER_BROKER_URL="redis://0.0.0.0:6379/13"
export WORKER_BACKEND_URL="redis://0.0.0.0:6379/14"
```
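A hedged sketch of a minimal `run_custom_algo` call built only from arguments in the signature above, pointing at the documented example minute algorithm file; the shape of the return value is not specified in this extract:

```python
import analysis_engine.run_custom_algo as run_custom_algo

# backtest the documented example minute algorithm module
res = run_custom_algo.run_custom_algo(
    mod_path='/opt/sa/analysis_engine/mocks/example_algo_minute.py',
    ticker='SPY',
    balance=50000,
    commission=6.0,
    name='sketch-backtest',
    timeseries='minute',
    raise_on_err=True)
# res's structure is an assumption; inspect it before relying on keys
print(res)
```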
```
analysis_engine.run_algo.run_algo(ticker=None, tickers=None, algo=None, balance=None, commission=None, start_date=None, end_date=None, datasets=None, num_owned_dict=None, cache_freq='daily', auto_fill=True, load_config=None, report_config=None, history_config=None, extract_config=None, use_key=None, extract_mode='all', iex_datasets=None, redis_enabled=True, redis_address=None, redis_db=None, redis_password=None, redis_expire=None, redis_key=None, s3_enabled=True, s3_address=None, s3_bucket=None, s3_access_key=None, s3_secret_key=None, s3_region_name=None, s3_secure=False, s3_key=None, celery_disabled=True, broker_url=None, result_backend=None, label=None, name=None, timeseries=None, trade_strategy=None, verbose=False, publish_to_slack=True, publish_to_s3=True, publish_to_redis=True, extract_datasets=None, config_file=None, config_dict=None, version=1, raise_on_err=True, **kwargs)
```

Run an algorithm with steps:

* Extract redis keys between dates
* Compile a data pipeline dictionary (call it `data`)
* Call the algorithm's `myalgo.handle_data(data=data)`

If no `algo` is set, the default base algorithm is used.

Note

Algo Configuration

Parameters:

* algo – derived instance of an `analysis_engine.algo.Algo` object
* balance – optional - float balance parameter; can also be set on the `algo` object if not set on the args
* commission – float for single trade commission for a buy or sell; can also be set on the `algo` object
* start_date – string `YYYY-MM-DD_HH:MM:SS` cache value
* end_date – string `YYYY-MM-DD_HH:MM:SS` cache value
* dataset_types – list of strings that are `iex` or `yahoo` datasets that are cached
* cache_freq – optional - depending on if you are running data feeds on a `daily` cron (default) vs every `minute` (or faster)
* num_owned_dict – not supported yet
* auto_fill – optional - boolean for auto filling buy/sell orders for backtesting (default is `True`)
* trading_calendar – `trading_calendar.TradingCalendar` object; by default the `analysis_engine.calendars.always_open.AlwaysOpen` trading calendar (a `TradingCalendar` backed by `TFSExchangeCalendar`) is used
* config_file – path to a json file containing custom algorithm object member values (like indicator configuration and predict future date units ahead for a backtest)
* config_dict – optional - dictionary that can be passed to derived class implementations

Timeseries

Parameters:

* timeseries – optional - string to set `day` or `minute` backtesting or live trading (default is `minute`)

Trading Strategy

Parameters:

* trade_strategy – optional - string to set the type of `Trading Strategy` for backtesting or live trading (default is `count`)

Algorithm Dataset Loading, Extracting, Reporting and Trading History arguments

Parameters:

* load_config – optional - dictionary for setting member variables to load an algorithm-ready dataset from a file, s3 or redis
* report_config – optional - dictionary for setting member variables to publish an algo report to s3, redis, a file or slack

(Optional) Data sources, datafeeds and datasets to gather

Parameters:

* iex_datasets – list of strings for gathering specific IEX datasets, which are set as consts in `analysis_engine.consts`
* label – tracking log label
* publish_to_slack – optional - boolean for publishing to slack (coming soon)
* publish_to_s3 – optional - boolean for publishing to s3 (coming soon)
* publish_to_redis – optional - boolean for publishing to redis (coming soon)

(Optional) Debugging

Parameters:

* verbose – bool - show extract warnings and other debug logging (default is `False`)
* raise_on_err – optional - boolean for unittests and developing algorithms with the `analysis_engine.run_algo.run_algo` helper; when set to `True`, exceptions are raised to the calling functions
* kwargs – keyword arguments dictionary

Example tool for profiling algorithm performance for:

* CPU
* Memory
* Profiler
* Heatmap

The pip package includes vprof for profiling algorithm code performance.

# Tradier API

# Tradier - Account Set Up

Sign up for a developer account and get a token.

* Export Environment Variable

> export TD_TOKEN=<TRADIER_ACCOUNT_TOKEN>

Tradier Consts, Environment Variables and Authentication Helper

```
analysis_engine.td.consts.get_auth_headers(use_token='MISSING_TD_TOKEN', env_token=None)
```

Get connection and auth headers for a Tradier account: https://developer.tradier.com/getting_started

Parameters:

* use_token – optional - token to use instead of the default `TD_TOKEN`
* env_token – optional - env key to use instead of the default `TD_TOKEN`

# Tradier - Fetch API Reference

Please use the command line tool to store the data in redis correctly for the extraction tools:
```
fetch -t TICKER -g td
# fetch -t SPY -g td
```

Here is how to use the fetch api:

```
import analysis_engine.td.fetch_api as td_fetch

# Please set the TD_TOKEN environment variable to your token
calls_status, calls_df = td_fetch.fetch_calls(
    ticker='SPY')

puts_status, puts_df = td_fetch.fetch_puts(
    ticker='SPY')

print(f'Fetched SPY Option Calls from Tradier status={calls_status}:')
print(calls_df)

print(f'Fetched SPY Option Puts from Tradier status={puts_status}:')
print(puts_df)
```

Fetch API calls wrapping Tradier:

```
fetch_calls(ticker=None, work_dict=None, scrub_mode='sort-by-date', verbose=False)
```

Fetch Tradier option calls for a ticker and return a tuple: (status, `pandas.DataFrame`)

> import analysis_engine.td.fetch_api as td_fetch
>
> # Please set the TD_TOKEN environment variable to your token
> calls_status, calls_df = td_fetch.fetch_calls(
>     ticker='SPY')
>
> print(f'Fetched SPY Option Calls from Tradier status={calls_status}:')
> print(calls_df)

```
fetch_puts(ticker=None, work_dict=None, scrub_mode='sort-by-date', verbose=False)
```

Fetch Tradier option puts for a ticker and return a tuple: (status, `pandas.DataFrame`)

> import analysis_engine.td.fetch_api as td_fetch
>
> puts_status, puts_df = td_fetch.fetch_puts(
>     ticker='SPY')
>
> print(f'Fetched SPY Option Puts from Tradier status={puts_status}:')
> print(puts_df)

# Tradier - Extraction API Reference

Once fetched, you can extract the options data with:

```
import analysis_engine.td.extract_df_from_redis as td_extract

# extract by historical date is also supported:
# date='2019-02-15'
calls_status, calls_df = td_extract.extract_option_calls_dataset(
    ticker='SPY')
puts_status, puts_df = td_extract.extract_option_puts_dataset(
    ticker='SPY')

print(f'SPY Option Calls from Tradier extract status={calls_status}:')
print(calls_df)

print(f'SPY Option Puts from Tradier extract status={puts_status}:')
print(puts_df)
```

Extract a TD dataset from Redis (S3 support coming soon) and load it into a `pandas.DataFrame`.

Supported environment variables

Fetch data from Tradier: https://developer.tradier.com/getting_started

```
analysis_engine.td.fetch_data.fetch_data(work_dict, fetch_type=None)
```

Supported `fetch_type` values from `analysis_engine.td.consts`:

> fetch_type = FETCH_TD_CALLS
> fetch_type = FETCH_TD_PUTS

Supported `work_dict['ft_type']` string values:

> work_dict['ft_type'] = 'tdcalls'
> work_dict['ft_type'] = 'tdputs'

Parameters:

* work_dict – dictionary of args for the Tradier api
* fetch_type – optional - name or enum of the fetcher to create; can also be a lower case string in `work_dict['ft_type']`
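A hedged sketch of calling `fetch_data` with the `ft_type` routing described above; the docs only say `work_dict` is "a dictionary of args for the Tradier api", so assuming `ticker` plus `ft_type` is enough here is an untested simplification:

```python
import analysis_engine.td.fetch_data as td_fetch_data

# assumption: a minimal work_dict with only ticker and ft_type;
# a real job dictionary likely carries more keys
work = {
    'ticker': 'SPY',
    'ft_type': 'tdcalls'}

res = td_fetch_data.fetch_data(work_dict=work)
print(res)
```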
Algorithms automatically provide the following member variables to any custom algorithm that derives the `analysis_engine.algo.BaseAlgo` class:

* `self.trade_strategy = 'count'` - if the number of indicators saying buy or sell exceeds the buy/sell rules' `min_indicators`, the algorithm will trigger a buy or sell
* `self.buy_reason` - derived algorithms can attach custom buy reasons as a string to each trade order
* `self.sell_reason` - derived algorithms can attach custom sell reasons as a string to each trade order

Timeseries

* `self.timeseries` - use an algorithm config to set `day` or `minute` to process daily or intraday minute-by-minute datasets. Indicators will still have access to all datasets; this just makes it easier to utilize the helper within an indicator to quickly get the correct dataset:

> df_status, use_df = self.get_subscribed_dataset(
>     dataset=dataset)

Balance Information

* `self.balance` - current algorithm account balance
* `self.prev_bal` - previous balance
* `self.net_value` - total value the algorithm has left remaining since starting trading; this includes the number of `self.num_owned` shares with the `self.latest_close` price included
* `self.net_gain` - amount the algorithm has made since starting, including owned shares with the `self.latest_close` price included

Note

If a key is not in the dataset, the algorithm's member variable will be an empty pandas DataFrame created with: `pandas.DataFrame([])`, except `self.pricing` which is just a dictionary. Please ensure the engine successfully fetched and cached the dataset in redis using a tool like `redis-cli` and a query of `keys *` or `keys <TICKER>_*` on large deployments.

Indicator Information

* `self.buy_rules` - optional - custom dictionary for passing buy-side business rules to a custom algorithm
* `self.sell_rules` - optional - custom dictionary for passing sale-side business rules to a custom algorithm
* `self.min_buy_indicators` - if `self.buy_rules` has a value for buying, the `minimum` number of indicators that must detect a value within a buy condition
* `self.min_sell_indicators` - if `self.sell_rules` has a value for selling, the `minimum` number of indicators that must detect a value within a sell condition
* `self.latest_ind_report` - latest dictionary of values from the indicator processor
* `self.latest_buys` - latest indicators saying buy
* `self.latest_sells` - latest indicators saying sell
* `self.num_latest_buys` - latest number of indicators saying buy
* `self.num_latest_sells` - latest number of indicators saying sell
* `self.iproc` - member variable for the `IndicatorProcessor` that holds all of the custom algorithm indicators

Indicator buy and sell records in `self.latest_buys` and `self.latest_sells` have a dictionary structure:

```
{
    'name': indicator_name,
    'id': indicator_id,
    'report': indicator_report_dict,
    'cell': indicator_cell_number
}
```
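A small sketch that walks the record structure documented above; the helper name is hypothetical, and inside a derived algorithm it would be fed `self.latest_buys` and `self.latest_sells`:

```python
def summarize_signals(latest_buys, latest_sells):
    """hedged sketch: print the documented name/id/cell keys per record"""
    for rec in latest_buys:
        print(f"buy:  {rec['name']} id={rec['id']} cell={rec['cell']}")
    for rec in latest_sells:
        print(f"sell: {rec['name']} id={rec['id']} cell={rec['cell']}")

# inside a derived algorithm:
# summarize_signals(self.latest_buys, self.latest_sells)
```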
```
analysis_engine.algo.BaseAlgo
```

Run an algorithm against multiple tickers at once through the redis dataframe pipeline provided by `analysis_engine.extract.extract`.

Data Pipeline Structure

This algorithm can handle an extracted dictionary with structure:

```
import pandas as pd
from analysis_engine.algo import BaseAlgo

ticker = 'SPY'

demo_algo = BaseAlgo(
    ticker=ticker,
    balance=1000.00,
    commission=6.00,
    name=f'test-{ticker}')

date = '2018-11-05'
dataset_id = f'{ticker}_{date}'

# mock the data pipeline in redis:
data = {
    ticker: [
        {
            'id': dataset_id,
            'date': date,
            'data': {
                'daily': pd.DataFrame([
                    {
                        'high': 280.01,
                        'low': 270.01,
                        'open': 275.01,
                        'close': 272.02,
                        'volume': 123,
                        'date': '2018-11-01 15:59:59'
                    },
                    {
                        'high': 281.01,
                        'low': 271.01,
                        'open': 276.01,
                        'close': 273.02,
                        'volume': 124,
                        'date': '2018-11-02 15:59:59'
                    },
                    {
                        'high': 282.01,
                        'low': 272.01,
                        'open': 277.01,
                        'close': 274.02,
                        'volume': 121,
                        'date': '2018-11-05 15:59:59'
                    }
                ]),
                'calls': pd.DataFrame([]),
                'puts': pd.DataFrame([]),
                'minute': pd.DataFrame([]),
                'pricing': pd.DataFrame([]),
                'quote': pd.DataFrame([]),
                'news': pd.DataFrame([]),
                'news1': pd.DataFrame([]),
                'dividends': pd.DataFrame([]),
                'earnings': pd.DataFrame([]),
                'financials': pd.DataFrame([]),
                'stats': pd.DataFrame([]),
                'peers': pd.DataFrame([]),
                'company': pd.DataFrame([])
            }
        }
    ]
}

# run the algorithm
demo_algo.handle_data(data=data)

# get the algorithm results
results = demo_algo.get_result()

print(results)
```

* `build_progress_label` (progress, total)

Create a progress label string for the logs.

* `build_ticker_history` (ticker, ignore_keys)

For all records in `self.order_history`, compile a filtered list of history records per `ticker` while pruning any keys that are in the list of `ignore_keys`.

Parameters:

* ticker – string ticker symbol
* ignore_keys – list of keys to not include in the history report

* `create_algorithm_ready_dataset` ()

Create the `Algorithm-Ready` dataset during the `self.publish_input_dataset()` member method.

* `create_buy_order` (ticker, row, minute=None, shares=None, reason=None, orient='records', date_format='iso', is_live_trading=False)

Parameters:

* ticker – string ticker
* shares – optional - integer number of shares to buy; if `None`, buy the max number of shares at the `close` with the available `balance` amount
* row – `dictionary` or `pandas.DataFrame` row record that will be converted to a json-serialized string
* minute – optional - string datetime for the minute the order was placed. For `day` timeseries this is the close of trading (16:00:00 for the day), and for `minute` timeseries the value will be the latest minute from the `self.df_minute` `pandas.DataFrame`. Normally this value should be set to `self.use_minute`.

* `create_history_dataset` ()

Create the `Trading History` dataset during the `self.publish_trade_history_dataset()` member method.

* `create_report_dataset` ()

Create the report dataset during the `self.publish_report_dataset()` member method. Inherited Algorithm classes can derive how they build a custom dataset before publishing by implementing this method in the derived class.

* `create_sell_order` (ticker, row, minute=None, shares=None, reason=None, orient='records', date_format='iso', is_live_trading=False)

Parameters:

* ticker – string ticker
* shares – optional - integer number of shares to sell; if `None`, sell all owned shares at the `close`
* row – `pandas.DataFrame` row record that will be converted to a json-serialized string
* minute – optional - string datetime for the minute the order was placed, with the same semantics as in `create_buy_order`; normally this value should be set to `self.use_minute`

* `determine_indicator_datasets` ()

Indicators are coupled to a dataset in the algorithm config file. This allows identifying the exact datasets to pull from Redis to speed up backtesting.

* `get_indicator_process_last_indicator` ()

Used to pull the indicator object back up to any created objects.

Tip: this is for debugging data and code issues inside an indicator.

* `get_indicator_processor` (existing_processor=None)

Singleton for getting the indicator processor.

Parameters:

* existing_processor – allow derived algos to build their own indicator processor and pass it to the base

* `get_supported_tickers_in_data` (data)

For all updates found in `data`, compare to the supported list of `self.tickers` to make sure the updates are relevant for this algorithm.

Parameters:

* data – new data stream to process in this algo

* `get_ticker_positions` (ticker)

Get the current positions for a ticker; returns a tuple: `num_owned (integer), buys (list), sells (list)`

> num_owned, buys, sells = self.get_ticker_positions(
>     ticker=ticker)

Parameters:

* ticker – ticker to lookup

* `get_trade_history_node` ()

Helper for quickly building a history node on a derived algorithm. Whatever member variables are in the base class will be added automatically into the returned `historical transaction dictionary`. If you get a `None` back, it means there could be a bug in how you are using the member variables (likely an invalid math calculation was created), or a bug in the helper: build_trade_history_entry.

* `handle_daily_dataset` (algo_id, ticker, node)

* `handle_data` (data)

Process new data for the algorithm using a multi-ticker mapping structure.

Parameters:

* data – dictionary of extracted data from the redis pipeline with a structure:

> ticker = 'SPY'
> # string usually: YYYY-MM-DD
> date = '2018-11-05'
> # redis cache key for the dataset format: <ticker>_<date>
> dataset_id = f'{ticker}_{date}'
> dataset = {
>     ticker: [
>         {
>             'id': dataset_id,
>             'date': date,
>             'data': {
>                 'daily': pd.DataFrame([]),
>                 'minute': pd.DataFrame([]),
>                 'quote': pd.DataFrame([]),
>                 'stats': pd.DataFrame([]),
>                 'peers': pd.DataFrame([]),
>                 'news1': pd.DataFrame([]),
>                 'financials': pd.DataFrame([]),
>                 'earnings': pd.DataFrame([]),
>                 'dividends': pd.DataFrame([]),
>                 'calls': pd.DataFrame([]),
>                 'puts': pd.DataFrame([]),
>                 'pricing': pd.DataFrame([]),
>                 'news': pd.DataFrame([])
>             }
>         }
>     ]
> }

* `handle_minute_dataset` (algo_id, ticker, node, start_row=0)

Parameters:

* algo_id – string - algo identifier label for debugging datasets during specific dates
* ticker – string - ticker
* node – dataset to process
* start_row – start row, default is `0`

Use this method inside of an algorithm's `process()` method to view the available datasets in the redis cache.

Handler for loading custom datasets for indicators:

Custom datasets allow indicators to analyze more than the default pricing data provided by `IEX Cloud` and `Tradier`. This is helpful for building indicators to analyze and train AI from a previous algorithm `Trading History`.
* `load_from_config` (config_dict)

Support for replaying algorithms from a trading history.

Parameters:

* config_dict – algorithm configuration values, usually from a previous trading history or for quickly testing dataset theories in a development environment

* `load_from_dataset` (ds_data)

Load the member variables from the extracted `ds_data` dataset. Algorithms automatically provide the following member variables to `myalgo.process()` for quickly building algorithms. If a key is not in the dataset, the algorithm's member variable will be an empty `pandas.DataFrame([])`. Please ensure the engine cached the dataset in redis, using a tool like `redis-cli` to verify the values are in memory.

Parameters:

* ds_data – extracted, structured dataset from redis

* `load_from_external_source` (path_to_file=None, s3_bucket=None, s3_key=None, redis_key=None)

Load an algorithm-ready dataset for `handle_data` backtesting and trade performance analysis from:

* a local file
* S3
* Redis

Parameters:

* path_to_file – optional - path to a local file
* s3_bucket – optional - s3 bucket
* s3_key – optional - s3 key
* redis_key – optional - redis key

* `plot_trading_history_with_balance` ()

This will live-plot the trading history after each day is done.

* `populate_intraday_events_dict` (start_min, end_min)

For tracking intraday buy/sell/news events with indicators, use this method to build a dictionary where the keys are the minutes between `start_min` and `end_min`. If both are `None`, the range is derived from `self.df_minute`.

Parameters:

* start_min – start datetime for building the `self.intraday_events` dictionary keys
* end_min – end datetime for building the `self.intraday_events` dictionary keys

* `prepare_for_new_indicator_run` ()

Call this for non-daily datasets, specifically if the algorithm is using the `minute` timeseries.

* `process` (algo_id, ticker, dataset)

`dataset` is a dictionary of identifiers (for debugging) and multiple `pandas.DataFrame` objects, where keys represent a label from one of the data sources (`IEX Cloud` or `Tradier`). The supported dataset structure for the process method is shown above under `handle_data`.

* `publish_input_dataset` ()
* `publish_report_dataset` ()
* `publish_trade_history_dataset` ()

* `record_trade_history_for_dataset` (node)

Build a daily or minute-by-minute trading history. To run an algorithm minute-by-minute, set the configuration to use:

> 'timeseries': 'minute'

Parameters:

* node – cached dataset dictionary node

* `reset_for_next_run` ()

Work in progress - clean up all internal member variables for another run; random or probabilistic predictions may not create the same trading history output file.

* `sell_reason` = None

Note: if this is run in a jupyter notebook, the plots are shown at the end of each day. Please avoid this with the command line, as the plot's window will block the algorithm until the window is closed.

* `trade_off_indicator_buy_and_sell_signals` (ticker, algo_id, reason_for_buy=None, reason_for_sell=None)

Check if the minimum number of indicators for a buy or a sell were found. If they were, commit the trade:
> if self.trade_off_num_indicators:
>     if self.num_latest_buys >= self.min_buy_indicators:
>         self.should_buy = True
>     elif self.num_latest_sells >= self.min_sell_indicators:
>         self.should_sell = True

Parameters:

* ticker – ticker symbol
* algo_id – string algo for tracking internal progress for debugging
* reason_for_buy – optional - string for tracking why the algo bought
* reason_for_sell – optional - string for tracking why the algo sold

* `view_date_dataset_records` ()

View the dataset contents for a single node - use it with the algo config_dict by setting:

> "run_this_date": <string date YYYY-MM-DD>

Custom Average Directional Index - ADX: https://www.investopedia.com/terms/a/adx.asp

`IndicatorADX` (**kwargs)

Custom Average True Range - ATR

`IndicatorATR` (**kwargs)

Custom Bollinger Bands: https://www.investopedia.com/terms/b/bollingerbands.asp

`IndicatorBollingerBands` (**kwargs)

Custom Chaikin Oscillator

`IndicatorChaikinOSC` (**kwargs)

Custom Chaikin

`IndicatorChaikin` (**kwargs)

Helper for setting up algorithm configs for this indicator and programmatically setting the values based off the domain rules:

> from analysis_engine.indicators.chaikin import IndicatorChaikin
> ind = IndicatorChaikin(config_dict={
>     'verbose': True
> }).get_configurables()

Parameters:

* kwargs – keyword args dictionary

Custom Exponential Moving Average: https://www.investopedia.com/terms/e/ema.asp

`IndicatorEMA` (**kwargs)

Custom Moving Average Convergence Divergence - MACD

`IndicatorMACD` (**kwargs)

Custom Money Flow Index - MFI: https://www.investopedia.com/terms/m/mfi.asp

`IndicatorMFI` (**kwargs)

Custom Momentum - MOM

`IndicatorMOM` (**kwargs)

Custom Normalized Average True Range - NATR

`IndicatorNATR` (**kwargs)

Custom On Balance Volume: https://www.investopedia.com/terms/o/onbalancevolume.asp

`IndicatorOnBalanceVolume` (**kwargs)

Custom Price Rate of Change - ROC: https://www.investopedia.com/terms/p/pricerateofchange.asp

Custom Relative Strength Index - RSI: https://www.investopedia.com/terms/r/rsi.asp

`IndicatorRSI` (**kwargs)

Custom Stochastics - STOCHF: https://www.investopedia.com/terms/s/stochasticoscillator.asp

`IndicatorSTOCHF` (**kwargs)

Custom Stochastics - STOCH

`IndicatorSTOCH` (**kwargs)

Custom True Range - TRANGE

`IndicatorTRANGE` (**kwargs)

`IndicatorWilliamsROpen` (**kwargs)

`IndicatorWilliamsR` (**kwargs)

Helper for setting up algorithm configs for this indicator and programmatically setting the values based off the domain rules:

> from analysis_engine.indicators.williamsr import IndicatorWilliamsR
> ind = IndicatorWilliamsR(config_dict={
>     'verbose': True
> }).get_configurables()

Parameters:

* kwargs – keyword args dictionary

Custom Weighted Moving Average: https://www.investopedia.com/articles/technical/060401.asp

`IndicatorWMA` (**kwargs)

Helper for setting up algorithm configs for this indicator and programmatically setting the values based off the domain rules:

> from analysis_engine.indicators.wma import IndicatorWMA
> ind = IndicatorWMA(config_dict={
>     'verbose': True
> }).get_configurables()

Parameters:

* kwargs – keyword args dictionary

## V1 Indicator Examples

`ExampleIndicatorWilliamsR`

`ExampleIndicatorWilliamsROpen`

## Indicator Utilities

Algo data helper for mapping an indicator category to an integer label value for downstream dataset predictions:
```
analysis_engine.indicators.get_category_as_int.get_category_as_int(node, label=None)
```

Helper for converting feature labels to numeric values.

Parameters:

* node – convert the dictionary's `category` string to the integer mapped value

Indicator Processor

* v1 Indicator type: `supported` - binary decision support on buys and sells; this is like an alert threshold that is `on` or `off`
* v2 Indicator type: `not supported` - support for a buy or sell value range; this is like an alert threshold between a `lower` and an `upper` bound

```
analysis_engine.indicators.indicator_processor.IndicatorProcessor(config_dict, config_file=None, ticker=None, label=None, verbose=False, verbose_indicators=False)
```

* `build_indicators_for_config` (config_dict)

Convert the dictionary into an internal dictionary for quickly processing results.

Parameters:

* config_dict – initialized algorithm config dictionary

* `get_latest_report` (algo_id=None, ticker=None, dataset=None)

Return the latest report; this method can be customized by a class derived from the `IndicatorProcessor`.

Helper for loading derived Indicators from a local module file:

```
analysis_engine.indicators.load_indicator_from_module.load_indicator_from_module(module_name, ind_dict, path_to_module=None, log_label=None, base_class_module_name='BaseIndicator', verbose=False)
```

Load a custom indicator from a file.

Parameters:

* module_name – string name of the indicator module used to load the module
* path_to_module – optional - path to the custom indicator file (defaults to the `analysis_engine.indicators.base_indicator.BaseIndicator`, or the `ind_dict['module_path']` value if set)
* ind_dict – dictionary of keyword arguments to pass to the newly created derived Indicator's constructor
* log_label – optional - log tracking label for helping to find this indicator's logs (if not set, the default name is the `module_name` string value)
* base_class_module_name – optional - string name for using a non-standard indicator base class
* verbose – optional - bool for more logging (default is `False`)
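A hedged sketch of loading the documented example WilliamsR indicator file with the signature above; whether `module_name` expects the module or the class name is not confirmed by this extract, and the `ind_dict` keys are borrowed from the `indicators` entries in the algo config shown earlier:

```python
import analysis_engine.indicators.load_indicator_from_module as load_ind

# assumed module_name value and minimal ind_dict
ind = load_ind.load_indicator_from_module(
    module_name='ExampleIndicatorWilliamsR',
    path_to_module=(
        'analysis_engine/mocks/example_indicator_williamsr.py'),
    ind_dict={
        'name': 'willr_-70_-30',
        'module_path': (
            'analysis_engine/mocks/example_indicator_williamsr.py'),
        'category': 'technical',
        'type': 'momentum',
        'uses_data': 'minute'},
    verbose=True)
```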
Base Indicator Class for deriving your own indicators to use within an `analysis_engine.indicators.indicator_processor.IndicatorProcessor`

`analysis_engine.indicators.base_indicator.BaseIndicator(config_dict, path_to_module=None, name=None, verbose=False)`

`build_base_configurables(ind_type='momentum', category='technical', uses_data='minute', version=1)`

Parameters:

* ind_type – string indicator type
* category – string indicator category
* uses_data – string for the type of data the indicator uses
* version – integer for building configurables for the testing generation version

`build_configurable_node(name, conf_type, current_value=None, default_value=None, max_value=None, min_value=None, is_output_only=False, inc_interval=None, notes=None, **kwargs)`

Helper for building a single configurable type node for programmatically creating algo configs

Parameters:

* name – name of the member configurable
* conf_type – string - configurable type
* current_value – optional - current value
* default_value – optional - default value
* max_value – optional - maximum value
* min_value – optional - minimum value
* is_output_only – optional - bool for setting the input parameter as an output-only value (default is `False`)
* inc_interval – optional - float value for controlling how the tests should increment while walking between the `min_value` and the `max_value`
* notes – optional - string notes
* kwargs – optional - derived keyword args dictionary

`convert_config_keys_to_members`

This converts any key in the config to a member variable that can be used with your derived indicators like: `self.<KEY_IN_CONFIG>`

Derive this in your indicators. This is used as a helper for setting up algorithm configs for this indicator and to programmatically set the values based off the domain rules

Parameters:

* kwargs – optional keyword args

`get_dataset_by_name(dataset, dataset_name)`

Method for getting just a dataset by the `dataset_name` inside the cached `dataset['data']` dictionary of `pd.DataFrame`(s)

`get_report(verbose=False)`

Get the indicator's current output node that is used for the trading performance report generated at the end of the algorithm. The report dict should mostly be numeric types to enable AI predictions after removing non-numeric columns.

Parameters:

* verbose – optional - boolean for toggling to show the report

`get_subscribed_dataset(dataset, dataset_name=None)`

Method for getting just the subscribed dataset, else use the `dataset_name` argument dataset

`handle_subscribed_dataset`

Filter the algorithm's `dataset` to just the dataset the indicator is set up to use, as defined by the member variable:

* `self.name_of_df` - string value like `daily`, `minute`

`lg(msg, level=20)`

Log only if the indicator has `self.verbose` set to `True` or if the level is `logging.CRITICAL` or `logging.ERROR`; otherwise no logs

Parameters:

* msg – string message to log
* level – set the logging level (default is `logging.INFO`)

`reset_internals(**kwargs)`

Support a cleanup action before indicators run between datasets. Derived classes can implement custom cleanup actions that need to run before each call is run on the next cached dataset

Parameters:

* kwargs – keyword args dictionary
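Here is a hedged sketch of the `build_configurable_node` helper above describing one tunable member; the indicator instance `ind`, the member name `buy_below`, and the `'int'` conf_type value are assumptions for illustration:

```
# build one configurable node so backtest tooling can walk the
# value range between min_value and max_value in inc_interval steps
node = ind.build_configurable_node(
    name='buy_below',          # hypothetical member name
    conf_type='int',           # assumed configurable type string
    current_value=20,
    default_value=20,
    min_value=10,
    max_value=30,
    inc_interval=1.0,
    notes='trigger a buy when the indicator drops below this value')
```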
Build a single indicator for an algorithm

`analysis_engine.indicators.build_indicator_node.build_indicator_node(node, label=None)`

Parse a dictionary in the algorithm config `indicators` list and return a dictionary

Supported values found in: analysis_engine/consts.py

Parameters:

* node – single dictionary from the config's `indicators` list
* label – optional - string log tracking this class in the logs (usually just the algo name is good enough to help debug issues when running distributed)

Returns: dictionary

The analysis engine includes a wrapper for talib. This wrapper imports with:

```
import analysis_engine.ae_talib as ae_talib
```

Use this wrapper if you want to run unittests that need to access talib functions. This approach is required because not all testing platforms support installing talib. If `import talib` fails, then the `analysis_engine.mocks.mock_talib` module is loaded as `talib` instead. This wrapper provides lightweight functions that are compatible with python mocks and replicate the functionality of `talib`.

TA-Lib wrappers

`BBANDS(close, timeperiod=5, nbdevup=2, nbdevdn=2, matype=0, verbose=False)`

Wrapper for ta.BBANDS for running unittests on ci/cd tools that do not provide talib

> (upperband, middleband, lowerband) = BBANDS(
>     close, timeperiod=5, nbdevup=2, nbdevdn=2, matype=0)

Returns: upperband, middleband, lowerband

Parameters:

* close – close prices
* timeperiod – number of values (default is `5`)
* nbdevup – float - standard deviation to set the upper band (default is `2`)
* nbdevdn – float - standard deviation to set the lower band (default is `2`)
* matype – moving average type (default is `0` simple moving average)
* verbose – show logs

`EMA(close, timeperiod=30, verbose=False)`

`WMA(close, timeperiod=30, verbose=False)`

`ADX(high=None, low=None, close=None, timeperiod=14, verbose=False)`

`MACD(close=None, fast_period=12, slow_period=26, signal_period=9, verbose=False)`

Wrapper for ta.MACD for running unittests on ci/cd tools that do not provide talib

> (macd, macdsignal, macdhist) = MACD(
>     close, fastperiod=12, slowperiod=26, signalperiod=9)

Parameters:

* close – list of close values
* fast_period – integer fast line
* slow_period – integer slow line
* signal_period – integer signal line
* verbose – show logs

`MFI(high=None, low=None, close=None, volume=None, timeperiod=None, verbose=False)`

`MOM(close=None, timeperiod=None, verbose=False)`

`ROC(close=None, timeperiod=None, verbose=False)`

`RSI(close=None, timeperiod=None, verbose=False)`

`STOCH(high=None, low=None, close=None, fastk_period=None, slowk_period=None, slowk_matype=None, slowd_period=None, slowd_matype=0, verbose=False)`

Wrapper for ta.STOCH for running unittests on ci/cd tools that do not provide talib

> slowk, slowd = STOCH(
>     high, low, close, fastk_period=5, slowk_period=3,
>     slowk_matype=0, slowd_period=3, slowd_matype=0)

`STOCHF(high=None, low=None, close=None, fastk_period=None, fastd_period=None, fastd_matype=0, verbose=False)`

Wrapper for ta.STOCHF for running unittests on ci/cd tools that do not provide talib

> fastk, fastd = STOCHF(
>     high, low, close, fastk_period=5, fastd_period=3, fastd_matype=0)

`WILLR(high=None, low=None, close=None, timeperiod=None, verbose=False)`
`Chaikin(high=None, low=None, close=None, volume=None, verbose=False)`

Wrapper for ta.AD for running unittests on ci/cd tools that do not provide talib

> real = AD(
>     high, low, close, volume)

`ChaikinADOSC(high=None, low=None, close=None, volume=None, fast_period=3, slow_period=10, verbose=False)`

Wrapper for ta.ADOSC for running unittests on ci/cd tools that do not provide talib

> real = ADOSC(
>     high, low, close, volume, fastperiod=3, slowperiod=10)

`OBV(value=None, volume=None, verbose=False)`

Wrapper for ta.OBV for running unittests on ci/cd tools that do not provide talib

> real = OBV(
>     close, volume)

`ATR(high=None, low=None, close=None, timeperiod=None, verbose=False)`

`NATR(high=None, low=None, close=None, timeperiod=None, verbose=False)`

`TRANGE(high=None, low=None, close=None, verbose=False)`

Wrapper for ta.TRANGE for running unittests on ci/cd tools that do not provide talib

> real = TRANGE(
>     high, low, close)

Build a dictionary for running an algorithm

`analysis_engine.build_algo_request.build_algo_request(ticker=None, tickers=None, use_key=None, start_date=None, end_date=None, datasets=None, balance=None, commission=None, num_shares=None, config_file=None, config_dict=None, load_config=None, history_config=None, report_config=None, extract_config=None, timeseries=None, trade_strategy=None, cache_freq='daily', label='algo')`

Create a dictionary for building an algorithm. This is opinionated about how the underlying date-based caching strategy runs per day. Each business day becomes a possible dataset to process with an algorithm.

Parameters:

* ticker – ticker
* tickers – optional - list of tickers
* use_key – redis and s3 key used to store the algo result
* start_date – string date format `YYYY-MM-DD HH:MM:SS`
* end_date – string date format `YYYY-MM-DD HH:MM:SS`
* datasets – list of string dataset types
* balance – starting capital balance
* commission – commission for buy or sell
* num_shares – optional - integer number of starting shares
* cache_freq – optional - cache frequency (`daily` is default)
* label – optional - algo log tracking name
* config_file – path to a json file containing custom algorithm object member values (like indicator configuration and predict future date units ahead for a backtest)
* config_dict – optional - dictionary that can be passed to derived class implementations

Timeseries

Parameters:

* timeseries – optional - string to set `day` or `minute` backtesting or live trading (default is `minute`)

Trading Strategy

Parameters:

* trade_strategy – optional - string to set the type of `Trading Strategy` for backtesting or live trading (default is `count`)

Algorithm Dataset Extraction, Loading and Publishing arguments

Parameters:

* load_config – optional - dictionary for setting member variables to load an algorithm-ready dataset from a file, s3 or redis
* history_config – optional - dictionary for setting member variables to publish an algo `trade history` to s3, redis, a file or slack
* report_config – optional - dictionary for setting member variables to publish an algo report to s3, redis, a file or slack
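A minimal sketch of assembling a backtest request with `build_algo_request`; the import path is assumed from the module name above, and the ticker, dates, and balances are illustrative:

```
from analysis_engine.build_algo_request import build_algo_request

# build a dictionary describing a minute-timeseries backtest
req = build_algo_request(
    ticker='SPY',
    start_date='2019-02-01 09:30:00',
    end_date='2019-02-15 16:00:00',
    datasets=['minute'],
    balance=10000.00,
    commission=6.00,
    timeseries='minute',
    trade_strategy='count',
    label='spy-backtest')
```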
Helper for creating a sell order

`analysis_engine.build_sell_order`

Create an algorithm sell order as a dictionary

Helper for creating a buy order

`analysis_engine.build_buy_order`

Create an algorithm buy order as a dictionary

Helper for building an algorithm trading and performance history as a dictionary that can be reviewed during or after an algorithm finishes running

`analysis_engine.build_trade_history_entry.build_trade_history_entry(ticker, num_owned, close, balance, commission, date, trade_type, algo_start_price, original_balance, minute=None, high=None, low=None, open_val=None, volume=None, ask=None, bid=None, today_high=None, today_low=None, today_open_val=None, today_close=None, today_volume=None, stop_loss=None, trailing_stop_loss=None, buy_hold_units=None, sell_hold_units=None, spread_exp_date=None, spread_id=None, low_strike=None, low_bid=None, low_ask=None, low_volume=None, low_open_int=None, low_delta=None, low_gamma=None, low_theta=None, low_vega=None, low_rho=None, low_impl_vol=None, low_intrinsic=None, low_extrinsic=None, low_theo_price=None, low_theo_volatility=None, low_max_covered=None, low_exp_date=None, high_strike=None, high_bid=None, high_ask=None, high_volume=None, high_open_int=None, high_delta=None, high_gamma=None, high_theta=None, high_vega=None, high_rho=None, high_impl_vol=None, high_intrinsic=None, high_extrinsic=None, high_theo_price=None, high_theo_volatility=None, high_max_covered=None, high_exp_date=None, prev_balance=None, prev_num_owned=None, total_buys=None, total_sells=None, buy_triggered=None, buy_strength=None, buy_risk=None, sell_triggered=None, sell_strength=None, sell_risk=None, num_indicators_buy=None, num_indicators_sell=None, min_buy_indicators=None, min_sell_indicators=None, net_gain=None, net_value=None, ds_id=None, note=None, err=None, entry_spread_dict=None, version=1, verbose=False)`

Build a dictionary for tracking an algorithm's profitability per ticker for `TRADE_SHARES` and the other supported trading types.

Note: the minute timestamp (`YYYY-MM-DD HH:MM:SS`) is optional if the algorithm is set up to trade using a `day` value for timeseries.
Parameters:

* trade_type – type of the trade - supported values: `TRADE_SHARES` and the supported spread trade types
* algo_start_price – float starting close/contract price for this algo
* original_balance – float starting original account balance for this algo
* high – optional - float underlying stock asset `high` price
* low – optional - float underlying stock asset `low` price
* open_val – optional - float underlying stock asset `open` price
* volume – optional - integer underlying stock asset `volume`
* ask – optional - float `ask` price of the stock (for buying `shares`)
* bid – optional - float `bid` price of the stock (for selling `shares`)
* today_high – optional - float `high` from the daily dataset (if available)
* today_low – optional - float `low` from the daily dataset (if available)
* today_open_val – optional - float `open` from the daily dataset (if available)
* today_close – optional - float `close` from the daily dataset (if available)
* today_volume – optional - float `volume` from the daily dataset (if available)
* stop_loss – optional - float `stop_loss` price of the stock/spread (for selling `shares` vs `contracts`)
* trailing_stop_loss – optional - float `trailing_stop_loss` price of the stock/spread (for selling `shares` vs `contracts`)
* buy_hold_units – optional - number of units to hold buys - helps with algorithm tuning
* sell_hold_units – optional - number of units to hold sells - helps with algorithm tuning
* spread_exp_date – optional - string spread contract expiration date (`COMMON_DATE_FORMAT`, `YYYY-MM-DD`)
* spread_id – optional - spread identifier for reviewing spread performances
* low_strike – optional - only for vertical bull/bear trade types `low leg strike price` of the spread
* low_bid – optional - only for vertical bull/bear trade types `low leg bid` of the spread
* low_ask – optional - only for vertical bull/bear trade types `low leg ask` of the spread
* low_volume – optional - only for vertical bull/bear trade types `low leg volume` of the spread
* low_open_int – optional - only for vertical bull/bear trade types `low leg open interest` of the spread
* low_delta – optional - only for vertical bull/bear trade types `low leg delta` of the spread
* low_gamma – optional - only for vertical bull/bear trade types `low leg gamma` of the spread
* low_theta – optional - only for vertical bull/bear trade types `low leg theta` of the spread
* low_vega – optional - only for vertical bull/bear trade types `low leg vega` of the spread
* low_rho – optional - only for vertical bull/bear trade types `low leg rho` of the spread
* low_impl_vol – optional - only for vertical bull/bear trade types `low leg implied volatility` of the spread
* low_intrinsic – optional - only for vertical bull/bear trade types `low leg intrinsic` of the spread
* low_extrinsic – optional - only for vertical bull/bear trade types `low leg extrinsic` of the spread
* low_theo_price – optional - only for vertical bull/bear trade types `low leg theoretical price` of the spread
* low_theo_volatility – optional - only for vertical bull/bear trade types `low leg theoretical volatility` of the spread
* low_max_covered – optional - only for vertical bull/bear trade types `low leg max covered returns` of the spread
* low_exp_date – optional - only for vertical bull/bear trade types `low leg expiration date` of the spread
* high_strike – optional - only for vertical bull/bear trade types `high leg strike price` of the spread
* high_bid – optional - only for vertical bull/bear trade types `high leg bid` of the spread
* high_ask – optional - only for vertical bull/bear trade types `high leg ask` of the spread
* high_volume – optional - only for vertical bull/bear trade types `high leg volume` of the spread
* high_open_int – optional - only for vertical bull/bear trade types `high leg open interest` of the spread
* high_delta – optional - only for vertical bull/bear trade types `high leg delta` of the spread
* high_gamma – optional - only for vertical bull/bear trade types `high leg gamma` of the spread
* high_theta – optional - only for vertical bull/bear trade types `high leg theta` of the spread
* high_vega – optional - only for vertical bull/bear trade types `high leg vega` of the spread
* high_rho – optional - only for vertical bull/bear trade types `high leg rho` of the spread
* high_impl_vol – optional - only for vertical bull/bear trade types `high leg implied volatility` of the spread
* high_intrinsic – optional - only for vertical bull/bear trade types `high leg intrinsic` of the spread
* high_extrinsic – optional - only for vertical bull/bear trade types `high leg extrinsic` of the spread
* high_theo_price – optional - only for vertical bull/bear trade types `high leg theoretical price` of the spread
* high_theo_volatility – optional - only for vertical bull/bear trade types `high leg theoretical volatility` of the spread
* high_max_covered – optional - only for vertical bull/bear trade types `high leg max covered returns` of the spread
* high_exp_date – optional - only for vertical bull/bear trade types `high leg expiration date` of the spread
* prev_balance – optional - previous balance for this algo
* prev_num_owned – optional - previous num of `shares` or `contracts`
* total_buys – optional - total buy orders for this algo
* total_sells – optional - total sell orders for this algo
* buy_triggered – optional - bool `buy` conditions in the algorithm triggered
* buy_strength – optional - float custom strength/confidence rating for tuning algorithm performance for desirable sensitivity and specificity
* buy_risk – optional - float custom risk rating for tuning algorithm performance for avoiding custom risk for buy conditions
* sell_triggered – optional - bool `sell` conditions in the algorithm triggered
* sell_strength – optional - float custom strength/confidence rating for tuning algorithm performance for desirable sensitivity and specificity
* sell_risk – optional - float custom risk rating for tuning algorithm performance for avoiding custom risk for sell conditions
* num_indicators_buy – optional - integer number of indicators the `IndicatorProcessor` processed and said to `buy` an asset
* num_indicators_sell – optional - integer number of indicators the `IndicatorProcessor` processed and said to `sell` an asset
* min_buy_indicators – optional - integer minimum number of indicators required to trigger a `buy` order
* min_sell_indicators – optional - integer minimum number of indicators required to trigger a `sell` order
* net_value – optional - float total value the algorithm has left remaining since starting trading. This includes the number of `self.num_owned` shares with the `self.latest_close` price included
* net_gain – optional - float amount the algorithm has made since starting, including owned shares with the `self.latest_close` price included
* ds_id – optional - dataset id for debugging
* note – optional - string for tracking high-level testing notes on algorithm indicator ratings and internal message passing during an algorithm's `self.process` method
* err – optional - string for tracking errors
* entry_spread_dict – optional - on exit spreads the calculation of net gain can use the entry spread to determine specific performance metrics (work in progress)
* version – optional - version tracking order history
* verbose – optional - bool log each history node (default is `False`)
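A minimal sketch of recording one shares-trade history entry; the import path and the assumption that `TRADE_SHARES` lives in `analysis_engine.consts` are inferred from the docs above, and all prices are illustrative:

```
import analysis_engine.consts as ae_consts
from analysis_engine.build_trade_history_entry import (
    build_trade_history_entry)

history_entry = build_trade_history_entry(
    ticker='SPY',
    num_owned=10,
    close=288.09,
    balance=7119.10,
    commission=6.00,
    date='2019-02-15 15:59:00',
    trade_type=ae_consts.TRADE_SHARES,  # assumed consts location
    algo_start_price=286.13,
    original_balance=10000.00,
    net_gain=19.60,
    note='example entry')
```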
Build option spread pricing details

`analysis_engine.build_option_spread_details.build_option_spread_details(trade_type, spread_type, option_type, close, num_contracts, low_strike, low_ask, low_bid, high_strike, high_ask, high_bid)`

Calculate pricing information for supported spreads including `max loss`, `max profit`, and `mid price` (break even coming soon)

Parameters:

* trade_type – entry (`TRADE_ENTRY`) or exit (`TRADE_EXIT`) of a spread position
* spread_type – vertical bull (`SPREAD_VERTICAL_BULL`) and vertical bear (`SPREAD_VERTICAL_BEAR`) are the only supported calculations for now
* option_type – call (`OPTION_CALL`) or put (`OPTION_PUT`)
* close – closing price of the underlying asset
* num_contracts – integer number of contracts
* low_strike – float - strike for the low leg of the spread
* low_ask – float - ask price for the low leg of the spread
* low_bid – float - bid price for the low leg of the spread
* high_strike – float - strike for the high leg of the spread
* high_ask – float - ask price for the high leg of the spread
* high_bid – float - bid price for the high leg of the spread
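A hedged sketch of pricing a vertical bull call spread entry; the import path and the assumption that the `TRADE_ENTRY`, `SPREAD_VERTICAL_BULL`, and `OPTION_CALL` constants live in `analysis_engine.consts` are inferred from the surrounding docs, and the quotes are illustrative:

```
import analysis_engine.consts as ae_consts
from analysis_engine.build_option_spread_details import (
    build_option_spread_details)

spread_details = build_option_spread_details(
    trade_type=ae_consts.TRADE_ENTRY,           # assumed consts location
    spread_type=ae_consts.SPREAD_VERTICAL_BULL,
    option_type=ae_consts.OPTION_CALL,
    close=287.60,        # underlying close
    num_contracts=1,
    low_strike=286.00,   # low leg
    low_ask=3.10,
    low_bid=3.00,
    high_strike=288.00,  # high leg
    high_ask=1.70,
    high_bid=1.60)
```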
The following notebooks, script and modules are guides for building KerasRegressor models, deep neural networks (dnn), for trying to predict a stock's future closing price from a `Trading History` dataset. The tools use a `Trading History` dataset that was created and automatically published to S3 after a trading algorithm's backtest processed custom indicators analyzing intraday minute-by-minute pricing data stored in redis. If you do not have a `Trading History`, you can create one by running a backtest, and you can run it distributed across the engine's workers with `-w`.

Here are examples of training a dnn using a `Trading History` from S3 (Minio or AWS):

# AI - Building a Deep Neural Network Helper Module

This function is used as a Keras Scikit-Learn Builder Function for creating a Keras Sequential deep neural network model (dnn). This function is passed in as the build_fn argument to create a KerasRegressor (or KerasClassifier).

Build a deep neural network for regression predictions

`analysis_engine.ai.build_regression_dnn.build_regression_dnn(num_features, compile_config, model_json=None, model_config=None)`

Parameters:

* num_features – input_dim for the number of features in the data
* compile_config – dictionary of compile options
* model_json – keras model json to build the model
* model_config – optional dictionary for the model

# AI - Training Dataset Helper Modules

These modules are included to help build new training datasets. It looks like Read the Docs does not support keras, sklearn or tensorflow for generating sphinx docs, so here are links to the repository's source code:

Build scaler-normalized train and test datasets from a `pandas.DataFrame` (like a `Trading History` stored in s3)

Note: this function will create multiple copies of the data, so this is a memory-intensive call which may overflow the available memory on a machine if there are many rows.

`analysis_engine.ai.build_datasets_using_scalers.build_datasets_using_scalers(train_features, test_feature, df, test_size, seed, min_feature=-1, max_feature=1)`

Build train and test datasets using a MinMaxScaler for normalizing a dataset before training a deep neural network.

Here's the returned dictionary:

> res = {
>     'status': status,
>     'scaled_train_df': scaled_train_df,
>     'scaled_test_df': scaled_test_df,
>     'scaler_train': scaler_train,
>     'scaler_test': scaler_test,
>     'x_train': x_train,
>     'y_train': y_train,
>     'x_test': x_test,
>     'y_test': y_test,
> }

Parameters:

* train_features – list of strings with all columns (features) to train
* test_feature – string name of the column to predict. This is a single column name in the `df` (which is a `pandas.DataFrame`).
* df – dataframe to build the scaler test and train datasets from
* test_size – percent of rows to use for the test split vs the train split
* min_feature – min scaler range with default `-1`
* max_feature – max scaler range with default `1`

Build a scaler-normalized `pandas.DataFrame` from an existing `pandas.DataFrame`

`analysis_engine.ai.build_scaler_dataset_from_df.build_scaler_dataset_from_df(df, min_feature=-1, max_feature=1)`

Helper for building scaler datasets from an existing `pandas.DataFrame`; returns a dictionary:

> return {
>     'status': status,  # NOT_RUN | SUCCESS | ERR
>     'scaler': scaler,  # MinMaxScaler
>     'df': df           # scaled df from df arg
> }

Parameters:

* df – `pandas.DataFrame` to convert to scalers
* min_feature – min feature range for scaler normalization with default `-1`
* max_feature – max feature range for scaler normalization with default `1`

# AI - Plot Deep Neural Network Fit History

Plot a deep neural network's history output after training. Please check out this blog post for more information on how this works.

`analysis_engine.ai.plot_dnn_fit_history.plot_dnn_fit_history(title, df, red, red_color=None, red_label=None, blue=None, blue_color=None, blue_label=None, green=None, green_color=None, green_label=None, orange=None, orange_color=None, orange_label=None, xlabel='Training Epochs', ylabel='Error Values', linestyle='-', width=9.0, height=9.0, date_format='%d\n%b', df_filter=None, start_date=None, footnote_text=None, footnote_xpos=0.7, footnote_ypos=0.01, footnote_color='#888888', footnote_fontsize=8, scale_y=False, show_plot=True, dropna_for_all=False, verbose=False, send_plots_to_slack=False)`

Plot a DNN's fit history using the Keras fit history object

* Example Minute Intraday Algorithm
* Running Distributed Algorithm Backtesting and Live Trading
* Run an Algorithm
* Build Custom Algorithms Using the Base Algorithm Class
* Build an Algorithm Request Dictionary
* Build a Sell Order
* Build a Buy Order
* Build Trade History
* Calculate Bull Call Entry Pricing
* Calculate Bull Call Exit Pricing
* Calculate Bear Put Entry Pricing
* Calculate Bear Put Exit Pricing
* Calculate Option Pricing
* Load an Algorithm-Ready Dataset in a File for a Backtest
* Load an Algorithm-Ready Dataset in S3 for a Backtest
* Load an Algorithm-Ready Dataset in Redis for a Backtest
* Prepare a Ready-Dataset for a Backtest
Helper for converting a dictionary to an algorithm-ready dataset

`analysis_engine.prepare_dict_for_algo.prepare_dict_for_algo(data, compress=False, encoding='utf-8', convert_to_dict=False, dataset_names=None)`

Parameters:

* data – string holding the contents of an algorithm-ready file, s3 key or redis key
* compress – optional - boolean flag for decompressing the contents of the `data` if necessary (default is `False`; algorithms use `zlib` for compression)
* convert_to_dict – optional - bool; for s3 use `False` and for files use `True`
* encoding – optional - string for data encoding
* dataset_names – optional - list of string keys for each dataset node in: `dataset[ticker][0]['data'][dataset_names[0]]`

## Load Algorithm Ready Dataset

`analysis_engine.load_dataset.load_dataset` will load a dataset from a file, s3 or redis.

Load an algorithm dataset from file, s3 or redis - Algorithm-ready datasets

Supported environment variables

`analysis_engine.load_dataset.load_dataset(algo_dataset=None, dataset_type=20000, serialize_datasets=['daily', 'minute', 'quote', 'stats', 'peers', 'news1', 'financials', 'earnings', 'dividends', 'company', 'news', 'calls', 'puts', 'pricing', 'tdcalls', 'tdputs'], path_to_file=None, compress=False, encoding='utf-8', redis_enabled=True, redis_key=None, redis_address=None, redis_db=None, redis_password=None, redis_expire=None, redis_serializer='json', redis_encoding='utf-8', s3_enabled=True, s3_key=None, s3_address=None, s3_bucket=None, s3_access_key=None, s3_secret_key=None, s3_region_name=None, s3_secure=False, slack_enabled=False, slack_code_block=False, slack_full_width=False, verbose=False)`

Load an algorithm dataset from file, s3 or redis
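A short sketch of loading an algorithm-ready dataset from a local file with redis and s3 disabled; the file path is hypothetical:

```
import analysis_engine.load_dataset as load_ds

algo_dataset = load_ds.load_dataset(
    path_to_file='/tmp/SPY-latest.json',  # hypothetical file
    redis_enabled=False,
    s3_enabled=False)
```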
## Load Trading History Dataset

Load a `Trading History` dataset from file or s3 - redis coming soon

Supported Datasets:

* `SA_DATASET_TYPE_TRADING_HISTORY` - trading history datasets

`analysis_engine.load_history_dataset.load_history_dataset(history_dataset=None, dataset_type=None, serialize_datasets=None, path_to_file=None, compress=None, encoding='utf-8', convert_to_dict=False, redis_enabled=None, redis_key=None, redis_address=None, redis_db=None, redis_password=None, redis_expire=None, redis_serializer='json', redis_encoding='utf-8', s3_enabled=None, s3_key=None, s3_address=None, s3_bucket=None, s3_access_key=None, s3_secret_key=None, s3_region_name=None, s3_secure=None, slack_enabled=False, slack_code_block=False, slack_full_width=False, verbose=False)`

Load a `Trading History` dataset from file or s3 - note redis is not supported yet

Parameters:

* dataset_type – optional - dataset type (default is `analysis_engine.consts.SA_DATASET_TYPE_TRADING_HISTORY`)
* path_to_file – optional - path to a trading history dataset in a file
* serialize_datasets – optional - list of dataset names to deserialize in the dataset
* compress – optional - boolean flag for decompressing the contents of the `path_to_file` if necessary (default is `True` and uses `zlib` for compression)
* encoding – optional - string for data encoding
* convert_to_dict – optional - boolean flag for decoding as a dictionary during prepare

(Optional) Redis connectivity arguments

Parameters:

* redis_enabled – bool - toggle for using redis (default is `analysis_engine.consts.ENABLED_REDIS_PUBLISH`)
* redis_key – string - key to save the data in redis (default is `None`)
* redis_address – Redis connection string format: `host:port` (default is `analysis_engine.consts.REDIS_ADDRESS`)
* redis_db – Redis db to use (default is `analysis_engine.consts.REDIS_DB`)
* redis_password – optional - Redis password (default is `analysis_engine.consts.REDIS_PASSWORD`)
* redis_expire – optional - Redis expire value (default is `None`)
* redis_serializer – not used yet - support for future pickle objects in redis (default is `json`)
* redis_encoding – format of the encoded key in redis (default is `utf-8`)

(Optional) Minio (S3) connectivity arguments

Parameters:

* s3_enabled – bool - toggle for auto-archiving on Minio (S3) (default is `analysis_engine.consts.ENABLED_S3_UPLOAD`)
* s3_key – string - key to save the data in s3 (default is `None`)
* s3_address – Minio S3 connection string format: `host:port` (default is `analysis_engine.consts.S3_ADDRESS`)
* s3_bucket – S3 bucket for storing the artifacts (default is `analysis_engine.consts.S3_BUCKET`) which should be viewable on a browser: http://localhost:9000/minio/
* s3_access_key – S3 Access key (default is `analysis_engine.consts.S3_ACCESS_KEY`)
* s3_secret_key – S3 Secret key (default is `analysis_engine.consts.S3_SECRET_KEY`)
* s3_region_name – S3 region name (default is `analysis_engine.consts.S3_REGION_NAME`)
* s3_secure – Transmit using tls encryption (default is `analysis_engine.consts.S3_SECURE`)

(Optional) Slack arguments
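A sketch of loading a `Trading History` from a local file with redis and s3 disabled; the path is hypothetical and the import path mirrors the module shown above:

```
import analysis_engine.load_history_dataset as load_history

history = load_history.load_history_dataset(
    path_to_file='/tmp/SPY-history.json',  # hypothetical file
    compress=False,
    redis_enabled=False,
    s3_enabled=False)
```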
## Load Trading History Dataset from S3

Helper for loading `Trading History` datasets from s3

`analysis_engine.load_history_dataset_from_s3.load_history_dataset_from_s3(s3_key, s3_address, s3_bucket, s3_access_key, s3_secret_key, s3_region_name, s3_secure, serialize_datasets=['daily', 'minute', 'quote', 'stats', 'peers', 'news1', 'financials', 'earnings', 'dividends', 'company', 'news', 'calls', 'puts', 'pricing', 'tdcalls', 'tdputs'], convert_as_json=True, convert_to_dict=False, compress=False, encoding='utf-8')`

Load a `Trading History` dataset for algorithm backtesting from s3

Parameters:

* serialize_datasets – optional - list of dataset names to deserialize in the dataset
* convert_as_json – optional - boolean flag for decoding as a dictionary
* convert_to_dict – optional - boolean flag for decoding as a dictionary during prepare
* compress – optional - boolean flag for decompressing the contents if necessary (default is `False`; algorithms use `zlib` for compression)
* encoding – optional - string for data encoding

## Load Trading History Dataset from a local File

Helper for loading a `Trading History` dataset from a file

Supported environment variables

`analysis_engine.load_history_dataset_from_file.load_history_dataset_from_file(path_to_file, compress=False, encoding='utf-8')`

Load a `Trading History` dataset from a local file

`analysis_engine.restore_dataset.restore_dataset` will load a dataset from a file, s3 or redis and merge any missing records back into redis. Use this to restore missing dataset values after a host goes offline, on a fresh install, after a redis server restart, or after a redis flush.

Restore an algorithm dataset from file, s3 or redis to redis, to ensure all datasets are ready for algorithmic backtesting - Algorithm-ready datasets

`analysis_engine.restore_dataset.restore_dataset(show_summary=True, force_restore=False, algo_dataset=None, dataset_type=20000, serialize_datasets=['daily', 'minute', 'quote', 'stats', 'peers', 'news1', 'financials', 'earnings', 'dividends', 'company', 'news', 'calls', 'puts', 'pricing', 'tdcalls', 'tdputs'], path_to_file=None, compress=False, encoding='utf-8', redis_enabled=True, redis_key=None, redis_address=None, redis_db=None, redis_password=None, redis_expire=None, redis_serializer='json', redis_encoding='utf-8', redis_output_db=None, s3_enabled=True, s3_key=None, s3_address=None, s3_bucket=None, s3_access_key=None, s3_secret_key=None, s3_region_name=None, s3_secure=False, slack_enabled=False, slack_code_block=False, slack_full_width=False, datasets_compressed=True, verbose=False)`

Restore missing dataset nodes in redis from an algorithm-ready dataset file on disk. Use this to restore redis from scratch.

Parameters:

* show_summary – optional - show a summary of the algorithm-ready dataset (default is `True`)
* force_restore – optional - boolean - publish whatever is in the algorithm-ready dataset into redis. If `False` this will ensure that datasets are only set in redis if they are not already set
* algo_dataset – optional - already loaded algorithm-ready dataset
* dataset_type – optional - dataset type (default is `20000`)

Additional arguments

Parameters:

* datasets_compressed – optional - boolean for publishing as compressed strings (default is `True`)
* verbose – optional - bool for increasing logging
Dataset Publishing API

`analysis_engine.publish.publish(data, label=None, convert_to_json=False, is_df=False, output_file=None, df_compress=False, compress=False, redis_enabled=True, redis_key=None, redis_address=None, redis_db=None, redis_password=None, redis_expire=None, redis_serializer='json', redis_encoding='utf-8', s3_enabled=True, s3_key=None, s3_address=None, s3_bucket=None, s3_access_key=None, s3_secret_key=None, s3_region_name=None, s3_secure=False, slack_enabled=False, slack_code_block=False, slack_full_width=False, verbose=False, silent=False, **kwargs)`

Publish `data` to multiple optional endpoints:

* a local file path (`output_file`)
* minio (`s3_bucket` and `s3_key`)
* redis (`redis_key`)
* slack

Returns: status value

Parameters:

* data – data to publish
* convert_to_json – convert `data` to a json-serialized string. This function will throw if `json.dumps(data)` fails
* is_df – convert a `pd.DataFrame` using `pd.DataFrame.to_json()` to a json-serialized string. This function will throw if `to_json()` fails
* label – log tracking label
* output_file – path to save the data to a file
* df_compress – optional - compress data that is a `pandas.DataFrame` before publishing
* compress – optional - compress before publishing (default is `False`)
* verbose – optional - boolean to log output (default is `False`)
* silent – optional - boolean for no log output (default is `False`)
* kwargs – optional - future argument support
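A sketch of publishing a dict to a local file with redis and s3 disabled; the output path and payload are hypothetical, and the import path is assumed from the module shown above:

```
import analysis_engine.publish as publisher

status = publisher.publish(
    data={'ticker': 'SPY', 'close': 287.60},  # illustrative payload
    convert_to_json=True,
    output_file='/tmp/spy-report.json',       # hypothetical path
    redis_enabled=False,
    s3_enabled=False,
    label='publish-example')
```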
Extract provides a data pipeline for analyzing stock data straight from the redis cache.

Extraction API Examples

* Extract All Data for a Ticker
* Extract Latest Minute Pricing for Stocks and Options
* Extract Historical Data

Extract historical data with the `date` argument formatted `YYYY-MM-DD`.

Additional Extraction APIs

* IEX Cloud Extraction API Reference
* Tradier Extraction API Reference

`analysis_engine.extract.extract(ticker=None, datasets=None, tickers=None, use_key=None, extract_mode='all', iex_datasets=None, date=None, redis_enabled=True, redis_address=None, redis_db=None, redis_password=None, redis_expire=None, s3_enabled=True, s3_address=None, s3_bucket=None, s3_access_key=None, s3_secret_key=None, s3_region_name=None, s3_secure=False, celery_disabled=True, broker_url=None, result_backend=None, label=None, verbose=False)`

Extract all cached datasets for a stock `ticker` or a list of `tickers` and return a dictionary. Please make sure the datasets are already cached in Redis before running this method. If not, please refer to the `analysis_engine.fetch.fetch` function to prepare the datasets on your environment.

Python example:

> from analysis_engine.extract import extract
> d = extract(ticker='NFLX')
> print(d)
> for k in d['NFLX']:
>     print(f'dataset key: {k}')

Extract Intraday Stock and Options Minute Pricing Data

This works by using the `date` and `datasets` arguments as filters:

> import analysis_engine.extract as ae_extract
> print(ae_extract.extract(
>     ticker='SPY',
>     datasets=['minute', 'tdcalls', 'tdputs']))

Cached dataset keys are formatted: `<TICKER>_<date formatted YYYY-MM-DD>`

Parameters:

* iex_datasets – list of strings for gathering specific IEX datasets which are set as consts
* date – optional - string date formatted `YYYY-MM-DD` - if not set, use the last close date
* datasets – list of strings for indicator dataset extraction - preferred method (defaults to `BACKUP_DATASETS`)

(Optional) Redis connectivity arguments

Fetch populates redis caches as a stock data pipeline. Data can be pulled at any time using: `analysis_engine.extract.extract`

Dataset Fetch API

`analysis_engine.fetch.fetch(ticker=None, tickers=None, fetch_mode=None, iex_datasets=None, redis_enabled=True, redis_address=None, redis_db=None, redis_password=None, redis_expire=None, s3_enabled=True, s3_address=None, s3_bucket=None, s3_access_key=None, s3_secret_key=None, s3_region_name=None, s3_secure=False, celery_disabled=True, broker_url=None, result_backend=None, label=None, verbose=False)`

Fetch all supported datasets for a stock `ticker` or a list of `tickers` and return a dictionary. Once run, the datasets will all be cached in Redis and archived in Minio (S3) by default.

Python example:

> from analysis_engine.fetch import fetch
> d = fetch(ticker='NFLX')
> print(d)
> for k in d['NFLX']:
>     print(f'dataset key: {k}')

By default, it synchronously automates:

* fetching all datasets
* caching all datasets in Redis
* archiving all datasets in Minio (S3)
* returning all datasets in a single dictionary

Stock tickers to fetch

Parameters:

* ticker – single stock ticker/symbol/ETF to fetch
* tickers – optional - list of tickers to fetch
* fetch_mode – data sources - default is `all` (both IEX and Yahoo), `iex` for only IEX, `yahoo` for only Yahoo
* iex_datasets – list of strings for gathering specific IEX datasets which are set as consts

Helper for compressing a `dict` or `pandas.DataFrame`

`analysis_engine.compress_data.compress_data(data, encoding='utf-8', date_format=None)`

Helper for compressing `data`, which can be either a `dict` or a `pandas.DataFrame` object, with zlib.

Parameters:

* data – `dict` or `pandas.DataFrame` object to compress
* encoding – optional encoding - default is `utf-8`
* date_format – optional date format - default is `None`
Helper for building a dictionary for the `analysis_engine.publish.publish` function

`analysis_engine.build_publish_request.build_publish_request(ticker=None, tickers=None, convert_to_json=False, output_file=None, compress=False, redis_enabled=False, redis_key=None, redis_address='localhost:6379', redis_db=0, redis_password=None, redis_expire=None, redis_serializer='json', redis_encoding='utf-8', s3_enabled=False, s3_key=None, s3_address='0.0.0.0:9000', s3_bucket='pricing', s3_access_key='trexaccesskey', s3_secret_key='trex123321', s3_region_name='us-east-1', s3_secure=False, slack_enabled=False, slack_code_block=False, slack_full_width=False, verbose=False, label='publisher')`

Build a dictionary to help quickly publish to multiple optional endpoints:

* a local file path (`output_file`)
* minio (`s3_bucket` and `s3_key`)
* redis (`redis_key`)
* slack

Parameters:

* ticker – ticker
* tickers – optional - list of tickers
* label – optional - algo log tracking name
* output_file – path to save the data to a file
* compress – optional - compress before publishing
* verbose – optional - boolean to log output
* kwargs – optional - future argument support

(Optional) Redis connectivity arguments

Parameters:

* redis_enabled – bool - toggle for using redis (default is `ENABLED_REDIS_PUBLISH`)
* redis_key – string - key to save the data in redis (default is `None`)
* redis_address – Redis connection string format: `host:port` (default is `REDIS_ADDRESS`)
* redis_db – Redis db to use (default is `REDIS_DB`)
* redis_password – optional - Redis password (default is `REDIS_PASSWORD`)
* redis_expire – optional - Redis expire value (default is `REDIS_EXPIRE`)
* redis_serializer – not used yet - support for future pickle objects in redis (default is `json`)
* redis_encoding – format of the encoded key in redis (default is `utf-8`)

(Optional) Minio (S3) connectivity arguments

Parameters:

* s3_enabled – bool - toggle for auto-archiving on Minio (S3) (default is `ENABLED_S3_UPLOAD`)
* s3_key – string - key to save the data in s3 (default is `None`)
* s3_address – Minio S3 connection string format: `host:port` (default is `S3_ADDRESS`)
* s3_bucket – S3 bucket for storing the artifacts (default is `S3_BUCKET`) which should be viewable on a browser: http://localhost:9000/minio/dev/
* s3_access_key – S3 Access key (default is `S3_ACCESS_KEY`)
* s3_secret_key – S3 Secret key (default is `S3_SECRET_KEY`)
* s3_region_name – S3 region name (default is `S3_REGION_NAME`)
* s3_secure – Transmit using tls encryption (default is `S3_SECURE`)

(Optional) Slack arguments

# Source Code

These are documents for developing and understanding how the Stock Analysis Engine works. Please refer to the repository for the latest source code examples:

# Example API Requests

Helpers and examples for supported API Requests that each Celery Task supports:

* analysis_engine.work_tasks.get_new_pricing_data
* analysis_engine.work_tasks.handle_pricing_update_task
* analysis_engine.work_tasks.publish_pricing_update

`get_ds_dict(ticker, base_key=None, ds_id=None, label=None, service_dict=None)`

Get a dictionary with all cache keys for a ticker and return the dictionary. Use this method to decouple your apps from the underlying cache key implementations (if you do not need them).

Parameters:

* ticker – ticker
* base_key – optional - base key that is prepended to all cache keys
* ds_id – optional - dataset id (useful for an external database id)
* label – optional - tracking label in the logs
* service_dict – optional - parent call functions and Celery tasks can use this dictionary to seed the common service routes and endpoints.
Refer to `analysis_engine.consts.SERVICE_VALS` for the keys automatically copied over by this helper.

`build_get_new_pricing_request`

Build a sample Celery task API request: analysis_engine.work_tasks.get_new_pricing_data

`build_publish_pricing_request`

Build a sample Celery task API request: analysis_engine.work_tasks.publish_pricing_update

`build_cache_ready_pricing_dataset`

Build a cache-ready pricing dataset to replicate the `get_new_pricing_data` task

Parameters:

* label – log label to use

`build_publish_from_s3_to_redis_request`

Build a sample Celery task API request: analysis_engine.work_tasks.publish_from_s3_to_redis

`build_prepare_dataset_request`

Build a sample Celery task API request: analysis_engine.work_tasks.prepare_pricing_dataset

`build_analyze_dataset_request`

Build a sample Celery task API request: analysis_engine.work_tasks.analyze_pricing_dataset

`build_screener_analysis_request(ticker=None, tickers=None, fv_urls=None, fetch_mode='iex', iex_datasets=['daily', 'minute', 'quote', 'stats', 'peers', 'news', 'financials', 'earnings', 'dividends', 'company'], determine_sells=None, determine_buys=None, label='screener')`

Build a dictionary request for the task: `analysis_engine.work_tasks.run_screener_analysis`

Parameters:

* ticker – ticker to add to the analysis
* tickers – tickers to add to the analysis
* fv_urls – finviz urls
* fetch_mode – supports pulling from `iex`, `yahoo`, `all` (defaults to `iex`)
* iex_datasets – datasets to fetch from `iex` (defaults to `analysis_engine.consts.IEX_DATASETS_DEFAULT`)
* determine_sells – string custom Celery task name for handling sell-side processing
* determine_buys – string custom Celery task name for handling buy-side processing
* label – log tracking label

Returns: initial request dictionary:

> req = {
>     'tickers': use_tickers,
>     'fv_urls': use_urls,
>     'fetch_mode': fetch_mode,
>     'iex_datasets': iex_datasets,
>     's3_bucket': s3_bucket_name,
>     's3_enabled': s3_enabled,
>     'redis_enabled': redis_enabled,
>     'determine_sells': determine_sells,
>     'determine_buys': determine_buys,
>     'label': label
> }

# Read from S3 as a String

Wrapper for downloading an S3 key as a string

`analysis_engine.s3_read_contents_from_key.s3_read_contents_from_key(s3, s3_bucket_name, s3_key, encoding='utf-8', convert_as_json=True, compress=False)`

Download the S3 key contents as a string. This will raise exceptions.

Parameters:

* s3 – existing S3 object
* s3_bucket_name – bucket name
* s3_key – S3 key
* encoding – utf-8 by default
* convert_as_json – auto-convert to a dict
* compress – decompress using `zlib`
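A hedged sketch of reading an S3 key as a dict. The endpoint and the demo credentials come from the consts defaults below; whether a boto3 resource or client object is expected for the `s3` argument is an assumption here, as are the bucket and key names:

```
import boto3
from analysis_engine.s3_read_contents_from_key import (
    s3_read_contents_from_key)

# hypothetical Minio endpoint with the default demo credentials
s3 = boto3.resource(
    's3',
    endpoint_url='http://0.0.0.0:9000',
    aws_access_key_id='trexaccesskey',
    aws_secret_access_key='trex123321',
    region_name='us-east-1')

data = s3_read_contents_from_key(
    s3=s3,
    s3_bucket_name='algohistory',   # hypothetical bucket
    s3_key='SPY-history.json',      # hypothetical key
    convert_as_json=True)
```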
# Get Task Results

Debug by setting the environment variable: `export DEBUG_TASK=1`

`analysis_engine.get_task_results.get_task_results(work_dict=None, result=None, **kwargs)`

If celery is disabled by the environment key or requested with `work_dict['celery_disabled'] = True`, then return the task result dictionary; otherwise return `None`. This method is useful for allowing tests to override the returned payloads during task chaining using `@mock.patch`.

Parameters:

* work_dict – task work dictionary
* result – task result dictionary
* kwargs – keyword arguments

# Constants

Utility methods and constants - consts and helper functions

Algorithm Environment Variables

```
ALGO_MODULE_PATH = ev('ALGO_MODULE_PATH', '/opt/sa/analysis_engine/mocks/example_algo_minute.py')
ALGO_BASE_MODULE_PATH = ev('ALGO_BASE_MODULE_PATH', '/opt/sa/analysis_engine/algo.py')
ALGO_MODULE_NAME = ev('ALGO_MODULE_NAME', 'example_algo_minute')
ALGO_VERSION = ev('ALGO_VERSION', '1')
ALGO_BUYS_S3_BUCKET_NAME = ev('ALGO_BUYS_S3_BUCKET_NAME', 'algobuys')
ALGO_SELLS_S3_BUCKET_NAME = ev('ALGO_SELLS_S3_BUCKET_NAME', 'algosells')
ALGO_RESULT_S3_BUCKET_NAME = ev('ALGO_RESULT_S3_BUCKET_NAME', 'algoresults')
ALGO_READY_DATASET_S3_BUCKET_NAME = ev('ALGO_READY_DATASET_S3_BUCKET_NAME', 'algoready')
ALGO_EXTRACT_DATASET_S3_BUCKET_NAME = ev('ALGO_EXTRACT_DATASET_S3_BUCKET_NAME', 'algoready')
ALGO_HISTORY_DATASET_S3_BUCKET_NAME = ev('ALGO_HISTORY_DATASET_S3_BUCKET_NAME', 'algohistory')
ALGO_REPORT_DATASET_S3_BUCKET_NAME = ev('ALGO_REPORT_DATASET_S3_BUCKET_NAME', 'algoreport')
ALGO_BACKUP_DATASET_S3_BUCKET_NAME = ev('ALGO_BACKUP_DATASET_S3_BUCKET_NAME', 'algobackup')
ALGO_READY_DIR = ev('ALGO_READY_DIR', '/tmp')
ALGO_EXTRACT_DIR = ev('ALGO_EXTRACT_DIR', '/tmp')
ALGO_HISTORY_DIR = ev('ALGO_HISTORY_HISTORY_DIR', '/tmp')
ALGO_REPORT_DIR = ev('ALGO_REPORT_DIR', '/tmp')
ALGO_LOAD_DIR = ev('ALGO_LOAD_DIR', '/tmp')
ALGO_BACKUP_DIR = ev('ALGO_BACKUP_DIR', '/tmp')
ALGO_READY_REDIS_ADDRESS = ev('ALGO_READY_REDIS_ADDRESS', 'localhost:6379')
ALGO_EXTRACT_REDIS_ADDRESS = ev('ALGO_EXTRACT_REDIS_ADDRESS', 'localhost:6379')
ALGO_HISTORY_REDIS_ADDRESS = ev('ALGO_HISTORY_REDIS_ADDRESS', 'localhost:6379')
ALGO_REPORT_REDIS_ADDRESS = ev('ALGO_REPORT_REDIS_ADDRESS', 'localhost:6379')
ALGO_BACKUP_REDIS_ADDRESS = ev('ALGO_BACKUP_REDIS_ADDRESS', 'localhost:6379')
ALGO_HISTORY_VERSION = ev('ALGO_HISTORY_VERSION', '1')
ALGO_REPORT_VERSION = ev('ALGO_REPORT_VERSION', '1')
```

Stock and Analysis Environment Variables

```
TICKER = ev('TICKER', 'SPY')
TICKER_ID = int(ev('TICKER_ID', '1'))
DEFAULT_TICKERS = ev('DEFAULT_TICKERS', 'SPY,AMZN,TSLA,NFLX').split(',')
NEXT_EXP = opt_dates.option_expiration()
NEXT_EXP_STR = NEXT_EXP.strftime('%Y-%m-%d')
```

Logging Environment Variables

```
LOG_CONFIG_PATH = ev('LOG_CONFIG_PATH', './analysis_engine/log/logging.json')
```

Slack Environment Variables

```
SLACK_WEBHOOK = ev('SLACK_WEBHOOK', None)
SLACK_ACCESS_TOKEN = ev('SLACK_ACCESS_TOKEN', None)
SLACK_PUBLISH_PLOT_CHANNELS = ev('SLACK_PUBLISH_PLOT_CHANNELS', None)
PROD_SLACK_ALERTS = ev('PROD_SLACK_ALERTS', '0')
```

Celery Environment Variables

```
SSL_OPTIONS = {}
TRANSPORT_OPTIONS = {}
WORKER_BROKER_URL = ev('WORKER_BROKER_URL', 'redis://localhost:6379/11')
WORKER_BACKEND_URL = ev('WORKER_BACKEND_URL', 'redis://localhost:6379/12')
WORKER_CELERY_CONFIG_MODULE = ev('WORKER_CELERY_CONFIG_MODULE', 'analysis_engine.work_tasks.celery_config')
WORKER_TASKS = ev('WORKER_TASKS', ('analysis_engine.work_tasks.task_run_algo'))
INCLUDE_TASKS = WORKER_TASKS.split(',')
```

Supported S3 Environment Variables

```
ENABLED_S3_UPLOAD = ev('ENABLED_S3_UPLOAD', '0') == '1'
S3_ACCESS_KEY = ev('AWS_ACCESS_KEY_ID', 'trexaccesskey')
S3_SECRET_KEY = ev('AWS_SECRET_ACCESS_KEY', 'trex123321')
S3_REGION_NAME = ev('AWS_DEFAULT_REGION', 'us-east-1')
S3_ADDRESS = ev('S3_ADDRESS', '0.0.0.0:9000')
S3_SECURE = ev('S3_SECURE', '0') == '1'
S3_BUCKET = ev('S3_BUCKET', 'pricing')
S3_COMPILED_BUCKET = ev('S3_COMPILED_BUCKET', 'compileddatasets')
S3_KEY = ev('S3_KEY', 'test_key')
DAILY_S3_BUCKET_NAME = ev('DAILY_S3_BUCKET_NAME', 'daily')
MINUTE_S3_BUCKET_NAME = ev('MINUTE_S3_BUCKET_NAME', 'minute')
QUOTE_S3_BUCKET_NAME = ev('QUOTE_S3_BUCKET_NAME', 'quote')
STATS_S3_BUCKET_NAME = ev('STATS_S3_BUCKET_NAME', 'stats')
PEERS_S3_BUCKET_NAME = ev('PEERS_S3_BUCKET_NAME', 'peers')
NEWS_S3_BUCKET_NAME = ev('NEWS_S3_BUCKET_NAME', 'news')
FINANCIALS_S3_BUCKET_NAME = ev('FINANCIALS_S3_BUCKET_NAME', 'financials')
EARNINGS_S3_BUCKET_NAME = ev('EARNINGS_S3_BUCKET_NAME', 'earnings')
DIVIDENDS_S3_BUCKET_NAME = ev('DIVIDENDS_S3_BUCKET_NAME', 'dividends')
COMPANY_S3_BUCKET_NAME = ev('COMPANY_S3_BUCKET_NAME', 'company')
PREPARE_S3_BUCKET_NAME = ev('PREPARE_S3_BUCKET_NAME', 'prepared')
ANALYZE_S3_BUCKET_NAME = ev('ANALYZE_S3_BUCKET_NAME', 'analyzed')
SCREENER_S3_BUCKET_NAME = ev('SCREENER_S3_BUCKET_NAME', 'screener-data')
PRICING_S3_BUCKET_NAME = ev('PRICING_S3_BUCKET_NAME', 'pricing')
OPTIONS_S3_BUCKET_NAME = ev('OPTIONS_S3_BUCKET_NAME', 'options')
```

Supported Redis Environment Variables

```
ENABLED_REDIS_PUBLISH = ev('ENABLED_REDIS_PUBLISH', '0') == '1'
REDIS_ADDRESS = ev('REDIS_ADDRESS', 'localhost:6379')
REDIS_KEY = ev('REDIS_KEY', 'test_redis_key')
REDIS_PASSWORD = ev('REDIS_PASSWORD', None)
REDIS_DB = int(ev('REDIS_DB', '0'))
REDIS_EXPIRE = ev('REDIS_EXPIRE', None)
```

`get_indicator_type_as_int`

`get_indicator_category_as_int`

`get_indicator_uses_data_as_int`

`get_algo_timeseries_from_int(val)`

Convert the integer value to the timeseries string found in the `analysis_engine.consts.ALGO_TIMESERIES` dictionary

Parameters:

* val – integer value for finding the string timeseries label

`is_celery_disabled(work_dict=None)`

Parameters:

* work_dict – request to check

`get_status(status)`

Return the string label for an integer status code, which should be one of the ones above.
Parameters:

* status – integer status code

`ppj(json_data)`

Parameters:

* json_data – dictionary to convert to a pretty-printed, multi-line string

`to_float_str(val)`

Convert the float to a string with 2 decimal points of precision

Parameters:

* val – float to change to a 2-decimal string

`to_f(val)`

Truncate the float to 2 decimal points of precision

Parameters:

* val – float to change

`get_mb(num)`

Convert the number of bytes (as an `integer`) to megabytes with 2 decimal points of precision

Parameters:

* num – integer - number of bytes

`ev(k, v)`

Parameters:

* k – environment variable key
* v – environment variable value

`get_percent_done(progress, total)`

Calculate percentage done to 2 decimal points of precision

`get_redis_host_and_port(addr=None, req=None)`

Parse the env `REDIS_ADDRESS` or the `addr` string or a dictionary `req` and return a tuple of (host (str), port (int))

Parameters:

* addr – optional - string redis address to parse, format is `host:port`
* req – optional - dictionary where the host and port are under the keys `redis_host` and `redis_port`
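A quick sketch of the consts helpers above, assuming they are importable from `analysis_engine.consts` as the Constants section suggests; the values are illustrative:

```
import analysis_engine.consts as ae_consts

print(ae_consts.to_float_str(3.14159))  # 2-decimal string, e.g. '3.14'
print(ae_consts.to_f(3.14159))          # float truncated to 2 decimals
print(ae_consts.get_mb(10485760))       # bytes -> megabytes, 2 decimals

# parse a host:port redis address into a (str, int) tuple
host, port = ae_consts.get_redis_host_and_port(addr='localhost:6379')
print(host, port)                       # localhost 6379
```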
# IEX API

# IEX - Account Set Up

Install the Stock Analysis Engine

* If you want to use python pip: `pip install stock-analysis-engine`
* If you want to use a Kubernetes cloud service (EKS, AKS, or GCP), use the Helm guide to get started
* If you want to run on your own bare-metal servers, you can use Metalnetes to run multiple Analysis Engines at the same time
* If you want to develop your own algorithms or integrate your applications using python, you can set up a Development Environment

Set the `IEX_TOKEN` Environment Variable

> export IEX_TOKEN=PUBLISHABLE_TOKEN

# IEX - Fetch API Reference

Fetch API calls for pulling IEX Cloud data from a valid IEX account

Running these API calls will impact your account's monthly quota. Please be aware of your usage when calling these. Please set the environment variable `IEX_TOKEN` to your account token before running these calls. More steps can be found in the docs for the IEX API Command Line Tool.

Fetching Examples

With the Analysis Engine stack running, you can use the pip's included `fetch` command line tool with the following arguments to pull data (and automate it).

* Fetch Minute Data: `fetch -t AAPL -g min`
* Fetch Daily Data: `fetch -t AAPL -g day`
* Fetch Quote Data: `fetch -t AAPL -g quote`
* Fetch Stats Data: `fetch -t AAPL -g stats`
* Fetch Peers Data: `fetch -t AAPL -g peers`
* Fetch News Data: `fetch -t AAPL -g news`
* Fetch Financials Data: `fetch -t AAPL -g fin`
* Fetch Earnings Data: `fetch -t AAPL -g earn`
* Fetch Dividends Data: `fetch -t AAPL -g div`
* Fetch Company Data: `fetch -t AAPL -g comp`

Command Line Fetch Debugging

Add the `-d` flag to the `fetch` command to enable verbose logging. Here is an example:

> fetch -t AAPL -g news -d

`fetch_daily(ticker=None, work_dict=None, scrub_mode='sort-by-date', verbose=False)`

Fetch the IEX daily data for a ticker and return it as a `pandas.DataFrame`.

https://iexcloud.io/docs/api/#historical-prices

> import analysis_engine.iex.fetch_api as iex_fetch
> daily_df = iex_fetch.fetch_daily(ticker='SPY')
> print(daily_df)

`fetch_minute(ticker=None, backfill_date=None, work_dict=None, scrub_mode='sort-by-date', verbose=False)`

Fetch the IEX minute intraday data for a ticker and return it as a `pandas.DataFrame`.

https://iexcloud.io/docs/api/#historical-prices

> import analysis_engine.iex.fetch_api as iex_fetch
> minute_df = iex_fetch.fetch_minute(ticker='SPY')
> print(minute_df)

Parameters:

* ticker – string ticker to fetch
* backfill_date – optional - date string formatted `YYYY-MM-DD` for filling in missing minute data
* work_dict – dictionary of args used by the automation
* scrub_mode – optional - string type of scrubbing handler to run
* verbose – optional - bool to log for debugging

`fetch_stats(ticker=None, work_dict=None, scrub_mode='sort-by-date', verbose=False)`

`fetch_news(ticker=None, num_news=5, work_dict=None, scrub_mode='sort-by-date', verbose=False)`

Fetch the IEX news data for a ticker and return it as a `pandas.DataFrame`.

https://iexcloud.io/docs/api/#news

> import analysis_engine.iex.fetch_api as iex_fetch
> news_df = iex_fetch.fetch_news(ticker='SPY')
> print(news_df)

Parameters:

* ticker – string ticker to fetch
* num_news – optional - int number of news articles to fetch (default is `5` articles)
* work_dict – dictionary of args used by the automation
* scrub_mode – optional - string type of scrubbing handler to run
* verbose – optional - bool to log for debugging

Fetch the IEX earnings data for a ticker and return it as a `pandas.DataFrame`.

https://iexcloud.io/docs/api/#earnings

> import analysis_engine.iex.fetch_api as iex_fetch
> earn_df = iex_fetch.fetch_earnings(ticker='SPY')
> print(earn_df)

`fetch_dividends(ticker=None, timeframe='3m', work_dict=None, scrub_mode='sort-by-date', verbose=False)`

Fetch the IEX dividends data for a ticker and return it as a `pandas.DataFrame`.

https://iexcloud.io/docs/api/#dividends

> import analysis_engine.iex.fetch_api as iex_fetch
> div_df = iex_fetch.fetch_dividends(ticker='SPY')
> print(div_df)

Parameters:

* ticker – string ticker to fetch
* timeframe – optional - string for setting the dividend lookback period (default is `3m` for three months)
* work_dict – dictionary of args used by the automation
* scrub_mode – optional - string type of scrubbing handler to run
* verbose – optional - bool to log for debugging

`fetch_company(ticker=None, work_dict=None, scrub_mode='NO_SORT', verbose=False)`

## IEX - HTTP Fetch Functions

Functions for getting data from IEX using HTTP

Debugging

Please set the `verbose` argument to `True` to enable debug logging with these calls

`get_from_iex(url, token=None, version=None, verbose=False)`

Helper for getting data from an IEX publishable API endpoint using a token as a query param on the http url.
`handle_get_from_iex(url, token=None, version=None, verbose=False)`
* Implementation for getting data from the IEX v2 or v1 api, depending on whether the `token` argument is set.
  Parameters: version – optional - string version for the IEX Cloud (default is `beta`)

`get_from_iex_cloud(url, token=None, verbose=False)`
* Get data from the IEX Cloud API (v2): https://iexcloud.io

`get_from_iex_v1(url, verbose=False)`
* Get data from the IEX Trading API (v1): https://api.iextrading.com/1.0/
  Parameters:
  * url – IEX V1 resource URL
  * verbose – optional - bool to turn on logging

`convert_datetime_columns(df, date_cols=None, second_cols=None, tcols=None, ecols=None)`
* Convert the IEX date columns in the `df` to `datetime` objects.
  Parameters:
  * df – `pandas.DataFrame` to set columns to datetime objects
  * date_cols – list of columns to convert with a date string format formatted: `YYYY-MM-DD`
  * second_cols – list of columns to convert with a date string format formatted: `YYYY-MM-DD HH:MM:SS`
  * tcols – list of columns to convert with a time format (this is for millisecond epoch integers)
  * ecols – list of columns to convert with a time format (this is for nanosecond epoch integers)
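To illustrate the kind of conversion `convert_datetime_columns` performs, here is a minimal pandas sketch covering the same four column groups. The function below is a hypothetical re-implementation for orientation, not the engine's actual code:

```
import pandas as pd

def convert_datetime_columns(df, date_cols=None, second_cols=None,
                             tcols=None, ecols=None):
    # Hypothetical sketch: convert IEX-style date columns to datetimes.
    for col in (date_cols or []):
        df[col] = pd.to_datetime(df[col], format='%Y-%m-%d')
    for col in (second_cols or []):
        df[col] = pd.to_datetime(df[col], format='%Y-%m-%d %H:%M:%S')
    for col in (tcols or []):   # millisecond epoch integers
        df[col] = pd.to_datetime(df[col], unit='ms')
    for col in (ecols or []):   # nanosecond epoch integers
        df[col] = pd.to_datetime(df[col], unit='ns')
    return df

df = pd.DataFrame({'date': ['2019-02-15'], 'epoch_ms': [1550246400000]})
df = convert_datetime_columns(df, date_cols=['date'], tcols=['epoch_ms'])
print(df.dtypes)
```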
## IEX - Build Auth URL Using Publishable Token

Build an authenticated url for IEX Cloud.

`analysis_engine.iex.build_auth_url.build_auth_url(url, token=None)`
* Helper for constructing authenticated IEX urls with a valid IEX Cloud account. This will return a string with the token as a query parameter on the HTTP url.
  Parameters:
  * url – initial url to make authenticated
  * token – optional - string (defaults to the `IEX_TOKEN` environment variable or `None`)

# IEX - Extraction API Reference

Here is the extraction API for returning a `pandas.DataFrame` from cached or archived IEX datasets.

Extract an IEX dataset from Redis and return it as a `pandas.DataFrame` or None. Please refer to the Extraction API reference for additional support.

* `extract_daily_dataset`
* `extract_minute_dataset`
* `extract_quote_dataset`
* `extract_stats_dataset`
* `extract_peers_dataset`
* `extract_news_dataset(ticker=None, date=None, work_dict=None, scrub_mode='sort-by-date', verbose=False)`
* `extract_financials_dataset`
* `extract_earnings_dataset`
* `extract_dividends_dataset`
* `extract_company_dataset(ticker=None, date=None, work_dict=None, scrub_mode='NO_SORT', verbose=False)`
  * Extract the IEX company data for a ticker from Redis and return it as a tuple (status, `pandas.DataFrame`).

> import analysis_engine.iex.extract_df_from_redis as iex_extract
> # extract by historical date is also supported as an arg
> # date='2019-02-15'
> comp_status, comp_df = iex_extract.extract_company_dataset(
>     ticker='SPY')
> print(comp_df)

# IEX API Example - Fetch Minute Intraday Data using HTTP

```
import analysis_engine.iex.fetch_api as fetch

df = fetch.fetch_minute(ticker='SPY')
print(df)
```

# IEX API Example - Extract Minute Intraday Data from Cache

```
import datetime
import analysis_engine.iex.extract_df_from_redis as extract

ticker = 'SPY'
today = datetime.datetime.now().strftime('%Y-%m-%d')
status, df = extract.extract_minute_dataset({
    'ticker': f'{ticker}',
    'redis_key': f'{ticker}_{today}_minute'})
print(df)
```

# IEX API Example - Get Minute Data from IEX (calls fetch and cache)

```
import analysis_engine.iex.get_data as get_data

df = get_data.get_data_from_iex({
    'ticker': 'SPY',
    'ft_type': 'minute'})
print(df)
```

# IEX - Get Data

Use this function to pull data from IEX with a shared API for the supported fetch routines over the IEX HTTP REST API. Common fetch for any supported get from IEX using HTTP:

```
# debug the fetch routines with:
export DEBUG_IEX_DATA=1
```

`analysis_engine.iex.get_data.get_data_from_iex(work_dict)`
* Get data from IEX - this requires an account.
  Parameters: work_dict – request dictionary

This is a helper for the parent method `analysis_engine.iex.get_data.py`. Fetch data from IEX with the factory method `fetch_data`:

`analysis_engine.iex.fetch_data.fetch_data(work_dict, fetch_type=None, verbose=False)`
* Supported `fetch_type` values from `analysis_engine.iex.consts`:

> fetch_type = iex_consts.FETCH_DAILY
> fetch_type = iex_consts.FETCH_MINUTE
> fetch_type = iex_consts.FETCH_QUOTE
> fetch_type = iex_consts.FETCH_STATS
> fetch_type = iex_consts.FETCH_PEERS
> fetch_type = iex_consts.FETCH_NEWS
> fetch_type = iex_consts.FETCH_FINANCIALS
> fetch_type = iex_consts.FETCH_EARNINGS
> fetch_type = iex_consts.FETCH_DIVIDENDS
> fetch_type = iex_consts.FETCH_COMPANY

  Supported `work_dict['ft_type']` string values:

> work_dict['ft_type'] = 'daily'
> work_dict['ft_type'] = 'minute'
> work_dict['ft_type'] = 'quote'
> work_dict['ft_type'] = 'stats'
> work_dict['ft_type'] = 'peers'
> work_dict['ft_type'] = 'news'
> work_dict['ft_type'] = 'financials'
> work_dict['ft_type'] = 'earnings'
> work_dict['ft_type'] = 'dividends'
> work_dict['ft_type'] = 'company'

  Parameters:
  * work_dict – dictionary of args for the IEX call
  * fetch_type – optional - name or enum of the fetcher to create; can also be a lower case string in `work_dict['ft_type']`
  * verbose – optional - boolean to enable debug logging
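A minimal usage sketch based on the `fetch_data` signature above; the `'ticker'` key and the result handling are assumptions, since the full `work_dict` contract is not shown in this excerpt:

```
import analysis_engine.iex.fetch_data as iex_fetch_data

# Assumed minimal request: the exact required keys beyond 'ft_type'
# are not documented above, so 'ticker' is an assumption.
work = {
    'ticker': 'SPY',
    'ft_type': 'minute',  # dispatches to the minute fetcher
}
df = iex_fetch_data.fetch_data(work_dict=work, verbose=True)
print(df)
```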
# Yahoo API

# Fetch Data from Yahoo

Parse data from Yahoo.

# Yahoo Dataset Extraction API

Here is the extraction API for returning a `pandas.DataFrame` from cached or archived Yahoo datasets (pricing, options and news). Extract a Yahoo dataset from Redis (S3 support coming soon) and load it into a `pandas.DataFrame`.

`extract_pricing_dataset`
* Extract the Yahoo pricing data for a ticker and return it as a `pandas.DataFrame`.

# FinViz API

# Fetch a FinViz Screener and Convert it to a List of Tickers

Supported fetch calls:

* Convert a FinViz Screener URL to a list of tickers.

`analysis_engine.finviz.fetch_api.fetch_tickers_from_screener(url, columns=['ticker_id', 'ticker', 'company', 'sector', 'industry', 'country', 'market_cap', 'pe', 'price', 'change', 'volume'], as_json=False, soup_selector='td.screener-body-table-nw', label='fz-screen-converter')`
* Convert all the tickers on a FinViz screener url to a `pandas.DataFrame`. Returns a dictionary with a ticker list and a DataFrame, or a json-serialized DataFrame in a string (by default `as_json=False` returns a `pandas.DataFrame`), if `returned-dictionary['status'] == SUCCESS`.

  Works with urls created on: https://finviz.com/screener.ashx

> import analysis_engine.finviz.fetch_api as fv
> url = (
>     'https://finviz.com/screener.ashx?'
>     'v=111&'
>     'f=cap_midunder,exch_nyse,fa_div_o5,idx_sp500'
>     '&ft=4')
> res = fv.fetch_tickers_from_screener(url=url)
> print(res)

  Parameters:
  * url – FinViz screener url
  * columns – ordered header columns as a list of strings, corresponding to the header row of the FinViz screener table
  * soup_selector – `bs4.BeautifulSoup.selector` string for pulling selected html data (by default `td.screener-body-table-nw`)
  * as_json – optional - bool; when `True`, return the DataFrame json-serialized in a string
  * label – log tracking label string

Perform dataset scrubbing actions and return the scrubbed dataset as a ready-to-go data feed. This is an approach for normalizing an internal data feed.
```
# verbose logging in this module
# note this can take longer to transform
# DataFrames and is not recommended for
# production:
export DEBUG_FETCH=1
```

Ingress scrubbing converts an incoming dataset (from IEX) to one of the following data feeds and returns it as a `pandas.DataFrame`:

```
DATAFEED_DAILY = 900
DATAFEED_MINUTE = 901
DATAFEED_QUOTE = 902
DATAFEED_STATS = 903
DATAFEED_PEERS = 904
DATAFEED_NEWS = 905
DATAFEED_FINANCIALS = 906
DATAFEED_EARNINGS = 907
DATAFEED_DIVIDENDS = 908
DATAFEED_COMPANY = 909
DATAFEED_PRICING_YAHOO = 1100
DATAFEED_OPTIONS_YAHOO = 1101
DATAFEED_NEWS_YAHOO = 1102
```

`debug_msg(label, datafeed_type, msg_format, date_str, df)`
* Debug helper for debugging scrubbing handlers.
  Parameters:
  * label – log label
  * datafeed_type – fetch type
  * msg_format – message to include
  * date_str – date string
  * df – `pandas.DataFrame` or `None`

`ingress_scrub_dataset`
* Scrub a `pandas.DataFrame` from an ingress pricing service and return the resulting `pandas.DataFrame`.

`extract_scrub_dataset`
* Scrub a cached `pandas.DataFrame` that was stored in Redis and return the resulting `pandas.DataFrame`.

`build_dates_from_df_col(df, use_date_str, src_col='minute', src_date_format='%Y-%m-%d %H:%M:%S', output_date_format='%Y-%m-%d %H:%M:%S')`
* Convert a string date column series in a `pandas.DataFrame` to a well-formed date string list.
  Parameters:
  * df – source `pandas.DataFrame`
  * use_date_str – date string for today
  * src_col – source column name
  * src_date_format – format of the strings in the `df[src_col]` column
  * output_date_format – write the new date strings in this format

This is a collection of functions for determining when the current options cycle expires (the 3rd Friday of most months) and for calculating historical option expiration dates. If you need to automate looking up the current option cycle expiration, please check out the script:

```
/opt/sa/analysis_engine/scripts/print_next_expiration_date.py
2018-10-19
```

`get_options_for_years(years=['2014', '2015', '2016', '2016', '2017', '2018', '2019', '2020', '2021', '2022'])`
* Parameters:
  * years – number of years back
  * months – number of months to build year

`historical_options(years=['2014', '2015', '2016', '2017', '2018', '2019', '2020', '2021', '2022', '2023', '2024', '2025', '2026', '2027', '2028'])`
* Parameters: years – years to build

`get_options_between_dates(start_date, end_date)`
* Parameters:
  * start_date – start date
  * end_date – end date

`option_expiration(date=None)`
* Parameters: date – date to find the current expiration
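As a worked example of the "3rd Friday" rule above, here is a small self-contained sketch that computes a monthly expiration date. It is illustrative only and ignores the holiday adjustments the package handles; `third_friday` is a hypothetical helper, not part of the engine:

```
import calendar

def third_friday(year, month):
    # Third Friday of the month: the typical monthly options
    # expiration, ignoring market holidays.
    cal = calendar.Calendar()
    fridays = [d for d in cal.itermonthdates(year, month)
               if d.weekday() == calendar.FRIDAY and d.month == month]
    return fridays[2]

print(third_friday(2018, 10))  # 2018-10-19, matching the script output above
```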
Use this module to determine if a `date` or `date string` is a holiday (future, today or historical should be supported). Holiday detection for US Markets; see the Stack Overflow post behind this module.

`get_trading_close_holidays(year=None)`
* Get the trading holidays for the year.
  Parameters: year – optional - year integer

`is_holiday(date=None, date_str=None, fmt='%Y-%m-%d')`
* Determine if the `date` is a holiday; if not, then determine if today is a holiday. Returns `True` if it is a holiday and `False` if it is not a holiday in the US Markets.
  Parameters:
  * date – optional - datetime object for calling `get_trading_close_holidays(year=date.year)`
  * date_str – optional - date string formatted with `fmt`
  * fmt – optional - datetime.strftime formatter

Here are the helper functions for plotting datasets. Charting functions with matplotlib, numpy, pandas, and seaborn. Change the footnote with:

```
export PLOT_FOOTNOTE="custom footnote on images"
```

`plot_df(log_label, title, column_list, df, xcol='date', xlabel='Date', ylabel='Pricing', linestyle='-', color='blue', show_plot=True, dropna_for_all=True)`
* Parameters:
  * log_label – log identifier
  * title – title of the plot
  * column_list – list of columns in the df to show
  * df – initialized `pandas.DataFrame`
  * xcol – x-axis column in the initialized `pandas.DataFrame`
  * xlabel – x-axis label
  * ylabel – y-axis label
  * linestyle – style of the plot line
  * color – color to use
  * show_plot – bool to show the plot
  * dropna_for_all – optional - bool to toggle keeping `None`s in the plot `df` (default is to drop them for display purposes)

`dist_plot(log_label, df, width=10.0, height=10.0, title='Distribution Plot', style='default', xlabel='', ylabel='', show_plot=True, dropna_for_all=True)`
* Show a distribution plot for the passed-in dataframe `df`.
  Parameters:
  * log_label – log identifier
  * df – initialized `pandas.DataFrame`
  * width – width of the figure
  * height – height of the figure
  * style – style to use
  * xlabel – x-axis label
  * ylabel – y-axis label
  * show_plot – bool to show the plot or not
  * dropna_for_all – optional - bool to toggle keeping `None`s in the plot `df` (default is to drop them for display purposes)

`show_with_entities(log_label, xlabel, ylabel, title, ax, fig, legend_list=None, show_plot=True)`
* Helper for showing a plot with a legend and a footnote.
  Parameters:
  * log_label – log identifier
  * xlabel – x-axis label
  * ylabel – y-axis label
  * title – title of the plot
  * ax – axes
  * fig – figure
  * legend_list – list of legend items to show
  * show_plot – bool to show the plot

`plot_overlay_pricing_and_volume(log_label, ticker, df, xlabel=None, ylabel=None, high_color='#CC1100', close_color='#3498db', volume_color='#2ECC71', date_format='%Y-%m-%d %I:%M:%S %p', show_plot=True, dropna_for_all=True)`
* Plot pricing (high, low, open, close) and volume as an overlay off the x-axis. Here is a sample chart from the Stock Analysis Jupyter Intro Notebook.
  Parameters:
  * log_label – log identifier
  * ticker – ticker name
  * df – timeseries `pandas.DataFrame`
  * xlabel – x-axis label
  * ylabel – y-axis label
  * high_color – optional - high plot color
  * close_color – optional - close plot color
  * volume_color – optional - volume color
  * date_format – optional - date format string; this must be a valid value for the `df['date']` column that would work with `datetime.datetime.strftime(date_format)`
  * show_plot – optional - bool to show the plot
  * dropna_for_all – optional - bool to toggle keeping `None`s in the plot `df` (default is to drop them for display purposes)
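For orientation, here is a hedged usage sketch for `plot_df` from the entries above. The signature is documented, but the import path is an assumption (this excerpt does not show which module the charting helpers live in), so adjust the import to your install:

```
import pandas as pd
# Import path is a guess -- the excerpt documents plot_df's signature
# but not its module.
from analysis_engine.charts import plot_df

df = pd.DataFrame({
    'date': pd.date_range('2019-01-01', periods=5, freq='D'),
    'close': [100.0, 101.5, 99.8, 102.2, 103.0]})

plot_df(
    log_label='example',
    title='SPY close',
    column_list=['close'],
    df=df,
    xcol='date',
    ylabel='Price',
    show_plot=True)
```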
`plot_hloc_pricing(log_label, ticker, df, title, show_plot=True, dropna_for_all=True)`
* Plot the high, low, open and close columns together on a chart.
  Parameters:
  * log_label – log identifier
  * ticker – ticker
  * df – initialized `pandas.DataFrame`
  * title – title for the chart
  * show_plot – bool to show the plot
  * dropna_for_all – optional - bool to toggle keeping `None`s in the plot `df` (default is to drop them for display purposes)

`add_footnote(fig=None, xpos=0.9, ypos=0.01, text=None, color='#888888', fontsize=8)`
* Add a footnote based off the environment key `PLOT_FOOTNOTE`.
  Parameters:
  * fig – add the footnote to this figure object
  * xpos – x-axis position
  * ypos – y-axis position
  * text – text in the footnote
  * color – font color
  * fontsize – text size for the footnote text

Celery tasks are automatically processed by the workers. You can turn off celery task publishing by setting the environment variable `CELERY_DISABLED` to `1` (by default celery is enabled for task publishing).

Tip: all tasks share the `analysis_engine.work_tasks.custom_task.CustomTask` class for customizing event handling.

Handle Pricing Update Task

Get the latest stock news, quotes and options chains for a ticker and publish the values to redis and S3 for downstream analysis. Writes pricing updates to S3 and Redis by building a list of publishing sub-tasks:

`run_handle_pricing_update_task`
`handle_pricing_update_task`
* Writes pricing updates to S3 and Redis.
  Parameters: work_dict – dictionary for key/values

Get New Pricing Data Task

This will fetch data (pricing, financials, earnings, dividends, options, and more) from these sources:

* IEX
* Tradier
* Yahoo - disabled as of 2019/01/03

Detailed example for getting new pricing data:

```
import datetime
from analysis_engine.api_requests import build_get_new_pricing_request
from analysis_engine.work_tasks.get_new_pricing_data import get_new_pricing_data

# store data
cur_date = datetime.datetime.now().strftime('%Y-%m-%d')
work = build_get_new_pricing_request(
    label=f'get-pricing-{cur_date}')
work['ticker'] = 'TSLA'
work['s3_bucket'] = 'pricing'
work['s3_key'] = f'{work["ticker"]}-{cur_date}'
work['redis_key'] = f'{work["ticker"]}-{cur_date}'
work['celery_disabled'] = True
res = get_new_pricing_data(
    work)
print('full result dictionary:')
print(res)
if res['data']:
    print(
        'named datasets returned as '
        'json-serialized pandas DataFrames:')
    for k in res['data']:
        print(f' - {k}')
```

Warning: when fetching pricing data from sources like IEX, please ensure the returned values are not raw `pandas.DataFrame` objects, to prevent issues with celery task results. Instead, it is preferred to return `df.to_json()` before sending the results into the results backend.

Tip: `analysis_engine.api_requests.build_get_new_pricing_request`

`run_get_new_pricing_data`
`get_new_pricing_data(work_dict)`
* Get ticker information.

Publish Pricing Data Task

Publish new stock data to external services and systems (redis and s3), provided the system(s) are running and enabled.
```
work_request = {
    'ticker': ticker,
    'ticker_id': ticker_id,
    'strike': use_strike,
    'contract': contract_type,
    's3_bucket': s3_bucket_name,
    's3_key': s3_key,
    'redis_key': redis_key,
    'data': use_data
}
```

`run_publish_pricing_update`
`publish_pricing_update`
* Publish ticker data to S3 and Redis.

Publish Data from S3 to Redis Task

`analysis_engine.api_requests.build_publish_from_s3_to_redis_request`

`run_publish_from_s3_to_redis`
`publish_from_s3_to_redis`

Publish Aggregate Ticker Data from S3 Task

`analysis_engine.api_requests.build_publish_ticker_aggregate_from_s3_request`

`run_publish_ticker_aggregate_from_s3`
`publish_ticker_aggregate_from_s3`

Work in progress - screener-driven analysis task:

`run_screener_analysis`
`task_screener_analysis(work_dict)`
* Parameters: work_dict – task dictionary

Prepare Pricing Dataset

Prepare a dataset for analysis. This task collapses nested json dictionaries into a csv file with a header row and stores the output file in s3 and redis automatically.

* if the key is not in redis, load the key by the same name from s3
* prepare the dataset from the redis key
* the dataset will be stored as a dictionary with a pandas dataframe

`analysis_engine.api_requests.build_prepare_dataset_request`

```
export DEBUG_PREPARE=1
export DEBUG_RESULTS=1
```

`run_prepare_pricing_dataset`
`prepare_pricing_dataset`
* Prepare a dataset for analysis. Supports loading the dataset from s3 if it is not found in redis. Outputs the prepared artifact as a csv to s3 and redis.
  Parameters: work_dict – dictionary for key/values

## Custom Celery Task Handling

Define your own `on_failure` and `on_success` with the `analysis_engine.work_tasks.custom_task.CustomTask` custom class object. Debug values with the environment variable: `export DEBUG_TASK=1`

`analysis_engine.work_tasks.custom_task.CustomTask`
* `on_failure(exc, task_id, args, kwargs, einfo)`
  * Handle custom actions when a task does not complete successfully. As an example, if the task throws an exception, this `on_failure` method can customize how to handle exceptional cases.
    http://docs.celeryproject.org/en/latest/userguide/tasks.html#task-inheritance
    Parameters:
    * exc – exception
    * task_id – task id
    * args – arguments passed into the task
    * kwargs – keyword arguments passed into the task
    * einfo – exception info
* `on_success(retval, task_id, args, kwargs)`
  * Handle custom actions when a task completes successfully.
    http://docs.celeryproject.org/en/latest/reference/celery.app.task.html
    Parameters:
    * retval – return value
    * task_id – celery task id
    * args – arguments passed into the task
    * kwargs – keyword arguments passed into the task
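For orientation, here is a minimal standalone sketch of the same pattern using Celery's standard task-inheritance hooks (the same `on_failure`/`on_success` signatures documented above). It is not the engine's `CustomTask` implementation, just the shape of it; the broker URL is an arbitrary example:

```
from celery import Celery, Task

app = Celery('example', broker='redis://localhost:6379/11')

class CustomTask(Task):
    # Sketch of a task base class overriding the Celery hooks above.

    def on_failure(self, exc, task_id, args, kwargs, einfo):
        # e.g. alert, log, or record the failed task
        print(f'task {task_id} failed: {exc}')

    def on_success(self, retval, task_id, args, kwargs):
        print(f'task {task_id} succeeded: {retval}')

@app.task(base=CustomTask)
def add(x, y):
    return x + y
```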
Get a Celery Application Helper

`analysis_engine.work_tasks.get_celery_app.get_celery_app(name='worker', auth_url='redis://localhost:6379/11', backend_url='redis://localhost:6379/12', include_tasks=[], ssl_options=None, transport_options=None, path_to_config_module='analysis_engine.work_tasks.celery_config', worker_log_format='%(asctime)s: %(levelname)s %(message)s', **kwargs)`
* Build a Celery app with support for environment variables to set endpoint locations:

  * export WORKER_BROKER_URL=redis://localhost:6379/11
  * export WORKER_BACKEND_URL=redis://localhost:6379/12
  * export WORKER_CELERY_CONFIG_MODULE=analysis_engine.work_tasks.celery_config

  Jupyter notebooks need to use the `WORKER_CELERY_CONFIG_MODULE=analysis_engine.work_tasks.celery_service_config` value, which uses resolvable hostnames with docker compose:

  * export WORKER_BROKER_URL=redis://redis:6379/11
  * export WORKER_BACKEND_URL=redis://redis:6379/12

  Parameters:
  * name – name for this app
  * auth_url – Celery broker address (default is `redis://localhost:6379/11`)
  * worker_log_format – format for logs

# Mocks and Testing

## Known Issues

## Run All Tests

`py.test --maxfail=1`

# Mock S3 Boto Utilities

These are testing utilities for mocking S3 functionality without having an s3 endpoint running. Mock boto3 s3 objects:

`MockBotoS3Bucket(name)`
`MockBotoS3AllBuckets`
`MockBotoS3(name='mock_s3', endpoint_url=None, aws_access_key_id=None, aws_secret_access_key=None, region_name=None, config=None)`
`build_boto3_resource(name='mock_s3', endpoint_url=None, aws_access_key_id=None, aws_secret_access_key=None, region_name=None, config=None)`
* Parameters:
  * name – name of the client
  * endpoint_url – endpoint url
  * aws_access_key_id – aws access key
  * aws_secret_access_key – aws secret key
  * region_name – region name
  * config – config object

`mock_s3_read_contents_from_key_ev(s3, s3_bucket_name, s3_key, encoding, convert_as_json)`
* mock_s3_read_contents_from_key
  Parameters:
  * s3 – s3 client
  * s3_bucket_name – bucket name
  * s3_key – key
  * encoding – utf-8
  * convert_as_json – convert to json

`mock_publish_from_s3_to_redis`
`mock_publish_from_s3_to_redis_err`

# Mock Redis Utilities

These are testing utilities for mocking Redis's functionality without having a Redis server running. Mock redis objects:

`analysis_engine.mocks.mock_redis.MockRedis(host=None, port=None, password=None, db=None)`

# Mock Yahoo Utilities

Mock Pinance object for unittests:

`mock_get_options(ticker=None, contract_type=None, exp_date_str=None, strike=None)`

`MockPinance(symbol='SPY')`
* `get_options(ticker=None, contract_type=None, exp_date_str=None, strike=None)`

# Mock IEX Utilities

Mocking data fetch api calls:

`mock_minute(url, token=None, version=None, verbose=False)`
* mock minute history for a chart

`mock_quote`
* mock quote

`mock_stats(url, token=None, version=None, verbose=False)`
* mock stats

`mock_peers(url, token=None, version=None, verbose=False)`
* mock peers

`mock_news(url, token=None, version=None, verbose=False)`
* mock news

`mock_financials(url, token=None, version=None, verbose=False)`
* mock financials

`mock_earnings(url, token=None, version=None, verbose=False)`
* mock earnings

`mock_dividends`
* mock dividends
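As a hedged sketch of how the Mock Redis utilities above might be wired into a unittest: the `MockRedis` module path is documented above, but the patch target (`redis.Redis`) is an assumption about the code under test, so point the patch at whichever module actually creates the client:

```
import mock
import analysis_engine.mocks.mock_redis as mock_redis

# Patch target is an assumption -- patch wherever the code under test
# constructs its redis client.
@mock.patch('redis.Redis', new=mock_redis.MockRedis)
def test_without_a_redis_server():
    import redis
    client = redis.Redis(host='localhost', port=6379, db=0)
    # exercise code that talks to redis here
    assert client is not None
```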
# Mock TA Lib

These are mock talib functions to help test indicators using talib. Mock TA-Lib objects:

`MockWILLRIgnore(high=None, low=None, close=None, timeperiod=None)`

# Mock Trading Tools for Developing Algorithms and Indicators

These are mock helper functions for patching the `BaseAlgo` object to simulate various test cases. Mock algorithm methods for unittesting things like previously-owned shares or sell-side indicators without owning shares. Supports mocking owned shares to test indicator selling.

If you can modify your algorithm `config_dict`, you can also set a `positions` dictionary like:

> algo_config_dict = {
>     # other values omitted for docs
>     'positions': {
>         'SPY': {
>             'shares': 10000,
>             'buys': [],
>             'sells': []
>         }
>     }
> }

Use with your custom algorithm unittests:

> import mock
> import analysis_engine.mocks.mock_algo_trading as mock_trading
>
> @mock.patch(
>     ('analysis_engine.algo.BaseAlgo.get_ticker_positions'),
>     new=mock_trading.mock_algo_owns_shares_in_ticker_before_starting)

Parameters:
* obj – algorithm object
* ticker – ticker symbol

Dataset Extraction Utilities

Helper for extracting a dataset from Redis or S3 and loading it into a `pandas.DataFrame`. This was designed to ignore the source of the dataset (IEX vs Yahoo) and perform the extract and load operations without knowledge of the underlying dataset.

`analysis_engine.extract_utils.perform_extract(df_type, df_str, work_dict, dataset_id_key='ticker', scrub_mode='sort-by-date', verbose=False)`
* Helper for extracting from Redis or S3.
  Parameters:
  * df_type – datafeed type enum
  * df_str – dataset string name
  * work_dict – incoming work request dictionary
  * dataset_id_key – configurable dataset identifier key for tracking scrubbing and debugging errors
  * scrub_mode – scrubbing mode on extraction for one-off cleanup before analysis
  * verbose – optional - boolean for turning on logging

# Slack Publish API

Want to publish alerts to Slack? The source code reference guide is below, and here is the intro to publishing alerts to Slack as a Jupyter Notebook.

# Send Celery Task Details to Slack Utilities

Helper for extracting details from a Celery task and sending them to a slack webhook.

```
# slack webhook
export SLACK_WEBHOOK=https://hooks.slack.com/services/
```

`post_df(df, columns=None, block=True, jupyter=True, full_width=True, tablefmt='github')`
* Post a `pandas.DataFrame` to Slack.
  Parameters:
  * df – `pandas.DataFrame` object
  * columns – ordered list of columns for the table header row (`None` by default)
  * block – bool to post as a Slack-formatted block ``like this`` (`True` by default)
  * jupyter – bool for jupyter attachment handling (`True` by default)
  * full_width – bool to ensure the width is preserved in the Slack message (`True` by default)
  * tablefmt – string for table format (`github` by default).
Additional format values can be found on: https://bitbucket.org/astanin/python-tabulate

`post_failure(msg, jupyter=False, block=False, full_width=False)`
* Post any message to slack.
  Parameters: msg – a string, list, or dict to send to slack

`parse_msg(msg, block=False)`
* Create an array of fields for slack from the msg type.
  Parameters: msg – a string, list, or dict to massage for sending to slack

Date utils

`last_close()`
* Get the last trading close time as a python `datetime`. How it works:

```
datetime.datetime.utcnow() - datetime.timedelta(hours=5)
```

  * Before or after market hours, the returned `datetime` will be 4:00 PM EST on the previous trading day, which could be a Friday if this is called on a Saturday or Sunday.
  * It does not detect holidays and non-trading days yet, and assumes the system time is set to EST or UTC.

`get_last_close_str(fmt='%Y-%m-%d')`
* Get the last trading close date as a string with default formatting `ae_consts.COMMON_DATE_FORMAT` (YYYY-MM-DD).
  Parameters: fmt – optional output format (default `ae_consts.COMMON_DATE_FORMAT`)

`utc_now_str(fmt='%Y-%m-%d %H:%M:%S')`
* Get the UTC now as a string with default formatting `ae_consts.COMMON_TICK_DATE_FORMAT` (YYYY-MM-DD HH:MM:SS).
  Parameters: fmt – optional output format (default `ae_consts.COMMON_TICK_DATE_FORMAT`)

`utc_date_str(fmt='%Y-%m-%d')`
* Get the UTC date as a string with default formatting `COMMON_DATE_FORMAT`.
  Parameters: fmt – optional output format (default `COMMON_DATE_FORMAT`, `YYYY-MM-DD`)
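To make the `last_close` rule above concrete, here is a hedged, self-contained sketch of the documented behavior (shift UTC to EST by subtracting 5 hours, clamp to 4:00 PM on the most recent weekday, ignore holidays). It is an illustration of the rule, not the library's implementation:

```
import datetime

def last_close():
    # Sketch of the documented rule; holidays are ignored, like the
    # documented helper.
    now_est = datetime.datetime.utcnow() - datetime.timedelta(hours=5)
    close = now_est.replace(hour=16, minute=0, second=0, microsecond=0)
    if now_est < close:
        close -= datetime.timedelta(days=1)   # before today's close
    while close.weekday() > 4:                # Saturday=5, Sunday=6
        close -= datetime.timedelta(days=1)
    return close

print(last_close())
```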
Pop3mail
===

Download emails from the inbox and store them (including attachments) in a subdirectory per email. Reads incoming mail via the POP3 protocol, using an Erlang Epop client with SSL support. Decodes multipart content, quoted-printables, base64 and encoded-words. The module also contains functions to perform only the decoding, giving you the choice to do retrieval and storage with your own functions.

Functions
---

* `cli(args)` - Command-line interface for downloading emails and storing them on disk.
* `decode_body(body_text, content_type \\ "text/plain; charset=us-ascii", encoding \\ "7bit", disposition \\ "inline")` - Decode multipart, base64 and quoted-printable text. Returns a list of Pop3mail.Part structs.
* `decode_body_content(header_list, body_content)` - Decode multipart, base64 and quoted-printable text. Returns a list of Pop3mail.Part structs.
* `decode_raw_file(filename, output_dir)` - Decode a raw message file (mostly an .eml file) and store the result on disk.
* `decode_words(text)` - Decode a text with encoded words as defined in RFC 2047. Returns a list of tuples of charset name and binary content.
* `download(params)` - Download emails from the inbox and store them (including attachments) in a subdirectory per email.
* `header_lookup(header_list, header_name)` - Look up a header in the header list retrieved via epop.

Pop3mail.Base64Decoder
===

Replaceable base64 decoder. Replace it with your own implementation via the application config `:pop3mail, base64_decoder: <replacement>`. After changing config/config.exs, run:

* mix deps.compile --force pop3mail

Functions
---

* `decode!(encoded_text)` - Decode base64 encoded text, ignoring carriage returns and linefeeds. Returns a binary.

Pop3mail.Base64Decoder.Standard
===

Standard Elixir base64 decoder.

Functions
---

* `decode!(encoded_text)` - Decode base64 encoded text, ignoring carriage returns and linefeeds. Returns a binary.

Pop3mail.Body
===

Decode and store the mail body.

Functions
---

* `decode_body(body_text, content_type, encoding, disposition)` - Decode multipart content, base64 and quoted-printables.
* `store_multiparts(multipart_part_list, dirname)` - Store all found body parts on the filesystem.
* `store_part(multipart_part, base_dir)` - Store one part on the filesystem.

Pop3mail.CLI
===

Command-line interface for downloading emails and storing them on disk.

Functions
---

* `main(args)` - Call main with parameters, e.g. `main(["--username=<EMAIL>", "--password=secret"])`. Call with --help to get a list of all parameters.
* `show_help()` - Print the usage line and a description of all parameters.

Pop3mail.DateConverter
===

Date conversions and date utilities.

Functions
---

* `convert_date(date_str)` - Convert a date from the email header to a standard date format: YYYYMMDD_HHMMSS.
* `zero_pad(natural_number, len \\ 2)` - Add zeros at the left side of the number.

Pop3mail.EpopDownloader
===

Retrieve and parse POP3 mail via the Epop client.
Types
---

* `epop_client_type()` - Epop client from erlpop.

Functions
---

* `download(options)` - Read all emails and save them to disk.
* `parse_process_and_store(mail_content, mail_loop_counter, delivered, save_raw, output_dir)` - Parse headers, decode the body and store everything.
* `retrieve_and_store(epop_client, mail_loop_counter, options)` - Retrieve, parse and store an email.
* `retrieve_and_store_all(epop_client, options)` - Read all emails and save them to disk.

Pop3mail.EpopDownloader.Options
===

A struct that holds pop3mail parameter options. Its fields are:

* `delete` - delete email after downloading. Default: false.
* `delivered` - true/false/nil. Skip emails with/without/whatever Delivered-To header.
* `max_mails` - maximum number of emails to download. nil = unlimited.
* `output_dir` - output directory.
* `password` - email account password.
* `port` - pop3 server port.
* `save_raw` - also save the unprocessed mail in a file called 'raw.eml'.
* `server` - pop3 server address.
* `ssl` - true/false. Turn on/off Secure Socket Layer.
* `username` - email account name.

Types
---

* `t()`

Pop3mail.FileStore
===

Store headers, messages and attachments on the filesystem.

Functions
---

* `dos2unix(multipart_part)` - Convert a part's content text from dos-format (with carriage return + linefeed after each line) to unix-format (with just the linefeed).
* `get_default_filename(media_type, charset, index)` - Construct a filename for an email message.
* `get_line_separator()` - Get the line separator for text files. On windows/dos this is carriage return + linefeed; on other platforms it is just the linefeed.
* `mkdir(base_dir, name, unsafe_addition)` - Make a directory. Returns the created directory name as a full path.
* `remove_unwanted_chars(text, max_chars)` - Remove characters which are undesirable for filesystems (like \ / : * ? " < > | [ ] and control characters).
* `set_default_filename(multipart_part)` - Set the default filename in the `multipart_part`.
* `store_mail_header(content, filename_prefix, unsafe_addition, dirname)` - Store the mail header.
* `store_part(multipart_part, base_dir)` - Store one part of the body.
* `store_raw(mail_content, filename, dirname)` - Store the raw email.

Pop3mail.Handler
===

Glue code for received mail to call the parse, decode and store functions.

Functions
---

* `check_process_and_store(mail, options)` - Check if the mail must be skipped; if not, process and store the email.
* `convert_date_to_dirname(date_str)` - Convert a date to a directory name meant for storing the email. The returned date is in the format yyyymmdd_hhmmss.
* `decode_body_content(header_list, body_content)` - Decode the body: multipart content, base64 and quoted-printable.
* `get_sender_name(from)` - Extract the sender name from the email 'From' header.
* `process_and_store(mail, options)` - Create a directory for the email based on date and subject, save the raw email, store the header summary and store everything from the body.
* `process_and_store_body(header_list, body_content, dirname)` - Decode and store the body.
* `remove_encodings(text)` - Makes sure that the encoding markers are removed and the text is decoded.

Pop3mail.Handler.Mail
===

A struct that holds mail content. Its fields are:

* `mail_content` - string with the complete raw email content.
* `mail_loop_counter` - current number of the email in the retrieval loop. In a POP3 connection each email is numbered, starting at 1.
* `header_list` - list with tuples of {:header, header name, header value}.
* `body_content` - email body.

Types
---

* `t()`

Pop3mail.Handler.Options
===

A struct that holds options for the Pop3mail.Handler. Its fields are:

* `delivered` - true/false/nil. Presence, absence or don't care of the 'Delivered-To' email header.
* `save_raw` - true/false. Save or don't save the raw email message.
* `base_dir` - directory where the emails must be stored.

Types
---

* `t()`

Pop3mail.Header
===

Email header related functions.

Functions
---

* `lookup(header_list, header_name, take \\ nil)` - Look up a value by header name. Returns a string.
* `store(header_list, filename_prefix, filename_addition, dirname)` - Store the email headers Date, From, To, Cc and Subject in a text file.

Pop3mail.Multipart
===

Parser for the RFC 2045 multipart content type (previously RFC 1341). It works recursively, because multipart content can contain other multiparts. The returned sequential list of Pop3mail.Part structs is flattened; the Part.path field shows where each part sits in the hierarchy. This module can also be useful for parsing RFC 7578 multipart/form-data (previously RFC 2388).

Functions
---

* `decode(encoding, text)` - Return the decoded text as a binary.
* `decode_base64!(text)` - Return the decoded text as a binary.
* `decode_lines(encoding, lines)` - Return the decoded lines as binaries.
* `extract_and_set_filename(multipart_part, content_parameters, parametername)` - Extract the (file-)name from a Content-Disposition or Content-Type value. Returns a Pop3mail.Part with filled-in filename and filename_charset.
* `get_param_number(key_value)` - Get the parameter number of key_value. `key_value` - format must be: key=value or key*<parameter number>*=value or key\*=value. Returns a string; can be empty.
* `get_value(key_value)` - Get the value of key_value. `key_value` - format must be: key=value or key*<number>*=value or key\*=value.
* `is_multipart?(multipart_part)` - Is this part a multipart? Looks if the media_type starts with multipart/.
* `lines_continued(line1, list)` - A multipart header line can continue on the next line: when the next line starts with a tab character, or when an opening double quote is not closed yet.
* `parse_content(multipart_part)` - Parse multipart content. Returns a flattened list of Pop3mail.Part's.
* `parse_content_type(multipart_part, content_type)` - Parse a multipart Content-Type header line. It can contain media_type, charset, (file-)name and boundary. Returns a Pop3mail.Part.
* `parse_content_type_parameters(multipart_part, content_type_parameters)` - Parse the value of a Content-Type header line. It can contain media_type, charset, (file-)name and boundary. Returns a Pop3mail.Part.
* `parse_disposition(multipart_part, disposition)` - Parse a multipart Content-Disposition header line. This is either inline or attachment, and it can contain a filename. Returns a Pop3mail.Part.
* `parse_disposition_parameters(multipart_part, disposition_parameters)` - Parse the value of a Content-Disposition header line. This is either inline or attachment, and it can contain a filename. Returns a Pop3mail.Part.
* `parse_multipart(boundary_name, raw_content, path)` - Parse the boundary in the multipart content.
* `parse_part(arg, boundary_name, path)` - Parse a part of the multipart content.
* `parse_part_content_id(multipart_part, encoding, list)` - Parse a multipart Content-ID header line. Returns a Pop3mail.Part.
* `parse_part_content_location(multipart_part, encoding, list)` - Parse a multipart Content-Location header line as defined in RFC 2557. Returns a Pop3mail.Part.
* `parse_part_content_type(multipart_part, encoding, list)` - Parse a multipart Content-Type header line. It can contain media_type, charset, (file-)name and boundary. Returns a Pop3mail.Part.
* `parse_part_decode(multipart_part, encoding, lines)` - Decode lines and add them as content in the multipart part. Returns a Pop3mail.Part.
* `parse_part_disposition(multipart_part, encoding, list)` - Parse a multipart Content-Disposition header line. Returns a Pop3mail.Part.
* `parse_part_finish(multipart_part, encoding, list)` - Finish parsing multipart header lines and start decoding the part content. Returns a Pop3mail.Part.
* `parse_part_lines(multipart_part, encoding, list)` - Parse multipart header lines. Returns a Pop3mail.Part.
* `parse_part_skip(multipart_part, encoding, list)` - Ignore a multipart header line. Returns a Pop3mail.Part.
* `parse_part_transfer_encoding(multipart_part, _, list)` - Parse a multipart Content-Transfer-Encoding header line. Returns a Pop3mail.Part.
* `parse_part_unknown_header(multipart_part, encoding, list)` - Skip an unknown multipart header line. Logs a warning. Returns a Pop3mail.Part.

Pop3mail.Part
===

A struct that holds a single part of a multipart; when there is no multipart, it contains the email body. Its fields are:

* `content` - binary with the part's content.
* `charset` - character encoding of the content (only applicable for text).
* `media_type` - mime type. Examples: text/plain, text/html, text/rtf, image/jpeg, application/octet-stream.
* `filename` - binary with the filename of the attachment.
* `filename_charset` - character encoding of the filename.
* `inline` - true/false/nil. true=inline content, false=attachment, nil=not specified.
* `path` - path within the hierarchy of multiparts. For example: relative/alternative.
* `index` - index number of a part within a multipart.
* `boundary` - boundary name of the multipart.
* `content_id` - cid. Generally HTML refers to embedded objects (images mostly) by cid; that is why the related images have a cid.
* `content_location` - URI location as defined in RFC 2557.

Types
---

* `t()`

Pop3mail.QuotedPrintable
===

Decode quoted-printable text as in RFC 2045 # 6.7.

Functions
---

* `decode(str)` - Decode the `str` string. Returns the result as a character list.

Pop3mail.StringUtils
===

String manipulation utilities.

Functions
---

* `contains?(content, search)` - true if search is found in content.
* `is_empty?(str)` - Test if a string is nil or has empty length.
* `printable(str, printable_alternative \\ "")` - Print the text if it is valid utf-8 encoded. If not, print the alternative text.
* `remove_balanced(text, remove)` - Strip a character, only when it occurs both at the start and the end of the text. Strip once only.
* `unquoted(text)` - Strip double quotes, only when they occur at the start and the end of the text. Strip balanced, once only.

Pop3mail.WordDecoder
===

Decode words as defined in RFC 2047.

Functions
---

* `decode_text(input_text)` - Decode a text with possibly encoded-words. Returns a list with tuples {charset, text}. Text that is not encoded is returned with the us-ascii charset.
* `decode_word(text)` - Decode a word: text with possibly encoded-words. Returns a list with tuples {charset, text}. Text that is not encoded is returned with the us-ascii charset.
* `decode_word(text, encoding)` - Decode a word with the given encoding.
* `decoded_text_list_to_string(decoded_text_list, add_charset_name \\ false)` - Concat the text from the decoded list. Does NOT convert to a common character set like utf-8.
* `get_charsets_besides_ascii(decoded_text_list)` - Returns a sorted, unique list of charsets.

mix run_pop3mail
===

Retrieve email from a POP3 mailbox. Examples:

* mix run_pop3mail --username=<EMAIL> --max 100 --raw
* mix run_pop3mail --help

Functions
---

* `run(args)` - Callback implementation for [`Mix.Task.run/1`](https://hexdocs.pm/mix/Mix.Task.html#c:run/1).
`mpx-url-loader`
===

> solve url in mpx

Due to limitations of the mini-program platform, loading local image resources in a mini program is restricted in several ways:

* CSS property values in `<style>` can only reference images via base64, not local paths
* The src attribute of the `<cover-image>` component in `<template>` only accepts a local path, not base64
* Other components in `<template>`, such as `<image>`, accept either a local path or base64 in their src attribute

`@mpxjs/url-loader` smooths over these restrictions. Developers do not need to write base64 in their source code; they reference image resources through ordinary paths, and the loader compiles them into code the mini program supports.

> For a deeper look at how `@mpxjs/url-loader` handles image resources for mini programs, see [mpx image resource handling](https://github.com/didi/mpx/blob/HEAD/understanding/resource.md)

**Install**

```
npm install @mpxjs/url-loader
```

**webpack.config.js**

```
webpackconfig = {
  module: {
    rules: [
      {
        test: /\.(png|jpe?g|gif|svg)(\?.*)?$/,
        loader: '@mpxjs/url-loader',
        options: /* options */
      }
    ]
  }
}
```

#### options

> These options only apply to resources in `<template>` and `<script>`, because resources in `<style>` are always base64-encoded

* **limit** `Number` in bytes; resources smaller than limit are base64-encoded, larger ones are emitted as asset files
* **name** `String` sets the output path of emitted images: `'img/[name].[ext]'`

#### Inline resource query options

* **fallback** `Any` An inline query option that **forces** the given resource to be emitted as an asset file. This is especially useful for images referenced by the `<cover-image>` component, since `<cover-image>` cannot use base64.

#### Example

**File layout**

```
component
│-- index.mpx
│-- bg-img1.png
│-- bg-img2.png
│-- bg-img3.png
```

**webpack.config.js**

```
webpackconfig = {
  module: {
    rules: [
      {
        test: /\.(png|jpe?g|gif|svg)(\?.*)?$/,
        loader: '@mpxjs/url-loader',
        options: {
          limit: 10000,
          name: 'img/[name].[ext]'
        }
      }
    ]
  }
}
```

**index.mpx**

```
<template>
  <image src="./bg-img1.png"></image>
  <image src="./bg-img2.png"></image>
  <cover-image src="./bg-img3.png?fallback"></cover-image>
</template>
```

> `bg-img1.png` is larger than 10KB and is emitted as an asset file
> `bg-img2.png` is smaller than 10KB and is base64-encoded
> `bg-img3.png` has `fallback` appended to its path to force asset-file output
pyod 1.1.0 documentation

Welcome to PyOD documentation!
===

**News**: We just released a 45-page paper, the most comprehensive [anomaly detection benchmark](https://www.andrew.cmu.edu/user/yuezhao2/papers/22-neurips-adbench.pdf) to date. The fully [open-sourced ADBench](https://github.com/Minqi824/ADBench) compares 30 anomaly detection algorithms on 57 benchmark datasets.

**For time-series outlier detection**, please use [TODS](https://github.com/datamllab/tods). **For graph outlier detection**, please use [PyGOD](https://pygod.org/).

PyOD is the most comprehensive and scalable **Python library** for **detecting outlying objects** in multivariate data. This exciting yet challenging field is commonly referred to as [Outlier Detection](https://en.wikipedia.org/wiki/Anomaly_detection) or [Anomaly Detection](https://en.wikipedia.org/wiki/Anomaly_detection).

PyOD includes more than 40 detection algorithms, from classical LOF (SIGMOD 2000) to the latest ECOD (TKDE 2022). Since 2017, PyOD [[AZNL19](#id84)] has been successfully used in numerous academic research projects and commercial products, with more than [10 million downloads](https://pepy.tech/project/pyod). It is also well acknowledged by the machine learning community, with various dedicated posts/tutorials, including [Analytics Vidhya](https://www.analyticsvidhya.com/blog/2019/02/outlier-detection-python-pyod/), [KDnuggets](https://www.kdnuggets.com/2019/02/outlier-detection-methods-cheat-sheet.html), and [Towards Data Science](https://towardsdatascience.com/anomaly-detection-for-dummies-15f148e559c1).

**PyOD is featured for**:

* **Unified APIs, detailed documentation, and interactive examples** across various algorithms.
* **Advanced models**, including **classical distance and density estimation**, **the latest deep learning methods**, and **emerging algorithms like ECOD**.
* **Optimized performance with JIT and parallelization** using [numba](https://github.com/numba/numba) and [joblib](https://github.com/joblib/joblib).
* **Fast training & prediction with SUOD** [[AZHC+21](#id101)].
**Outlier Detection with 5 Lines of Code**:

```
# train an ECOD detector
from pyod.models.ecod import ECOD
clf = ECOD()
clf.fit(X_train)

# get outlier scores
y_train_scores = clf.decision_scores_  # raw outlier scores on the train data
y_test_scores = clf.decision_function(X_test)  # predict raw outlier scores on test
```

**Personal suggestion on selecting an OD algorithm**. If you do not know which algorithm to try, go with:

* [ECOD](https://github.com/yzhao062/pyod/blob/master/examples/ecod_example.py): Example of using ECOD for outlier detection
* [Isolation Forest](https://github.com/yzhao062/pyod/blob/master/examples/iforest_example.py): Example of using Isolation Forest for outlier detection

They are both fast and interpretable. Or, you could try the more data-driven approach [MetaOD](https://github.com/yzhao062/MetaOD).

**Citing PyOD**: the [PyOD paper](http://www.jmlr.org/papers/volume20/19-011/19-011.pdf) is published in the [Journal of Machine Learning Research (JMLR)](http://www.jmlr.org/) (MLOSS track). If you use PyOD in a scientific publication, we would appreciate citations to the following paper:

```
@article{zhao2019pyod,
  author  = {<NAME> <NAME> and <NAME>},
  title   = {PyOD: A Python Toolbox for Scalable Outlier Detection},
  journal = {Journal of Machine Learning Research},
  year    = {2019},
  volume  = {20},
  number  = {96},
  pages   = {1-7},
  url     = {http://jmlr.org/papers/v20/19-011.html}
}
```

or:

```
<NAME>., <NAME>. and <NAME>., 2019. PyOD: A Python Toolbox for Scalable Outlier Detection. Journal of machine learning research (JMLR), 20(96), pp.1-7.
```

If you want more general insights into anomaly detection and/or algorithm performance comparison, please see our NeurIPS 2022 paper [ADBench: Anomaly Detection Benchmark](https://www.andrew.cmu.edu/user/yuezhao2/papers/22-neurips-adbench.pdf):

```
@inproceedings{han2022adbench,
  title     = {ADBench: Anomaly Detection Benchmark},
  author    = {<NAME> and <NAME> and <NAME> and <NAME> and <NAME>},
  booktitle = {Neural Information Processing Systems (NeurIPS)},
  year      = {2022},
}
```

**Key Links and Resources**:

* [View the latest codes on Github](https://github.com/yzhao062/pyod)
* [Execute Interactive Jupyter Notebooks](https://mybinder.org/v2/gh/yzhao062/pyod/master)
* [Anomaly Detection Resources](https://github.com/yzhao062/anomaly-detection-resources)

---

Benchmark
===

We just released a 45-page paper, the most comprehensive benchmark to date: [ADBench: Anomaly Detection Benchmark](https://arxiv.org/abs/2206.09426). The fully [open-sourced ADBench](https://github.com/Minqi824/ADBench) compares 30 anomaly detection algorithms on 57 benchmark datasets.
(Figure: the organization of ADBench; not reproduced here.)

Implemented Algorithms
===

The PyOD toolkit consists of three major functional groups:

**(i) Individual Detection Algorithms**:

| Type | Abbr | Algorithm | Year | Class | Ref |
| --- | --- | --- | --- | --- | --- |
| Probabilistic | ECOD | Unsupervised Outlier Detection Using Empirical Cumulative Distribution Functions | 2022 | [`pyod.models.ecod.ECOD`](index.html#pyod.models.ecod.ECOD) | [[ALZH+22](#id105)] |
| Probabilistic | COPOD | COPOD: Copula-Based Outlier Detection | 2020 | [`pyod.models.copod.COPOD`](index.html#pyod.models.copod.COPOD) | [[ALZB+20](#id99)] |
| Probabilistic | ABOD | Angle-Based Outlier Detection | 2008 | [`pyod.models.abod.ABOD`](index.html#pyod.models.abod.ABOD) | [[AKZ+08](#id69)] |
| Probabilistic | FastABOD | Fast Angle-Based Outlier Detection using approximation | 2008 | [`pyod.models.abod.ABOD`](index.html#pyod.models.abod.ABOD) | [[AKZ+08](#id69)] |
| Probabilistic | MAD | Median Absolute Deviation (MAD) | 1993 | [`pyod.models.mad.MAD`](index.html#pyod.models.mad.MAD) | [[AIH93](#id98)] |
| Probabilistic | SOS | Stochastic Outlier Selection | 2012 | [`pyod.models.sos.SOS`](index.html#pyod.models.sos.SOS) | [[AJHuszarPvdH12](#id80)] |
| Probabilistic | QMCD | Quasi-Monte Carlo Discrepancy outlier detection | 2001 | [`pyod.models.qmcd.QMCD`](index.html#pyod.models.qmcd.QMCD) | [[AFM01](#id116)] |
| Probabilistic | KDE | Outlier Detection with Kernel Density Functions | 2007 | [`pyod.models.kde.KDE`](index.html#pyod.models.kde.KDE) | [[ALLP07](#id107)] |
| Probabilistic | Sampling | Rapid distance-based outlier detection via sampling | 2013 | [`pyod.models.sampling.Sampling`](index.html#pyod.models.sampling.Sampling) | [[ASB13](#id108)] |
| Probabilistic | GMM | Probabilistic Mixture Modeling for Outlier Analysis | | [`pyod.models.gmm.GMM`](index.html#pyod.models.gmm.GMM) | [[AAgg15](#id73)] [Ch.2] |
| Linear Model | PCA | Principal Component Analysis (the sum of weighted projected distances to the eigenvector hyperplanes) | 2003 | [`pyod.models.pca.PCA`](index.html#pyod.models.pca.PCA) | [[ASCSC03](#id72)] |
| Linear Model | KPCA | Kernel Principal Component Analysis | 2007 | [`pyod.models.kpca.KPCA`](index.html#pyod.models.kpca.KPCA) | [[AHof07](#id115)] |
| Linear Model | MCD | Minimum Covariance Determinant (use the mahalanobis distances as the outlier scores) | 1999 | [`pyod.models.mcd.MCD`](index.html#pyod.models.mcd.MCD) | [[AHR04](#id77), [ARD99](#id76)] |
| Linear Model | CD | Use Cook's distance for outlier detection | 1977 | [`pyod.models.cd.CD`](index.html#pyod.models.cd.CD) | [[ACoo77](#id106)] |
| Linear Model | OCSVM | One-Class Support Vector Machines | 2001 | [`pyod.models.ocsvm.OCSVM`](index.html#pyod.models.ocsvm.OCSVM) | [[AScholkopfPST+01](#id87)] |
| Linear Model | LMDD | Deviation-based Outlier Detection (LMDD) | 1996 | [`pyod.models.lmdd.LMDD`](index.html#pyod.models.lmdd.LMDD) | [[AAAR96](#id94)] |
| Proximity-Based | LOF | Local Outlier Factor | 2000 | [`pyod.models.lof.LOF`](index.html#pyod.models.lof.LOF) | [[ABKNS00](#id74)] |
| Proximity-Based | COF | Connectivity-Based Outlier Factor | 2002 | [`pyod.models.cof.COF`](index.html#pyod.models.cof.COF) | [[ATCFC02](#id88)] |
| Proximity-Based | Incr. COF | Memory Efficient Connectivity-Based Outlier Factor (slower but reduces storage complexity) | 2002 | [`pyod.models.cof.COF`](index.html#pyod.models.cof.COF) | [[ATCFC02](#id88)] |
| Proximity-Based | CBLOF | Clustering-Based Local Outlier Factor | 2003 | [`pyod.models.cblof.CBLOF`](index.html#pyod.models.cblof.CBLOF) | [[AHXD03](#id78)] |
| Proximity-Based | LOCI | LOCI: Fast outlier detection using the local correlation integral | 2003 | [`pyod.models.loci.LOCI`](index.html#pyod.models.loci.LOCI) | [[APKGF03](#id81)] |
| Proximity-Based | HBOS | Histogram-based Outlier Score | 2012 | [`pyod.models.hbos.HBOS`](index.html#pyod.models.hbos.HBOS) | [[AGD12](#id71)] |
| Proximity-Based | kNN | k Nearest Neighbors (use the distance to the kth nearest neighbor as the outlier score) | 2000 | [`pyod.models.knn.KNN`](index.html#pyod.models.knn.KNN) | [[AAP02](#id68), [ARRS00](#id67)] |
| Proximity-Based | AvgKNN | Average kNN (use the average distance to k nearest neighbors as the outlier score) | 2002 | [`pyod.models.knn.KNN`](index.html#pyod.models.knn.KNN) | [[AAP02](#id68), [ARRS00](#id67)] |
| Proximity-Based | MedKNN | Median kNN (use the median distance to k nearest neighbors as the outlier score) | 2002 | [`pyod.models.knn.KNN`](index.html#pyod.models.knn.KNN) | [[AAP02](#id68), [ARRS00](#id67)] |
| Proximity-Based | SOD | Subspace Outlier Detection | 2009 | [`pyod.models.sod.SOD`](index.html#pyod.models.sod.SOD) | [[AKKrogerSZ09](#id90)] |
| Proximity-Based | ROD | Rotation-based Outlier Detection | 2020 | [`pyod.models.rod.ROD`](index.html#pyod.models.rod.ROD) | [[AABC20](#id100)] |
| Outlier Ensembles | IForest | Isolation Forest | 2008 | [`pyod.models.iforest.IForest`](index.html#pyod.models.iforest.IForest) | [[ALTZ08](#id63), [ALTZ12](#id64)] |
| Outlier Ensembles | INNE | Isolation-based Anomaly Detection Using Nearest-Neighbor Ensembles | 2018 | [`pyod.models.inne.INNE`](index.html#pyod.models.inne.INNE) | [[ABTA+18](#id109)] |
| Outlier Ensembles | FB | Feature Bagging | 2005 | [`pyod.models.feature_bagging.FeatureBagging`](index.html#pyod.models.feature_bagging.FeatureBagging) | [[ALK05](#id70)] |
| Outlier Ensembles | LSCP | LSCP: Locally Selective Combination of Parallel Outlier Ensembles | 2019 | [`pyod.models.lscp.LSCP`](index.html#pyod.models.lscp.LSCP) | [[AZNHL19](#id82)] |
| Outlier Ensembles | XGBOD | Extreme Boosting Based Outlier Detection **(Supervised)** | 2018 | [`pyod.models.xgbod.XGBOD`](index.html#pyod.models.xgbod.XGBOD) | [[AZH18](#id75)] |
| Outlier Ensembles | LODA | Lightweight On-line Detector of Anomalies | 2016 | [`pyod.models.loda.LODA`](index.html#pyod.models.loda.LODA) | [[APevny16](#id96)] |
| Outlier Ensembles | SUOD | SUOD: Accelerating Large-scale Unsupervised Heterogeneous Outlier Detection **(Acceleration)** | 2021 | [`pyod.models.suod.SUOD`](index.html#pyod.models.suod.SUOD) | [[AZHC+21](#id101)] |
| Neural Networks | AutoEncoder | Fully connected AutoEncoder (use reconstruction error as the outlier score) | 2015 | [`pyod.models.auto_encoder.AutoEncoder`](index.html#pyod.models.auto_encoder.AutoEncoder) | [[AAgg15](#id73)] [Ch.3] |
| Neural Networks | VAE | Variational AutoEncoder (use reconstruction error as the outlier score) | 2013 | [`pyod.models.vae.VAE`](index.html#pyod.models.vae.VAE) | [[AKW13](#id95)] |
| Neural Networks | Beta-VAE | Variational AutoEncoder (a customized loss term obtained by varying gamma and capacity) | 2018 | [`pyod.models.vae.VAE`](index.html#pyod.models.vae.VAE) | [[ABHP+18](#id97)] |
Single-Objective Generative Adversarial Active Learning | 2019 | [`pyod.models.so_gaal.SO_GAAL`](index.html#pyod.models.so_gaal.SO_GAAL) | [[ALLZ+19](#id83)] | | Neural Networks | MO_GAAL | Multiple-Objective Generative Adversarial Active Learning | 2019 | [`pyod.models.mo_gaal.MO_GAAL`](index.html#pyod.models.mo_gaal.MO_GAAL) | [[ALLZ+19](#id83)] | | Neural Networks | DeepSVDD | Deep One-Class Classification | 2018 | [`pyod.models.deep_svdd.DeepSVDD`](index.html#pyod.models.deep_svdd.DeepSVDD) | [[ARVG+18](#id102)] | | Neural Networks | AnoGAN | Anomaly Detection with Generative Adversarial Networks | 2017 | [`pyod.models.anogan.AnoGAN`](index.html#pyod.models.anogan.AnoGAN) | [[ASSeebockW+17](#id110)] | | Neural Networks | ALAD | Adversarially learned anomaly detection | 2018 | [`pyod.models.alad.ALAD`](index.html#pyod.models.alad.ALAD) | [[AZRF+18](#id114)] | | Graph-based | R-Graph | Outlier detection by R-graph | 2017 | [`pyod.models.rgraph.RGraph`](index.html#pyod.models.rgraph.RGraph) | [[BYRV17](index.html#id848)] | | Graph-based | LUNAR | LUNAR: Unifying Local Outlier Detection Methods via Graph Neural Networks | 2022 | [`pyod.models.lunar.LUNAR`](index.html#pyod.models.lunar.LUNAR) | [[AGHNN22](#id111)] | **(ii) Outlier Ensembles & Outlier Detector Combination Frameworks**: | Type | Abbr | Algorithm | Year | Ref | | | --- | --- | --- | --- | --- | --- | | Outlier Ensembles | | Feature Bagging | 2005 | [`pyod.models.feature_bagging.FeatureBagging`](index.html#pyod.models.feature_bagging.FeatureBagging) | [[ALK05](#id70)] | | Outlier Ensembles | LSCP | LSCP: Locally Selective Combination of Parallel Outlier Ensembles | 2019 | [`pyod.models.lscp.LSCP`](index.html#pyod.models.lscp.LSCP) | [[AZNHL19](#id82)] | | Outlier Ensembles | XGBOD | Extreme Boosting Based Outlier Detection **(Supervised)** | 2018 | [`pyod.models.xgbod.XGBOD`](index.html#pyod.models.xgbod.XGBOD) | [[AZH18](#id75)] | | Outlier Ensembles | LODA | Lightweight On-line Detector of Anomalies | 2016 | [`pyod.models.loda.LODA`](index.html#pyod.models.loda.LODA) | [[APevny16](#id96)] | | Outlier Ensembles | SUOD | SUOD: Accelerating Large-scale Unsupervised Heterogeneous Outlier Detection **(Acceleration)** | 2021 | [`pyod.models.suod.SUOD`](index.html#pyod.models.suod.SUOD) | [[AZHC+21](#id101)] | | Combination | Average | Simple combination by averaging the scores | 2015 | [`pyod.models.combination.average()`](index.html#pyod.models.combination.average) | [[AAS15](#id66)] | | Combination | Weighted Average | Simple combination by averaging the scores with detector weights | 2015 | [`pyod.models.combination.average()`](index.html#pyod.models.combination.average) | [[AAS15](#id66)] | | Combination | Maximization | Simple combination by taking the maximum scores | 2015 | [`pyod.models.combination.maximization()`](index.html#pyod.models.combination.maximization) | [[AAS15](#id66)] | | Combination | AOM | Average of Maximum | 2015 | [`pyod.models.combination.aom()`](index.html#pyod.models.combination.aom) | [[AAS15](#id66)] | | Combination | MOA | Maximum of Average | 2015 | [`pyod.models.combination.moa()`](index.html#pyod.models.combination.moa) | [[AAS15](#id66)] | | Combination | Median | Simple combination by taking the median of the scores | 2015 | [`pyod.models.combination.median()`](index.html#pyod.models.combination.median) | [[AAS15](#id66)] | | Combination | majority Vote | Simple combination by taking the majority vote of the labels (weights can be used) | 2015 | 
**(iii) Utility Functions**:

| Type | Name | Function |
| --- | --- | --- |
| Data | [`pyod.utils.data.generate_data()`](index.html#pyod.utils.data.generate_data) | Synthesized data generation; normal data is generated by a multivariate Gaussian and outliers are generated by a uniform distribution |
| Data | [`pyod.utils.data.generate_data_clusters()`](index.html#pyod.utils.data.generate_data_clusters) | Synthesized data generation in clusters; more complex data patterns can be created with multiple clusters |
| Stat | [`pyod.utils.stat_models.wpearsonr()`](index.html#pyod.utils.stat_models.wpearsonr) | Calculate the weighted Pearson correlation of two samples |
| Utility | [`pyod.utils.utility.get_label_n()`](index.html#pyod.utils.utility.get_label_n) | Turn raw outlier scores into binary labels by assigning 1 to the top n outlier scores |
| Utility | [`pyod.utils.utility.precision_n_scores()`](index.html#pyod.utils.utility.precision_n_scores) | Calculate precision @ rank n |

**A comparison of the implemented models** is made available below ([Figure](https://raw.githubusercontent.com/yzhao062/pyod/master/examples/ALL.png), [compare_all_models.py](https://github.com/yzhao062/pyod/blob/master/examples/compare_all_models.py), [Interactive Jupyter Notebooks](https://mybinder.org/v2/gh/yzhao062/pyod/master)). For Jupyter Notebooks, please navigate to **"/notebooks/Compare All Models.ipynb"**.

Check the latest [benchmark](https://pyod.readthedocs.io/en/latest/benchmark.html). You could replicate this process by running [benchmark.py](https://github.com/yzhao062/pyod/blob/master/notebooks/benchmark.py).

API Cheatsheet & Reference[#](#api-cheatsheet-reference)
===

The following APIs are applicable for all detector models for easy use.

* [`pyod.models.base.BaseDetector.fit()`](index.html#pyod.models.base.BaseDetector.fit): Fit detector. y is ignored in unsupervised methods.
* [`pyod.models.base.BaseDetector.decision_function()`](index.html#pyod.models.base.BaseDetector.decision_function): Predict raw anomaly score of X using the fitted detector.
* [`pyod.models.base.BaseDetector.predict()`](index.html#pyod.models.base.BaseDetector.predict): Predict if a particular sample is an outlier or not using the fitted detector.
* [`pyod.models.base.BaseDetector.predict_proba()`](index.html#pyod.models.base.BaseDetector.predict_proba): Predict the probability of a sample being outlier using the fitted detector.
* [`pyod.models.base.BaseDetector.predict_confidence()`](index.html#pyod.models.base.BaseDetector.predict_confidence): Predict the model's sample-wise confidence (available in predict and predict_proba).

Key Attributes of a fitted model:

* `pyod.models.base.BaseDetector.decision_scores_`: The outlier scores of the training data. The higher, the more abnormal. Outliers tend to have higher scores.
* `pyod.models.base.BaseDetector.labels_`: The binary labels of the training data. 0 stands for inliers and 1 for outliers/anomalies.
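A minimal sketch of this unified API, using the kNN detector and the synthetic data helper that are both introduced in the examples below (variable names are illustrative):

```
from pyod.models.knn import KNN
from pyod.utils.data import generate_data

# toy data: 200 training and 100 test points with 10% outliers
X_train, X_test, y_train, y_test = generate_data(
    n_train=200, n_test=100, contamination=0.1)

clf = KNN()
clf.fit(X_train)                     # fit on the training data (y is ignored)

train_scores = clf.decision_scores_  # raw outlier scores of the training data
test_labels = clf.predict(X_test)    # binary labels (0: inlier, 1: outlier)
test_scores = clf.decision_function(X_test)  # raw outlier scores of X_test
test_proba = clf.predict_proba(X_test)       # outlier probability in [0, 1]
```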
---

Installation[#](#installation)
---

It is recommended to use **pip** or **conda** for installation. Please make sure **the latest version** is installed, as PyOD is updated frequently:

```
pip install pyod            # normal install
pip install --upgrade pyod  # or update if needed
```

```
conda install -c conda-forge pyod
```

Alternatively, you could clone the repository and install from source:

```
git clone https://github.com/yzhao062/pyod.git
cd pyod
pip install .
```

**Required Dependencies**:

* Python 3.6+
* joblib
* matplotlib
* numpy>=1.19
* numba>=0.51
* scipy>=1.5.1
* scikit_learn>=0.20.0
* six

**Optional Dependencies (see details below)**:

* combo (optional, required for models/combination.py and FeatureBagging)
* keras/tensorflow (optional, required for AutoEncoder and other deep learning models)
* pandas (optional, required for running the benchmark)
* suod (optional, required for running the SUOD model)
* xgboost (optional, required for XGBOD)
* pythresh (optional, required for thresholding)

Warning

PyOD has multiple neural-network-based models, e.g., AutoEncoders, which are implemented in both TensorFlow and PyTorch. However, PyOD does **NOT** install these deep learning libraries for you. This reduces the risk of interfering with your local copies. If you want to use neural-net-based models, please make sure these deep learning libraries are installed. Instructions are provided: [neural-net FAQ](https://github.com/yzhao062/pyod/wiki/Setting-up-Keras-and-Tensorflow-for-Neural-net-Based-models). Similarly, models depending on **xgboost**, e.g., XGBOD, would **NOT** enforce xgboost installation by default.

Model Save & Load[#](#model-save-load)
---

PyOD takes a similar approach to sklearn regarding model persistence. See [model persistence](https://scikit-learn.org/stable/modules/model_persistence.html) for clarification. In short, we recommend using joblib or pickle to save and load PyOD models. See ["examples/save_load_model_example.py"](https://github.com/yzhao062/pyod/blob/master/examples/save_load_model_example.py) for an example. It is as simple as below:

```
from joblib import dump, load

# save the model
dump(clf, 'clf.joblib')
# load the model
clf = load('clf.joblib')
```

It is known that there are challenges in saving neural network models. Check [#328](https://github.com/yzhao062/pyod/issues/328#issuecomment-917192704) and [#88](https://github.com/yzhao062/pyod/issues/88#issuecomment-615343139) for temporary workarounds.

Fast Train with SUOD[#](#fast-train-with-suod)
---

**Fast training and prediction**: it is possible to train and predict with a large number of detection models in PyOD by leveraging the SUOD framework. See the [SUOD paper](https://www.andrew.cmu.edu/user/yuezhao2/papers/21-mlsys-suod.pdf) and the [SUOD example](https://github.com/yzhao062/pyod/blob/master/examples/suod_example.py).

```
from pyod.models.copod import COPOD
from pyod.models.iforest import IForest
from pyod.models.lof import LOF
from pyod.models.suod import SUOD

# initialize a group of outlier detectors for acceleration
detector_list = [LOF(n_neighbors=15), LOF(n_neighbors=20),
                 LOF(n_neighbors=25), LOF(n_neighbors=35),
                 COPOD(), IForest(n_estimators=100),
                 IForest(n_estimators=200)]

# decide the number of parallel processes and the combination method;
# clf can then be used as any outlier detection model
clf = SUOD(base_estimators=detector_list, n_jobs=2, combination='average',
           verbose=False)
```
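Once initialized, the SUOD ensemble follows the same detector API as any single model; a short sketch, assuming `X_train` and `X_test` are available as in the examples below:

```
clf.fit(X_train)                               # fit all base detectors in parallel
y_test_scores = clf.decision_function(X_test)  # combined outlier scores
y_test_pred = clf.predict(X_test)              # binary outlier labels
```

Examples[#](#examples)
---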
### Featured Tutorials[#](#featured-tutorials)

PyOD has been well acknowledged by the machine learning community with a few featured posts and tutorials.

**Analytics Vidhya**: [An Awesome Tutorial to Learn Outlier Detection in Python using PyOD Library](https://www.analyticsvidhya.com/blog/2019/02/outlier-detection-python-pyod/)

**KDnuggets**: [Intuitive Visualization of Outlier Detection Methods](https://www.kdnuggets.com/2019/02/outlier-detection-methods-cheat-sheet.html)

**Towards Data Science**: [Anomaly Detection for Dummies](https://towardsdatascience.com/anomaly-detection-for-dummies-15f148e559c1)

**Computer Vision News (March 2019)**: [Python Open Source Toolbox for Outlier Detection](https://rsipvision.com/ComputerVisionNews-2019March/18/)

**awesome-machine-learning**: [General-Purpose Machine Learning](https://github.com/josephmisiti/awesome-machine-learning#python-general-purpose)

---

### kNN Example[#](#knn-example)

Full example: [knn_example.py](https://github.com/yzhao062/Pyod/blob/master/examples/knn_example.py)

1. Import models:

> ```
> from pyod.models.knn import KNN  # kNN detector
> from pyod.utils.data import generate_data
> ```

2. Generate sample data with [`pyod.utils.data.generate_data()`](index.html#pyod.utils.data.generate_data):

> ```
> contamination = 0.1  # percentage of outliers
> n_train = 200  # number of training points
> n_test = 100  # number of testing points
> X_train, X_test, y_train, y_test = generate_data(
>     n_train=n_train, n_test=n_test, contamination=contamination)
> ```

3. Initialize a [`pyod.models.knn.KNN`](index.html#pyod.models.knn.KNN) detector, fit the model, and make the prediction:

> ```
> # train kNN detector
> clf_name = 'KNN'
> clf = KNN()
> clf.fit(X_train)
>
> # get the prediction labels and outlier scores of the training data
> y_train_pred = clf.labels_  # binary labels (0: inliers, 1: outliers)
> y_train_scores = clf.decision_scores_  # raw outlier scores
>
> # get the prediction on the test data
> y_test_pred = clf.predict(X_test)  # outlier labels (0 or 1)
> y_test_scores = clf.decision_function(X_test)  # outlier scores
>
> # it is possible to get the prediction confidence as well
> # outlier labels (0 or 1) and confidence in the range of [0, 1]
> y_test_pred, y_test_pred_confidence = clf.predict(X_test, return_confidence=True)
> ```

4. Evaluate the prediction using ROC and Precision @ Rank n with [`pyod.utils.data.evaluate_print()`](index.html#pyod.utils.data.evaluate_print):

> ```
> from pyod.utils.data import evaluate_print
>
> # evaluate and print the results
> print("\nOn Training Data:")
> evaluate_print(clf_name, y_train, y_train_scores)
> print("\nOn Test Data:")
> evaluate_print(clf_name, y_test, y_test_scores)
> ```

5. See sample outputs on both training and test data:

> ```
> On Training Data:
> KNN ROC:1.0, precision @ rank n:1.0
>
> On Test Data:
> KNN ROC:0.9989, precision @ rank n:0.9
> ```

6. Generate the visualizations with the visualize function included in all examples:

> ```
> from pyod.utils.example import visualize
>
> visualize(clf_name, X_train, y_train, X_test, y_test, y_train_pred,
>           y_test_pred, show_figure=True, save_figure=False)
> ```
---

### Model Combination Example[#](#model-combination-example)

Outlier detection often suffers from model instability due to its unsupervised nature. Thus, it is recommended to combine the outputs of various detectors, e.g., by averaging, to improve robustness. Detector combination is a subfield of outlier ensembles; refer to [[BKalayciE18](#id26)] for more information.

Four score combination mechanisms are shown in this demo:

1. **Average**: average the scores of all detectors.
2. **Maximization**: take the maximum score across all detectors.
3. **Average of Maximum (AOM)**: divide base detectors into subgroups and take the maximum score for each subgroup. The final score is the average of all subgroup scores.
4. **Maximum of Average (MOA)**: divide base detectors into subgroups and take the average score for each subgroup. The final score is the maximum of all subgroup scores.

"examples/comb_example.py" illustrates the API for combining the output of multiple base detectors ([comb_example.py](https://github.com/yzhao062/pyod/blob/master/examples/comb_example.py), [Jupyter Notebooks](https://mybinder.org/v2/gh/yzhao062/pyod/master)). For Jupyter Notebooks, please navigate to **"/notebooks/Model Combination.ipynb"**.

1. Import models and generate sample data:

> ```
> import numpy as np
>
> from pyod.models.knn import KNN  # kNN detector
> from pyod.models.combination import aom, moa, average, maximization
> from pyod.utils.data import generate_data
> from pyod.utils.utility import standardizer
>
> # generate sample train/test data
> X_train, X_test, y_train, y_test = generate_data(train_only=False)
>
> # standardize the features before fitting the detectors
> X_train_norm, X_test_norm = standardizer(X_train, X_test)
> ```

2. Initialize 20 kNN outlier detectors with different k (10 to 200), and get the outlier scores:

> ```
> # initialize 20 base detectors for combination
> k_list = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120, 130, 140,
>           150, 160, 170, 180, 190, 200]
> n_clf = len(k_list)  # number of classifiers being trained
>
> train_scores = np.zeros([X_train.shape[0], n_clf])
> test_scores = np.zeros([X_test.shape[0], n_clf])
>
> for i in range(n_clf):
>     k = k_list[i]
>     clf = KNN(n_neighbors=k, method='largest')
>     clf.fit(X_train_norm)
>     train_scores[:, i] = clf.decision_scores_
>     test_scores[:, i] = clf.decision_function(X_test_norm)
> ```

3. The output scores are then standardized to zero mean and unit standard deviation before combination. This step is crucial to adjust the detector outputs to the same scale:

> ```
> from pyod.utils.utility import standardizer
>
> # scores have to be normalized before combination
> train_scores_norm, test_scores_norm = standardizer(train_scores, test_scores)
> ```

4. Four different combination algorithms are applied as described above:

> ```
> comb_by_average = average(test_scores_norm)
> comb_by_maximization = maximization(test_scores_norm)
> comb_by_aom = aom(test_scores_norm, 5)  # 5 groups
> comb_by_moa = moa(test_scores_norm, 5)  # 5 groups
> ```

5. Finally, all four combination methods are evaluated by ROC and Precision @ Rank n:

> ```
> Combining 20 kNN detectors
> Combination by Average ROC:0.9194, precision @ rank n:0.4531
> Combination by Maximization ROC:0.9198, precision @ rank n:0.4688
> Combination by AOM ROC:0.9257, precision @ rank n:0.4844
> Combination by MOA ROC:0.9263, precision @ rank n:0.4688
> ```
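The numbers above can be reproduced with `pyod.utils.data.evaluate_print()`; a short sketch, assuming the arrays from the previous steps are in scope:

```
from pyod.utils.data import evaluate_print

# evaluate each combination method against the test labels
evaluate_print('Combination by Average', y_test, comb_by_average)
evaluate_print('Combination by Maximization', y_test, comb_by_maximization)
evaluate_print('Combination by AOM', y_test, comb_by_aom)
evaluate_print('Combination by MOA', y_test, comb_by_moa)
```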
### Thresholding Example[#](#thresholding-example)

Full example: [threshold_example.py](https://github.com/yzhao062/Pyod/blob/master/examples/threshold_example.py)

1. Import models:

> ```
> from pyod.models.knn import KNN  # kNN detector
> from pyod.models.thresholds import FILTER  # Filter thresholder
> ```

2. Generate sample data with [`pyod.utils.data.generate_data()`](index.html#pyod.utils.data.generate_data):

> ```
> contamination = 0.1  # percentage of outliers
> n_train = 200  # number of training points
> n_test = 100  # number of testing points
> X_train, X_test, y_train, y_test = generate_data(
>     n_train=n_train, n_test=n_test, contamination=contamination)
> ```

3. Initialize a [`pyod.models.knn.KNN`](index.html#pyod.models.knn.KNN) detector, fit the model, and make the prediction:

> ```
> # train kNN detector and apply FILTER thresholding
> clf_name = 'KNN'
> clf = KNN(contamination=FILTER())
> clf.fit(X_train)
>
> # get the prediction labels and outlier scores of the training data
> y_train_pred = clf.labels_  # binary labels (0: inliers, 1: outliers)
> y_train_scores = clf.decision_scores_  # raw outlier scores
> ```

References

[BKalayciE18](#id1) <NAME> and <NAME>. Anomaly detection in wireless sensor networks data by using histogram based outlier score method. In *2018 2nd International Symposium on Multidisciplinary Studies and Innovative Technologies (ISMSIT)*, 1–6. IEEE, 2018.

Benchmarks[#](#benchmarks)
---

### Latest ADBench (2022)[#](#latest-adbench-2022)

We recently released a 36-page [anomaly detection benchmark paper](https://www.andrew.cmu.edu/user/yuezhao2/papers/22-preprint-adbench.pdf), the most comprehensive to date [[AHHH+22](index.html#id112)]. The fully [open-sourced ADBench](https://github.com/Minqi824/ADBench) compares 30 anomaly detection algorithms on 55 benchmark datasets. The organization of **ADBench** is illustrated in a figure in its repository (figure omitted here).

### Old Results (2019)[#](#old-results-2019)

A benchmark is supplied for select algorithms to provide an overview of the implemented models. In total, 17 benchmark datasets are used for comparison; they can be downloaded at [ODDS](http://odds.cs.stonybrook.edu/#table1).

Each dataset is first split into 60% for training and 40% for testing. All experiments are repeated 10 times independently with random splits, and the mean of the 10 trials is reported as the final result. Three evaluation metrics are provided:

* The area under the receiver operating characteristic (ROC) curve
* Precision @ rank n (P@N)
* Execution time

You could replicate this process by running [benchmark.py](https://github.com/yzhao062/pyod/blob/master/notebooks/benchmark.py).
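For orientation, one trial of this procedure reduces to a few lines. A condensed sketch (not the benchmark script itself), assuming `X` and `y` hold one of the ODDS datasets and using `precision_n_scores` from `pyod.utils.utility` for P@N:

```
import time

from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

from pyod.models.knn import KNN
from pyod.utils.utility import precision_n_scores

# 60/40 split, redrawn independently in each of the 10 trials
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4)

clf = KNN()
start = time.time()
clf.fit(X_train)
test_scores = clf.decision_function(X_test)
duration = time.time() - start                # execution time

roc = roc_auc_score(y_test, test_scores)      # ROC
prn = precision_n_scores(y_test, test_scores) # precision @ rank n
```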
We also provide the hardware specification for reference.

| Specification | Value |
| --- | --- |
| Platform | PC |
| OS | Microsoft Windows 10 Enterprise |
| CPU | Intel i7-6820HQ @ 2.70GHz |
| RAM | 32GB |
| Software | PyCharm 2018.02 |
| Python | Python 3.6.2 |
| Core | Single core (no parallelization) |

### ROC Performance[#](#roc-performance)

ROC Performances (average of 10 independent trials)[#](#id2)

| Data | #Samples | # Dimensions | Outlier Perc | ABOD | CBLOF | FB | HBOS | IForest | KNN | LOF | MCD | OCSVM | PCA |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| arrhythmia | 452 | 274 | 14.6018 | 0.7688 | 0.7835 | 0.7781 | 0.8219 | 0.8005 | 0.7861 | 0.7787 | 0.7790 | 0.7812 | 0.7815 |
| cardio | 1831 | 21 | 9.6122 | 0.5692 | 0.9276 | 0.5867 | 0.8351 | 0.9213 | 0.7236 | 0.5736 | 0.8135 | 0.9348 | 0.9504 |
| glass | 214 | 9 | 4.2056 | 0.7951 | 0.8504 | 0.8726 | 0.7389 | 0.7569 | 0.8508 | 0.8644 | 0.7901 | 0.6324 | 0.6747 |
| ionosphere | 351 | 33 | 35.8974 | 0.9248 | 0.8134 | 0.8730 | 0.5614 | 0.8499 | 0.9267 | 0.8753 | 0.9557 | 0.8419 | 0.7962 |
| letter | 1600 | 32 | 6.2500 | 0.8783 | 0.5070 | 0.8660 | 0.5927 | 0.6420 | 0.8766 | 0.8594 | 0.8074 | 0.6118 | 0.5283 |
| lympho | 148 | 18 | 4.0541 | 0.9110 | 0.9728 | 0.9753 | 0.9957 | 0.9941 | 0.9745 | 0.9771 | 0.9000 | 0.9759 | 0.9847 |
| mnist | 7603 | 100 | 9.2069 | 0.7815 | 0.8009 | 0.7205 | 0.5742 | 0.8159 | 0.8481 | 0.7161 | 0.8666 | 0.8529 | 0.8527 |
| musk | 3062 | 166 | 3.1679 | 0.1844 | 0.9879 | 0.5263 | 1.0000 | 0.9999 | 0.7986 | 0.5287 | 0.9998 | 1.0000 | 1.0000 |
| optdigits | 5216 | 64 | 2.8758 | 0.4667 | 0.5089 | 0.4434 | 0.8732 | 0.7253 | 0.3708 | 0.4500 | 0.3979 | 0.4997 | 0.5086 |
| pendigits | 6870 | 16 | 2.2707 | 0.6878 | 0.9486 | 0.4595 | 0.9238 | 0.9435 | 0.7486 | 0.4698 | 0.8344 | 0.9303 | 0.9352 |
| pima | 768 | 8 | 34.8958 | 0.6794 | 0.7348 | 0.6235 | 0.7000 | 0.6806 | 0.7078 | 0.6271 | 0.6753 | 0.6215 | 0.6481 |
| satellite | 6435 | 36 | 31.6395 | 0.5714 | 0.6693 | 0.5572 | 0.7581 | 0.7022 | 0.6836 | 0.5573 | 0.8030 | 0.6622 | 0.5988 |
| satimage-2 | 5803 | 36 | 1.2235 | 0.8190 | 0.9917 | 0.4570 | 0.9804 | 0.9947 | 0.9536 | 0.4577 | 0.9959 | 0.9978 | 0.9822 |
| shuttle | 49097 | 9 | 7.1511 | 0.6234 | 0.6272 | 0.4724 | 0.9855 | 0.9971 | 0.6537 | 0.5264 | 0.9903 | 0.9917 | 0.9898 |
| vertebral | 240 | 6 | 12.5000 | 0.4262 | 0.3486 | 0.4166 | 0.3263 | 0.3905 | 0.3817 | 0.4081 | 0.3906 | 0.4431 | 0.4027 |
| vowels | 1456 | 12 | 3.4341 | 0.9606 | 0.5856 | 0.9425 | 0.6727 | 0.7585 | 0.9680 | 0.9410 | 0.8076 | 0.7802 | 0.6027 |
| wbc | 378 | 30 | 5.5556 | 0.9047 | 0.9227 | 0.9325 | 0.9516 | 0.9310 | 0.9366 | 0.9349 | 0.9210 | 0.9319 | 0.9159 |

### P@N Performance[#](#p-n-performance)

Precision @ N Performances (average of 10 independent trials)[#](#id3)

| Data | #Samples | # Dimensions | Outlier Perc | ABOD | CBLOF | FB | HBOS | IForest | KNN | LOF | MCD | OCSVM | PCA |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| arrhythmia | 452 | 274 | 14.6018 | 0.3808 | 0.4539 | 0.4230 | 0.5111 | 0.4961 | 0.4464 | 0.4334 | 0.3995 | 0.4614 | 0.4613 |
| cardio | 1831 | 21 | 9.6122 | 0.2374 | 0.5876 | 0.1690 | 0.4476 | 0.5041 | 0.3323 | 0.1541 | 0.4317 | 0.5011 | 0.6090 |
| glass | 214 | 9 | 4.2056 | 0.1702 | 0.0726 | 0.1476 | 0.0000 | 0.0726 | 0.0726 | 0.1476 | 0.0000 | 0.1726 | 0.0726 |
| ionosphere | 351 | 33 | 35.8974 | 0.8442 | 0.6088 | 0.7056 | 0.3295 | 0.6369 | 0.8602 | 0.7063 | 0.8806 | 0.7000 | 0.5729 |
| letter | 1600 | 32 | 6.2500 | 0.3801 | 0.0749 | 0.3642 | 0.0715 | 0.1003 | 0.3312 | 0.3641 | 0.1933 | 0.1510 | 0.0875 |
| lympho | 148 | 18 | 4.0541 | 0.4483 | 0.7517 | 0.7517 | 0.8467 | 0.9267 | 0.7517 | 0.7517 | 0.5183 | 0.7517 | 0.7517 |
| mnist | 7603 | 100 | 9.2069 | 0.3555 | 0.3348 | 0.3299 | 0.1188 | 0.3135 | 0.4204 | 0.3343 | 0.3462 | 0.3962 | 0.3846 |
| musk | 3062 | 166 | 3.1679 | 0.0507 | 0.7766 | 0.2230 | 0.9783 | 0.9680 | 0.2733 | 0.1695 | 0.9742 | 1.0000 | 0.9799 |
| optdigits | 5216 | 64 | 2.8758 | 0.0060 | 0.0000 | 0.0244 | 0.2194 | 0.0301 | 0.0000 | 0.0234 | 0.0000 | 0.0000 | 0.0000 |
| pendigits | 6870 | 16 | 2.2707 | 0.0812 | 0.2768 | 0.0658 | 0.2979 | 0.3422 | 0.0984 | 0.0653 | 0.0893 | 0.3287 | 0.3187 |
| pima | 768 | 8 | 34.8958 | 0.5193 | 0.5413 | 0.4480 | 0.5424 | 0.5111 | 0.5413 | 0.4555 | 0.4962 | 0.4704 | 0.4943 |
| satellite | 6435 | 36 | 31.6395 | 0.3902 | 0.4152 | 0.3902 | 0.5690 | 0.5676 | 0.4994 | 0.3893 | 0.6845 | 0.5346 | 0.4784 |
| satimage-2 | 5803 | 36 | 1.2235 | 0.2130 | 0.8846 | 0.0555 | 0.6939 | 0.8754 | 0.3809 | 0.0555 | 0.6481 | 0.9356 | 0.8041 |
| shuttle | 49097 | 9 | 7.1511 | 0.1977 | 0.2943 | 0.0695 | 0.9551 | 0.9546 | 0.2184 | 0.1424 | 0.7506 | 0.9542 | 0.9501 |
| vertebral | 240 | 6 | 12.5000 | 0.0601 | 0.0000 | 0.0644 | 0.0071 | 0.0343 | 0.0238 | 0.0506 | 0.0071 | 0.0238 | 0.0226 |
| vowels | 1456 | 12 | 3.4341 | 0.5710 | 0.0831 | 0.3224 | 0.1297 | 0.1875 | 0.5093 | 0.3551 | 0.2186 | 0.2791 | 0.1364 |
| wbc | 378 | 30 | 5.5556 | 0.3060 | 0.5055 | 0.5188 | 0.5817 | 0.5088 | 0.4952 | 0.5188 | 0.4577 | 0.5125 | 0.4767 |

### Execution Time[#](#execution-time)

Time Elapsed in Seconds (average of 10 independent trials)[#](#id4)

| Data | #Samples | # Dimensions | Outlier Perc | ABOD | CBLOF | FB | HBOS | IForest | KNN | LOF | MCD | OCSVM | PCA |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| arrhythmia | 452 | 274 | 14.6018 | 0.3667 | 0.2123 | 0.5651 | 0.1383 | 0.2669 | 0.1075 | 0.0743 | 1.4165 | 0.0473 | 0.0596 |
| cardio | 1831 | 21 | 9.6122 | 0.3824 | 0.1255 | 0.7741 | 0.0053 | 0.2672 | 0.2249 | 0.0993 | 0.5418 | 0.0883 | 0.0035 |
| glass | 214 | 9 | 4.2056 | 0.0352 | 0.0359 | 0.0317 | 0.0022 | 0.1724 | 0.0173 | 0.0025 | 0.0325 | 0.0010 | 0.0011 |
| ionosphere | 351 | 33 | 35.8974 | 0.0645 | 0.0459 | 0.0728 | 0.0082 | 0.1864 | 0.0302 | 0.0070 | 0.0718 | 0.0048 | 0.0018 |
| letter | 1600 | 32 | 6.2500 | 0.3435 | 0.1014 | 0.7361 | 0.0080 | 0.2617 | 0.1882 | 0.0935 | 1.1942 | 0.0888 | 0.0041 |
| lympho | 148 | 18 | 4.0541 | 0.0277 | 0.0353 | 0.0266 | 0.0037 | 0.1712 | 0.0111 | 0.0021 | 0.0327 | 0.0014 | 0.0012 |
| mnist | 7603 | 100 | 9.2069 | 7.4192 | 1.1339 | 48.2750 | 0.0480 | 1.9314 | 7.3431 | 6.7901 | 4.7448 | 5.0203 | 0.1569 |
| musk | 3062 | 166 | 3.1679 | 2.3860 | 0.4134 | 13.8610 | 0.0587 | 1.2736 | 2.2057 | 1.9835 | 25.5501 | 1.3774 | 0.1637 |
| optdigits | 5216 | 64 | 2.8758 | 2.7279 | 0.4977 | 14.2399 | 0.0303 | 0.7783 | 2.1205 | 1.7799 | 1.8599 | 1.5618 | 0.0519 |
| pendigits | 6870 | 16 | 2.2707 | 1.4339 | 0.2847 | 3.8185 | 0.0090 | 0.5879 | 0.8659 | 0.5936 | 2.2209 | 0.9666 | 0.0062 |
| pima | 768 | 8 | 34.8958 | 0.1357 | 0.0698 | 0.0908 | 0.0019 | 0.1923 | 0.0590 | 0.0102 | 0.0474 | 0.0087 | 0.0013 |
| satellite | 6435 | 36 | 31.6395 | 1.7970 | 0.4269 | 7.5566 | 0.0161 | 0.6449 | 1.2578 | 0.9868 | 2.6916 | 1.3697 | 0.0245 |
| satimage-2 | 5803 | 36 | 1.2235 | 1.5209 | 0.3705 | 5.6561 | 0.0148 | 0.5529 | 1.0587 | 0.7525 | 2.3935 | 1.1114 | 0.0151 |
| shuttle | 49097 | 9 | 7.1511 | 14.3611 | 1.2524 | 59.2131 | 0.0953 | 3.3906 | 9.4958 | 11.1500 | 12.1449 | 44.6830 | 0.0378 |
| vertebral | 240 | 6 | 12.5000 | 0.0529 | 0.0444 | 0.0339 | 0.0014 | 0.1786 | 0.0161 | 0.0025 | 0.0446 | 0.0015 | 0.0010 |
| vowels | 1456 | 12 | 3.4341 | 0.3380 | 0.0889 | 0.3125 | 0.0044 | 0.2751 | 0.1125 | 0.0367 | 0.9745 | 0.0469 | 0.0023 |
| wbc | 378 | 30 | 5.5556 | 0.1014 | 0.0691 | 0.0771 | 0.0063 | 0.2030 | 0.0287 | 0.0078 | 0.0864 | 0.0062 | 0.0035 |

API CheatSheet[#](#api-cheatsheet)
---

The following APIs are applicable for all detector models for easy use.

* [`pyod.models.base.BaseDetector.fit()`](#pyod.models.base.BaseDetector.fit): Fit detector. y is ignored in unsupervised methods.
* [`pyod.models.base.BaseDetector.decision_function()`](#pyod.models.base.BaseDetector.decision_function): Predict raw anomaly score of X using the fitted detector.
* [`pyod.models.base.BaseDetector.predict()`](#pyod.models.base.BaseDetector.predict): Predict if a particular sample is an outlier or not using the fitted detector.
* [`pyod.models.base.BaseDetector.predict_proba()`](#pyod.models.base.BaseDetector.predict_proba): Predict the probability of a sample being outlier using the fitted detector.
* [`pyod.models.base.BaseDetector.predict_confidence()`](#pyod.models.base.BaseDetector.predict_confidence): Predict the model's sample-wise confidence (available in predict and predict_proba).

Key Attributes of a fitted model:

* `pyod.models.base.BaseDetector.decision_scores_`: The outlier scores of the training data. The higher, the more abnormal. Outliers tend to have higher scores.
* `pyod.models.base.BaseDetector.labels_`: The binary labels of the training data. 0 stands for inliers and 1 for outliers/anomalies.

See base class definition below:

### pyod.models.base module[#](#module-pyod.models.base)

Base class for all outlier detector models

*class* pyod.models.base.BaseDetector(*contamination=0.1*)[[source]](_modules/pyod/models/base.html#BaseDetector)[#](#pyod.models.base.BaseDetector)

Bases: [`object`](https://docs.python.org/3/library/functions.html#object)

Abstract class for all outlier detection algorithms. pyod would stop supporting Python 2 in the future. Consider moving to Python 3.5+.

#### Parameters[#](#parameters)

contamination : float in (0., 0.5), optional (default=0.1)
The amount of contamination of the data set, i.e. the proportion of outliers in the data set. Used when fitting to define the threshold on the decision function.

#### Attributes[#](#attributes)

[decision_scores_](#id15) : numpy array of shape (n_samples,)
The outlier scores of the training data. The higher, the more abnormal. Outliers tend to have higher scores. This value is available once the detector is fitted.

[threshold_](#id17) : float
The threshold is based on `contamination`. It is the `n_samples * contamination` most abnormal samples in `decision_scores_`. The threshold is calculated for generating binary outlier labels.

[labels_](#id19) : int, either 0 or 1
The binary labels of the training data. 0 stands for inliers and 1 for outliers/anomalies. It is generated by applying `threshold_` on `decision_scores_`.

*abstract* decision_function(*X*)[[source]](_modules/pyod/models/base.html#BaseDetector.decision_function)[#](#pyod.models.base.BaseDetector.decision_function)

Predict raw anomaly scores of X using the fitted detector. The anomaly score of an input sample is computed based on the fitted detector. For consistency, outliers are assigned with higher anomaly scores.

##### Parameters[#](#id1)
X : numpy array of shape (n_samples, n_features)
The input samples. Sparse matrices are accepted only if they are supported by the base estimator.

##### Returns[#](#returns)

anomaly_scores : numpy array of shape (n_samples,)
The anomaly score of the input samples.

*abstract* fit(*X*, *y=None*)[[source]](_modules/pyod/models/base.html#BaseDetector.fit)[#](#pyod.models.base.BaseDetector.fit)

Fit detector. y is ignored in unsupervised methods.

##### Parameters[#](#id2)

X : numpy array of shape (n_samples, n_features)
The input samples.

y : Ignored
Not used, present for API consistency by convention.

##### Returns[#](#id3)

self : object
Fitted estimator.

fit_predict(*X*, *y=None*)[[source]](_modules/pyod/models/base.html#BaseDetector.fit_predict)[#](#pyod.models.base.BaseDetector.fit_predict)

DEPRECATED. Fit detector first and then predict whether a particular sample is an outlier or not. y is ignored in unsupervised models.

X : numpy array of shape (n_samples, n_features)
The input samples.

y : Ignored
Not used, present for API consistency by convention.

outlier_labels : numpy array of shape (n_samples,)
For each observation, tells whether it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers.

Deprecated since version 0.6.9: fit_predict will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency.

fit_predict_score(*X*, *y*, *scoring='roc_auc_score'*)[[source]](_modules/pyod/models/base.html#BaseDetector.fit_predict_score)[#](#pyod.models.base.BaseDetector.fit_predict_score)

DEPRECATED. Fit the detector, predict on samples, and evaluate the model by predefined metrics, e.g., ROC.

X : numpy array of shape (n_samples, n_features)
The input samples.

y : Ignored
Not used, present for API consistency by convention.

scoring : str, optional (default='roc_auc_score')
Evaluation metric:

* 'roc_auc_score': ROC score
* 'prc_n_score': Precision @ rank n score

score : float

Deprecated since version 0.6.9: fit_predict_score will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency. Scoring could be done by calling an evaluation method, e.g., AUC ROC.

get_params(*deep=True*)[[source]](_modules/pyod/models/base.html#BaseDetector.get_params)[#](#pyod.models.base.BaseDetector.get_params)

Get parameters for this estimator. See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information.

##### Parameters[#](#id4)

deep : bool, optional (default=True)
If True, will return the parameters for this estimator and contained subobjects that are estimators.

##### Returns[#](#id5)

params : mapping of string to any
Parameter names mapped to their values.

predict(*X*, *return_confidence=False*)[[source]](_modules/pyod/models/base.html#BaseDetector.predict)[#](#pyod.models.base.BaseDetector.predict)

Predict if a particular sample is an outlier or not.

##### Parameters[#](#id6)

X : numpy array of shape (n_samples, n_features)
The input samples.

return_confidence : boolean, optional (default=False)
If True, also return the confidence of prediction.

##### Returns[#](#id7)

outlier_labels : numpy array of shape (n_samples,)
For each observation, tells whether it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers.

confidence : numpy array of shape (n_samples,)
Only if return_confidence is set to True.

predict_confidence(*X*)[[source]](_modules/pyod/models/base.html#BaseDetector.predict_confidence)[#](#pyod.models.base.BaseDetector.predict_confidence)

Predict the model's confidence in making the same prediction under slightly different training sets. See [[BPVD20](index.html#id839)].

##### Parameters[#](#id9)

X : numpy array of shape (n_samples, n_features)
The input samples.

##### Returns[#](#id10)

confidence : numpy array of shape (n_samples,)
For each observation, tells how consistently the model would make the same prediction if the training set was perturbed. Returns a probability, ranging in [0,1].

predict_proba(*X*, *method='linear'*, *return_confidence=False*)[[source]](_modules/pyod/models/base.html#BaseDetector.predict_proba)[#](#pyod.models.base.BaseDetector.predict_proba)

Predict the probability of a sample being outlier. Two approaches are possible:

1. simply use min-max conversion to linearly transform the outlier scores into the range of [0,1]. The model must be fitted first.
2. use unifying scores, see [[BKKSZ11](index.html#id800)].

##### Parameters[#](#id12)

X : numpy array of shape (n_samples, n_features)
The input samples.

method : str, optional (default='linear')
Probability conversion method. It must be one of 'linear' or 'unify'.

return_confidence : boolean, optional (default=False)
If True, also return the confidence of prediction.

##### Returns[#](#id13)

outlier_probability : numpy array of shape (n_samples, n_classes)
For each observation, tells whether or not it should be considered as an outlier according to the fitted model. Returns the outlier probability, ranging in [0,1]. Note it depends on the number of classes, which is by default 2 classes ([proba of normal, proba of outliers]).

set_params(***params*)[[source]](_modules/pyod/models/base.html#BaseDetector.set_params)[#](#pyod.models.base.BaseDetector.set_params)

Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form `<component>__<parameter>` so that it's possible to update each component of a nested object. See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information.

##### Returns[#](#id14)

self : object
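To make the distinction concrete: `predict_proba` rescales raw outlier scores into probabilities, while `predict_confidence` quantifies how stable a prediction is. A minimal sketch, assuming `clf` is any fitted detector and `X_test` is available; the 'linear' method corresponds to a min-max style rescaling of the scores:

```
# probabilities via min-max conversion of the outlier scores
proba = clf.predict_proba(X_test, method='linear')  # shape (n_samples, 2)

# probabilities via score unification (Kriegel et al.)
proba_unified = clf.predict_proba(X_test, method='unify')

# labels together with sample-wise confidence in [0, 1]
labels, confidence = clf.predict(X_test, return_confidence=True)
```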
API Reference[#](#api-reference)
---

### All Models[#](#all-models)

#### pyod.models.abod module[#](#module-pyod.models.abod)

Angle-based Outlier Detector (ABOD)

*class* pyod.models.abod.ABOD(*contamination=0.1*, *n_neighbors=5*, *method='fast'*)[[source]](_modules/pyod/models/abod.html#ABOD)[#](#pyod.models.abod.ABOD)

Bases: [`BaseDetector`](index.html#pyod.models.base.BaseDetector)

ABOD class for Angle-based Outlier Detection. For an observation, the variance of its weighted cosine scores to all neighbors could be viewed as the outlying score. See [[BKZ+08](#id804)] for details.

Two versions of ABOD are supported:

* Fast ABOD: use k nearest neighbors to approximate.
* Original ABOD: consider all training points, with high time complexity at O(n^3).

##### Parameters[#](#parameters)

contamination : float in (0., 0.5), optional (default=0.1)
The amount of contamination of the data set, i.e. the proportion of outliers in the data set. Used when fitting to define the threshold on the decision function.

n_neighbors : int, optional (default=10)
Number of neighbors to use by default for k neighbors queries.

method : str, optional (default='fast')
Valid values for method are:

* 'fast': fast ABOD. Only consider n_neighbors of training points.
* 'default': original ABOD with all training points, which could be slow.
##### Attributes[#](#attributes)

[decision_scores_](#id852) : numpy array of shape (n_samples,)
The outlier scores of the training data. The higher, the more abnormal. Outliers tend to have higher scores. This value is available once the detector is fitted.

[threshold_](#id854) : float
The threshold is based on `contamination`. It is the `n_samples * contamination` most abnormal samples in `decision_scores_`. The threshold is calculated for generating binary outlier labels.

[labels_](#id856) : int, either 0 or 1
The binary labels of the training data. 0 stands for inliers and 1 for outliers/anomalies. It is generated by applying `threshold_` on `decision_scores_`.

decision_function(*X*)[[source]](_modules/pyod/models/abod.html#ABOD.decision_function)[#](#pyod.models.abod.ABOD.decision_function)

Predict raw anomaly score of X using the fitted detector. The anomaly score of an input sample is computed based on different detector algorithms. For consistency, outliers are assigned with larger anomaly scores.

###### Parameters[#](#id2)

X : numpy array of shape (n_samples, n_features)
The training input samples. Sparse matrices are accepted only if they are supported by the base estimator.

###### Returns[#](#returns)

anomaly_scores : numpy array of shape (n_samples,)
The anomaly score of the input samples.

fit(*X*, *y=None*)[[source]](_modules/pyod/models/abod.html#ABOD.fit)[#](#pyod.models.abod.ABOD.fit)

Fit detector. y is ignored in unsupervised methods.

###### Parameters[#](#id3)

X : numpy array of shape (n_samples, n_features)
The input samples.

y : Ignored
Not used, present for API consistency by convention.

###### Returns[#](#id4)

self : object
Fitted estimator.

The remaining methods (fit_predict(), fit_predict_score(), get_params(), predict(), predict_confidence(), predict_proba(), set_params()) are inherited unchanged from [`BaseDetector`](index.html#pyod.models.base.BaseDetector); see the base class documentation above.
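As a quick orientation (not part of the original reference), a minimal ABOD usage sketch on the synthetic data helper used throughout the examples:

```
from pyod.models.abod import ABOD
from pyod.utils.data import generate_data

X_train, X_test, y_train, y_test = generate_data(
    n_train=200, n_test=100, contamination=0.1)

# fast ABOD approximates the angle variance using k nearest neighbors
clf = ABOD(n_neighbors=10, method='fast')
clf.fit(X_train)

train_scores = clf.decision_scores_  # outlier scores on the training data
test_labels = clf.predict(X_test)    # 0: inlier, 1: outlier
```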
#### pyod.models.alad module[#](#module-pyod.models.alad)

Using Adversarially Learned Anomaly Detection

*class* pyod.models.alad.ALAD(*activation_hidden_gen='tanh'*, *activation_hidden_disc='tanh'*, *output_activation=None*, *dropout_rate=0.2*, *latent_dim=2*, *dec_layers=[5, 10, 25]*, *enc_layers=[25, 10, 5]*, *disc_xx_layers=[25, 10, 5]*, *disc_zz_layers=[25, 10, 5]*, *disc_xz_layers=[25, 10, 5]*, *learning_rate_gen=0.0001*, *learning_rate_disc=0.0001*, *add_recon_loss=False*, *lambda_recon_loss=0.1*, *epochs=200*, *verbose=0*, *preprocessing=False*, *add_disc_zz_loss=True*, *spectral_normalization=False*, *batch_size=32*, *contamination=0.1*)[[source]](_modules/pyod/models/alad.html#ALAD)[#](#pyod.models.alad.ALAD)

Bases: [`BaseDetector`](index.html#pyod.models.base.BaseDetector)

Adversarially Learned Anomaly Detection (ALAD). Paper: <https://arxiv.org/pdf/1812.02288.pdf>. See [[BZRF+18](#id849)] for details.

##### Parameters[#](#id17)

output_activation : str, optional (default=None)
Activation function to use for the output layers of the encoder and decoder. See <https://keras.io/activations/>.

activation_hidden_disc : str, optional (default='tanh')
Activation function to use for hidden layers in the discriminators. See <https://keras.io/activations/>.

activation_hidden_gen : str, optional (default='tanh')
Activation function to use for hidden layers in the encoder and decoder (i.e. generator). See <https://keras.io/activations/>.

epochs : int, optional (default=500)
Number of epochs to train the model.

batch_size : int, optional (default=32)
Number of samples per gradient update.

dropout_rate : float in (0., 1), optional (default=0.2)
The dropout to be used across all layers.

dec_layers : list, optional (default=[5,10,25])
List that indicates the number of nodes per hidden layer for the decoder network. Thus, [10,10] indicates 2 hidden layers having each 10 nodes.

enc_layers : list, optional (default=[25,10,5])
List that indicates the number of nodes per hidden layer for the encoder network. Thus, [10,10] indicates 2 hidden layers having each 10 nodes.

disc_xx_layers : list, optional (default=[25,10,5])
List that indicates the number of nodes per hidden layer for discriminator_xx. Thus, [10,10] indicates 2 hidden layers having each 10 nodes.

disc_zz_layers : list, optional (default=[25,10,5])
List that indicates the number of nodes per hidden layer for discriminator_zz. Thus, [10,10] indicates 2 hidden layers having each 10 nodes.

disc_xz_layers : list, optional (default=[25,10,5])
List that indicates the number of nodes per hidden layer for discriminator_xz. Thus, [10,10] indicates 2 hidden layers having each 10 nodes.

learning_rate_gen : float in (0., 1), optional (default=0.001)
Learning rate for training the encoder and decoder.

learning_rate_disc : float in (0., 1), optional (default=0.001)
Learning rate for training the discriminators.

add_recon_loss : bool, optional (default=False)
Add an extra loss for the encoder and decoder based on the reconstruction error.

lambda_recon_loss : float in (0., 1), optional (default=0.1)
If `add_recon_loss=True`, the reconstruction loss gets multiplied by `lambda_recon_loss` and added to the total loss for the generator (i.e. encoder and decoder).

preprocessing : bool, optional (default=True)
If True, apply standardization on the data.

verbose : int, optional (default=1)
Verbosity mode.

* 0 = silent
* 1 = progress bar

contamination : float in (0., 0.5), optional (default=0.1)
The amount of contamination of the data set, i.e. the proportion of outliers in the data set. When fitting this is used to define the threshold on the decision function.
##### Attributes[#](#id18)

[decision_scores_](#id858) : numpy array of shape (n_samples,)
The outlier scores of the training data [0,1]. The higher, the more abnormal. Outliers tend to have higher scores. This value is available once the detector is fitted.

[threshold_](#id860) : float
The threshold is based on `contamination`. It is the `n_samples * contamination` most abnormal samples in `decision_scores_`. The threshold is calculated for generating binary outlier labels.

[labels_](#id862) : int, either 0 or 1
The binary labels of the training data. 0 stands for inliers and 1 for outliers/anomalies. It is generated by applying `threshold_` on `decision_scores_`.

decision_function(*X*)[[source]](_modules/pyod/models/alad.html#ALAD.decision_function)[#](#pyod.models.alad.ALAD.decision_function)

Predict raw anomaly score of X using the fitted detector. The anomaly score of an input sample is computed based on different detector algorithms. For consistency, outliers are assigned with larger anomaly scores.

###### Parameters

X : numpy array of shape (n_samples, n_features)
The training input samples. Sparse matrices are accepted only if they are supported by the base estimator.

###### Returns[#](#id19)

anomaly_scores : numpy array of shape (n_samples,)
The anomaly score of the input samples.

fit(*X*, *y=None*, *noise_std=0.1*)[[source]](_modules/pyod/models/alad.html#ALAD.fit)[#](#pyod.models.alad.ALAD.fit)

Fit detector. y is ignored in unsupervised methods.

###### Parameters

X : numpy array of shape (n_samples, n_features)
The input samples.

y : Ignored
Not used, present for API consistency by convention.

###### Returns[#](#id20)

self : object
Fitted estimator.

get_outlier_scores(*X_norm*)[[source]](_modules/pyod/models/alad.html#ALAD.get_outlier_scores)[#](#pyod.models.alad.ALAD.get_outlier_scores)
plot_learning_curves(*start_ind=0*, *window_smoothening=10*)[[source]](_modules/pyod/models/alad.html#ALAD.plot_learning_curves)[#](#pyod.models.alad.ALAD.plot_learning_curves)

train_more(*X*, *epochs=100*, *noise_std=0.1*)[[source]](_modules/pyod/models/alad.html#ALAD.train_more)[#](#pyod.models.alad.ALAD.train_more)

This function allows the researcher to perform extra training instead of the fixed number determined by the fit() function.

train_step(*data*)[[source]](_modules/pyod/models/alad.html#ALAD.train_step)[#](#pyod.models.alad.ALAD.train_step)

The remaining methods (fit_predict(), fit_predict_score(), get_params(), predict(), predict_confidence(), predict_proba(), set_params()) are inherited unchanged from [`BaseDetector`](index.html#pyod.models.base.BaseDetector); see the base class documentation above.
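A minimal usage sketch (my addition, assuming TensorFlow is installed, since ALAD is a deep model; the small epoch count is only to keep the illustration fast):

```
from pyod.models.alad import ALAD
from pyod.utils.data import generate_data

X_train, X_test, y_train, y_test = generate_data(
    n_train=500, n_test=200, contamination=0.1)

clf = ALAD(epochs=50, batch_size=32, contamination=0.1, verbose=0)
clf.fit(X_train)

test_scores = clf.decision_function(X_test)  # adversarially learned scores
```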
#### pyod.models.anogan module[#](#module-pyod.models.anogan)

Anomaly Detection with Generative Adversarial Networks (AnoGAN). Paper: <https://arxiv.org/pdf/1703.05921.pdf>. Note that this is a different implementation of AnoGAN from the one at <https://github.com/fuchami/ANOGAN>.

*class* pyod.models.anogan.AnoGAN(*activation_hidden='tanh'*, *dropout_rate=0.2*, *latent_dim_G=2*, *G_layers=[20, 10, 3, 10, 20]*, *verbose=0*, *D_layers=[20, 10, 5]*, *index_D_layer_for_recon_error=1*, *epochs=500*, *preprocessing=False*, *learning_rate=0.001*, *learning_rate_query=0.01*, *epochs_query=20*, *batch_size=32*, *output_activation=None*, *contamination=0.1*)[[source]](_modules/pyod/models/anogan.html#AnoGAN)[#](#pyod.models.anogan.AnoGAN)

Bases: [`BaseDetector`](index.html#pyod.models.base.BaseDetector)

Anomaly Detection with Generative Adversarial Networks (AnoGAN). See the original paper "Unsupervised anomaly detection with generative adversarial networks to guide marker discovery". See [[BSSeebockW+17](#id845)] for details.

##### Parameters[#](#id33)

output_activation : str, optional (default=None)
Activation function to use for the output layer. See <https://keras.io/activations/>.

activation_hidden : str, optional (default='tanh')
Activation function to use for the hidden layers. See <https://keras.io/activations/>.

epochs : int, optional (default=500)
Number of epochs to train the model.

batch_size : int, optional (default=32)
Number of samples per gradient update.

dropout_rate : float in (0., 1), optional (default=0.2)
The dropout to be used across all layers.

G_layers : list, optional (default=[20,10,3,10,20])
List that indicates the number of nodes per hidden layer for the generator. Thus, [10,10] indicates 2 hidden layers having each 10 nodes.

D_layers : list, optional (default=[20,10,5])
List that indicates the number of nodes per hidden layer for the discriminator. Thus, [10,10] indicates 2 hidden layers having each 10 nodes.

learning_rate : float in (0., 1), optional (default=0.001)
Learning rate for training the network.

index_D_layer_for_recon_error : int, optional (default=1)
The index of the hidden layer in the discriminator for which the reconstruction error will be determined between the query sample and the sample created from the latent space.

learning_rate_query : float in (0., 1), optional (default=0.001)
Learning rate for the backpropagation steps needed to find a point in the latent space of the generator that approximates the query sample.

epochs_query : int, optional (default=20)
Number of epochs to approximate the query sample in the latent space of the generator.

preprocessing : bool, optional (default=True)
If True, apply standardization on the data.

verbose : int, optional (default=1)
Verbosity mode.

* 0 = silent
* 1 = progress bar

contamination : float in (0., 0.5), optional (default=0.1)
The amount of contamination of the data set, i.e. the proportion of outliers in the data set. When fitting this is used to define the threshold on the decision function.

##### Attributes[#](#id34)

[decision_scores_](#id864) : numpy array of shape (n_samples,)
The outlier scores of the training data [0,1]. The higher, the more abnormal. Outliers tend to have higher scores. This value is available once the detector is fitted.

[threshold_](#id866) : float
The threshold is based on `contamination`. It is the `n_samples * contamination` most abnormal samples in `decision_scores_`. The threshold is calculated for generating binary outlier labels.
[labels_](#id868)int, either 0 or 1The binary labels of the training data. 0 stands for inliers and 1 for outliers/anomalies. It is generated by applying `threshold_` on `decision_scores_`. decision_function(*X*)[[source]](_modules/pyod/models/anogan.html#AnoGAN.decision_function)[#](#pyod.models.anogan.AnoGAN.decision_function) Predict raw anomaly score of X using the fitted detector. The anomaly score of an input sample is computed based on different detector algorithms. For consistency, outliers are assigned with larger anomaly scores. ###### Parameters[#](#id35) Xnumpy array of shape (n_samples, n_features)The training input samples. Sparse matrices are accepted only if they are supported by the base estimator. ###### Returns[#](#id36) anomaly_scoresnumpy array of shape (n_samples,)The anomaly score of the input samples. fit(*X*, *y=None*)[[source]](_modules/pyod/models/anogan.html#AnoGAN.fit)[#](#pyod.models.anogan.AnoGAN.fit) Fit detector. y is ignored in unsupervised methods. ###### Parameters[#](#id37) Xnumpy array of shape (n_samples, n_features)The input samples. yIgnoredNot used, present for API consistency by convention. ###### Returns[#](#id38) selfobjectFitted estimator. fit_predict(*X*, *y=None*)[#](#pyod.models.anogan.AnoGAN.fit_predict) DEPRECATED Fit detector first and then predict whether a particular sampleis an outlier or not. y is ignored in unsupervised models. Xnumpy array of shape (n_samples, n_features)The input samples. yIgnoredNot used, present for API consistency by convention. outlier_labelsnumpy array of shape (n_samples,)For each observation, tells whether it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers. Deprecated since version 0.6.9: fit_predict will be removed in pyod 0.8.0.; it will be replaced by calling fit function first and then accessing labels_ attribute for consistency. fit_predict_score(*X*, *y*, *scoring='roc_auc_score'*)[#](#pyod.models.anogan.AnoGAN.fit_predict_score) DEPRECATED Fit the detector, predict on samples, and evaluate the model bypredefined metrics, e.g., ROC. Xnumpy array of shape (n_samples, n_features)The input samples. yIgnoredNot used, present for API consistency by convention. scoringstr, optional (default=’roc_auc_score’)Evaluation metric: * ‘roc_auc_score’: ROC score * ‘prc_n_score’: Precision @ rank n score score : float Deprecated since version 0.6.9: fit_predict_score will be removed in pyod 0.8.0.; it will be replaced by calling fit function first and then accessing labels_ attribute for consistency. Scoring could be done by calling an evaluation method, e.g., AUC ROC. fit_query(*query_sample*)[[source]](_modules/pyod/models/anogan.html#AnoGAN.fit_query)[#](#pyod.models.anogan.AnoGAN.fit_query) get_params(*deep=True*)[#](#pyod.models.anogan.AnoGAN.get_params) Get parameters for this estimator. See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information. ###### Parameters[#](#id39) deepbool, optional (default=True)If True, will return the parameters for this estimator and contained subobjects that are estimators. ###### Returns[#](#id40) paramsmapping of string to anyParameter names mapped to their values. 
plot_learning_curves(*start_ind=0*, *window_smoothening=10*)[[source]](_modules/pyod/models/anogan.html#AnoGAN.plot_learning_curves)

predict(*X*, *return_confidence=False*)

Predict if a particular sample is an outlier or not.

###### Parameters

- X : numpy array of shape (n_samples, n_features). The input samples.
- return_confidence : boolean, optional (default=False). If True, also return the confidence of prediction.

###### Returns

- outlier_labels : numpy array of shape (n_samples,). For each observation, tells whether it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers.
- confidence : numpy array of shape (n_samples,). Only if return_confidence is set to True.

predict_confidence(*X*)

Predict the model's confidence in making the same prediction under slightly different training sets. See [BPVD20].

###### Parameters

- X : numpy array of shape (n_samples, n_features). The input samples.

###### Returns

- confidence : numpy array of shape (n_samples,). For each observation, tells how consistently the model would make the same prediction if the training set was perturbed. Returns a probability ranging in [0,1].

predict_proba(*X*, *method='linear'*, *return_confidence=False*)

Predict the probability of a sample being an outlier. Two approaches are possible:

1. simply use min-max conversion to linearly transform the outlier scores into the range [0,1]. The model must be fitted first.
2. use unifying scores, see [BKKSZ11].

###### Parameters

- X : numpy array of shape (n_samples, n_features). The input samples.
- method : str, optional (default='linear'). Probability conversion method. It must be one of 'linear' or 'unify'.
- return_confidence : boolean, optional (default=False). If True, also return the confidence of prediction.

###### Returns

- outlier_probability : numpy array of shape (n_samples, n_classes). For each observation, the probability that it should be considered an outlier according to the fitted model, ranging in [0,1]. Note that this depends on the number of classes, which is 2 by default ([proba of normal, proba of outlier]).

set_params(***params*)

Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form `<component>__<parameter>` so that it's possible to update each component of a nested object. See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information.
###### Returns

- self : object.

train_step(*data*)[[source]](_modules/pyod/models/anogan.html#AnoGAN.train_step)

#### pyod.models.auto_encoder module

Using AutoEncoder with Outlier Detection.

*class* pyod.models.auto_encoder.AutoEncoder(*hidden_neurons=None*, *hidden_activation='relu'*, *output_activation='sigmoid'*, *loss=<function mean_squared_error>*, *optimizer='adam'*, *epochs=100*, *batch_size=32*, *dropout_rate=0.2*, *l2_regularizer=0.1*, *validation_size=0.1*, *preprocessing=True*, *verbose=1*, *random_state=None*, *contamination=0.1*)[[source]](_modules/pyod/models/auto_encoder.html#AutoEncoder)

Bases: `BaseDetector`

An auto encoder (AE) is a type of neural network that learns useful data representations in an unsupervised manner. Similar to PCA, an AE can be used to detect outlying objects in the data by calculating the reconstruction errors. See [BAgg15] Chapter 3 for details.

##### Parameters

- hidden_neurons : list, optional (default=[64, 32, 32, 64]). The number of neurons per hidden layer.
- hidden_activation : str, optional (default='relu'). Activation function to use for hidden layers. All hidden layers are forced to use the same type of activation. See <https://keras.io/activations/>.
- output_activation : str, optional (default='sigmoid'). Activation function to use for the output layer. See <https://keras.io/activations/>.
- loss : str or obj, optional (default=keras.losses.mean_squared_error). String (name of objective function) or objective function. See <https://keras.io/losses/>.
- optimizer : str, optional (default='adam'). String (name of optimizer) or optimizer instance. See <https://keras.io/optimizers/>.
- epochs : int, optional (default=100). Number of epochs to train the model.
- batch_size : int, optional (default=32). Number of samples per gradient update.
- dropout_rate : float in (0., 1), optional (default=0.2). The dropout to be used across all layers.
- l2_regularizer : float in (0., 1), optional (default=0.1). The regularization strength of the activity_regularizer applied on each layer. By default, the l2 regularizer is used. See <https://keras.io/regularizers/>.
- validation_size : float in (0., 1), optional (default=0.1). The percentage of data to be used for validation.
- preprocessing : bool, optional (default=True). If True, apply standardization on the data.
- verbose : int, optional (default=1). Verbosity mode: 0 = silent, 1 = progress bar, 2 = one line per epoch. For verbose >= 1, the model summary may be printed.
- random_state : int, RandomState instance or None, optional (default=None). If int, random_state is the seed used by the random number generator; if RandomState instance, random_state is the random number generator; if None, the random number generator is the RandomState instance used by np.random.
- contamination : float in (0., 0.5), optional (default=0.1). The amount of contamination of the data set, i.e. the proportion of outliers in the data set. Used when fitting to define the threshold on the decision function.

##### Attributes

- encoding_dim_ : int. The number of neurons in the encoding layer.
- compression_rate_ : float. The ratio between the number of original features and the number of neurons in the encoding layer.
- model_ : Keras object. The underlying AutoEncoder in Keras.
- history_ : Keras object. The AutoEncoder training history.
- decision_scores_ : numpy array of shape (n_samples,). The outlier scores of the training data. The higher, the more abnormal; outliers tend to have higher scores. Available once the detector is fitted.
- threshold_ : float. The threshold is based on `contamination`. It is the `n_samples * contamination` most abnormal samples in `decision_scores_`. The threshold is calculated for generating binary outlier labels.
- labels_ : int, either 0 or 1. The binary labels of the training data. 0 stands for inliers and 1 for outliers/anomalies. It is generated by applying `threshold_` on `decision_scores_`.

decision_function(*X*)[[source]](_modules/pyod/models/auto_encoder.html#AutoEncoder.decision_function)

Predict raw anomaly score of X using the fitted detector. The anomaly score of an input sample is computed based on different detector algorithms. For consistency, outliers are assigned with larger anomaly scores.

###### Parameters

- X : numpy array of shape (n_samples, n_features). The training input samples. Sparse matrices are accepted only if they are supported by the base estimator.

###### Returns

- anomaly_scores : numpy array of shape (n_samples,). The anomaly score of the input samples.

fit(*X*, *y=None*)[[source]](_modules/pyod/models/auto_encoder.html#AutoEncoder.fit)

Fit detector. y is ignored in unsupervised methods.

###### Parameters

- X : numpy array of shape (n_samples, n_features). The input samples.
- y : Ignored. Not used, present for API consistency by convention.

###### Returns

- self : object. Fitted estimator.

fit_predict(*X*, *y=None*)

DEPRECATED. Fit detector first and then predict whether a particular sample is an outlier or not. y is ignored in unsupervised models.

###### Parameters

- X : numpy array of shape (n_samples, n_features). The input samples.
- y : Ignored. Not used, present for API consistency by convention.

###### Returns

- outlier_labels : numpy array of shape (n_samples,). For each observation, tells whether it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers.

Deprecated since version 0.6.9: fit_predict will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency.

fit_predict_score(*X*, *y*, *scoring='roc_auc_score'*)

DEPRECATED. Fit the detector, predict on samples, and evaluate the model by predefined metrics, e.g., ROC.

###### Parameters

- X : numpy array of shape (n_samples, n_features). The input samples.
- y : Ignored. Not used, present for API consistency by convention.
- scoring : str, optional (default='roc_auc_score'). Evaluation metric: 'roc_auc_score' for ROC score, 'prc_n_score' for Precision @ rank n score.

###### Returns

- score : float.

Deprecated since version 0.6.9: fit_predict_score will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency. Scoring can be done by calling an evaluation method, e.g., AUC ROC.

get_params(*deep=True*)

Get parameters for this estimator. See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information.

###### Parameters

- deep : bool, optional (default=True). If True, will return the parameters for this estimator and contained subobjects that are estimators.

###### Returns

- params : mapping of string to any. Parameter names mapped to their values.
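A minimal usage sketch mirroring the API above. The data, the chosen `hidden_neurons` bottleneck, and the epoch count are illustrative assumptions:

```python
import numpy as np
from pyod.models.auto_encoder import AutoEncoder

rng = np.random.RandomState(0)
X_train = rng.randn(500, 16)          # illustrative 16-feature data
X_test = rng.randn(100, 16)

# A symmetric bottleneck architecture; the reconstruction error drives the score.
ae = AutoEncoder(hidden_neurons=[16, 8, 8, 16], epochs=20, verbose=0)
ae.fit(X_train)

train_scores = ae.decision_scores_    # reconstruction-error-based outlier scores
test_scores = ae.decision_function(X_test)
test_labels, confidence = ae.predict(X_test, return_confidence=True)
```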
predict(*X*, *return_confidence=False*)

Predict if a particular sample is an outlier or not.

###### Parameters

- X : numpy array of shape (n_samples, n_features). The input samples.
- return_confidence : boolean, optional (default=False). If True, also return the confidence of prediction.

###### Returns

- outlier_labels : numpy array of shape (n_samples,). For each observation, tells whether it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers.
- confidence : numpy array of shape (n_samples,). Only if return_confidence is set to True.

predict_confidence(*X*)

Predict the model's confidence in making the same prediction under slightly different training sets. See [BPVD20].

###### Parameters

- X : numpy array of shape (n_samples, n_features). The input samples.

###### Returns

- confidence : numpy array of shape (n_samples,). For each observation, tells how consistently the model would make the same prediction if the training set was perturbed. Returns a probability ranging in [0,1].

predict_proba(*X*, *method='linear'*, *return_confidence=False*)

Predict the probability of a sample being an outlier. Two approaches are possible:

1. simply use min-max conversion to linearly transform the outlier scores into the range [0,1]. The model must be fitted first.
2. use unifying scores, see [BKKSZ11].

###### Parameters

- X : numpy array of shape (n_samples, n_features). The input samples.
- method : str, optional (default='linear'). Probability conversion method. It must be one of 'linear' or 'unify'.
- return_confidence : boolean, optional (default=False). If True, also return the confidence of prediction.

###### Returns

- outlier_probability : numpy array of shape (n_samples, n_classes). For each observation, the probability that it should be considered an outlier according to the fitted model, ranging in [0,1]. Note that this depends on the number of classes, which is 2 by default ([proba of normal, proba of outlier]).

set_params(***params*)

Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form `<component>__<parameter>` so that it's possible to update each component of a nested object. See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information.

###### Returns

- self : object.

#### pyod.models.auto_encoder_torch module

Using AutoEncoder with Outlier Detection (PyTorch).

*class* pyod.models.auto_encoder_torch.AutoEncoder(*hidden_neurons=None*, *hidden_activation='relu'*, *batch_norm=True*, *learning_rate=0.001*, *epochs=100*, *batch_size=32*, *dropout_rate=0.2*, *weight_decay=1e-05*, *preprocessing=True*, *loss_fn=None*, *contamination=0.1*, *device=None*)[[source]](_modules/pyod/models/auto_encoder_torch.html#AutoEncoder)

Bases: `BaseDetector`

An auto encoder (AE) is a type of neural network that learns useful data representations in an unsupervised manner.
Similar to PCA, an AE can be used to detect outlying objects in the data by calculating the reconstruction errors. See [BAgg15] Chapter 3 for details.

##### Notes

This is the PyTorch version of AutoEncoder; see auto_encoder.py for the TensorFlow version. The documentation is not finished!

##### Parameters

- hidden_neurons : list, optional (default=[64, 32]). The number of neurons per hidden layer, so the network has the structure [n_features, 64, 32, 32, 64, n_features].
- hidden_activation : str, optional (default='relu'). Activation function to use for hidden layers. All hidden layers are forced to use the same type of activation. See <https://pytorch.org/docs/stable/nn.html> for details. Currently only 'relu' (nn.ReLU()), 'sigmoid' (nn.Sigmoid()), and 'tanh' (nn.Tanh()) are supported. See pyod/utils/torch_utility.py for details.
- batch_norm : boolean, optional (default=True). Whether to apply batch normalization. See <https://pytorch.org/docs/stable/generated/torch.nn.BatchNorm1d.html>.
- learning_rate : float, optional (default=1e-3). Learning rate for the optimizer. It is passed to an Adam optimizer (torch.optim.Adam). See <https://pytorch.org/docs/stable/generated/torch.optim.Adam.html>.
- epochs : int, optional (default=100). Number of epochs to train the model.
- batch_size : int, optional (default=32). Number of samples per gradient update.
- dropout_rate : float in (0., 1), optional (default=0.2). The dropout to be used across all layers.
- weight_decay : float, optional (default=1e-5). The weight decay for the Adam optimizer. See <https://pytorch.org/docs/stable/generated/torch.optim.Adam.html>.
- preprocessing : bool, optional (default=True). If True, apply standardization on the data.
- loss_fn : obj, optional (default=torch.nn.MSELoss). A loss instance implementing torch.nn._Loss, one of <https://pytorch.org/docs/stable/nn.html#loss-functions> or a custom loss. Custom losses are currently unstable.
- verbose : int, optional (default=1). Verbosity mode: 0 = silent, 1 = progress bar, 2 = one line per epoch. For verbose >= 1, the model summary may be printed. CURRENTLY NOT SUPPORTED.
- random_state : int, RandomState instance or None, optional (default=None). If int, random_state is the seed used by the random number generator; if RandomState instance, random_state is the random number generator; if None, the random number generator is the RandomState instance used by np.random. CURRENTLY NOT SUPPORTED.
- contamination : float in (0., 0.5), optional (default=0.1). The amount of contamination of the data set, i.e. the proportion of outliers in the data set. Used when fitting to define the threshold on the decision function.

##### Attributes

- encoding_dim_ : int. The number of neurons in the encoding layer.
- compression_rate_ : float. The ratio between the number of original features and the number of neurons in the encoding layer.
- model_ : object. The underlying AutoEncoder model.
- history_ : object. The AutoEncoder training history.
- decision_scores_ : numpy array of shape (n_samples,). The outlier scores of the training data. The higher, the more abnormal; outliers tend to have higher scores. Available once the detector is fitted.
- threshold_ : float. The threshold is based on `contamination`. It is the `n_samples * contamination` most abnormal samples in `decision_scores_`. The threshold is calculated for generating binary outlier labels.
- labels_ : int, either 0 or 1. The binary labels of the training data. 0 stands for inliers and 1 for outliers/anomalies. It is generated by applying `threshold_` on `decision_scores_`.
decision_function(*X*)[[source]](_modules/pyod/models/auto_encoder_torch.html#AutoEncoder.decision_function)

Predict raw anomaly score of X using the fitted detector. The anomaly score of an input sample is computed based on different detector algorithms. For consistency, outliers are assigned with larger anomaly scores.

###### Parameters

- X : numpy array of shape (n_samples, n_features). The training input samples. Sparse matrices are accepted only if they are supported by the base estimator.

###### Returns

- anomaly_scores : numpy array of shape (n_samples,). The anomaly score of the input samples.

fit(*X*, *y=None*)[[source]](_modules/pyod/models/auto_encoder_torch.html#AutoEncoder.fit)

Fit detector. y is ignored in unsupervised methods.

###### Parameters

- X : numpy array of shape (n_samples, n_features). The input samples.
- y : Ignored. Not used, present for API consistency by convention.

###### Returns

- self : object. Fitted estimator.

fit_predict(*X*, *y=None*)

DEPRECATED. Fit detector first and then predict whether a particular sample is an outlier or not. y is ignored in unsupervised models.

###### Parameters

- X : numpy array of shape (n_samples, n_features). The input samples.
- y : Ignored. Not used, present for API consistency by convention.

###### Returns

- outlier_labels : numpy array of shape (n_samples,). For each observation, tells whether it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers.

Deprecated since version 0.6.9: fit_predict will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency.

fit_predict_score(*X*, *y*, *scoring='roc_auc_score'*)

DEPRECATED. Fit the detector, predict on samples, and evaluate the model by predefined metrics, e.g., ROC.

###### Parameters

- X : numpy array of shape (n_samples, n_features). The input samples.
- y : Ignored. Not used, present for API consistency by convention.
- scoring : str, optional (default='roc_auc_score'). Evaluation metric: 'roc_auc_score' for ROC score, 'prc_n_score' for Precision @ rank n score.

###### Returns

- score : float.

Deprecated since version 0.6.9: fit_predict_score will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency. Scoring can be done by calling an evaluation method, e.g., AUC ROC.

get_params(*deep=True*)

Get parameters for this estimator. See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information.

###### Parameters

- deep : bool, optional (default=True). If True, will return the parameters for this estimator and contained subobjects that are estimators.

###### Returns

- params : mapping of string to any. Parameter names mapped to their values.

predict(*X*, *return_confidence=False*)

Predict if a particular sample is an outlier or not.

###### Parameters

- X : numpy array of shape (n_samples, n_features). The input samples.
- return_confidence : boolean, optional (default=False). If True, also return the confidence of prediction.

###### Returns

- outlier_labels : numpy array of shape (n_samples,). For each observation, tells whether it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers.
- confidence : numpy array of shape (n_samples,). Only if return_confidence is set to True.
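The PyTorch variant is used in the same way as the TensorFlow one; a minimal sketch under illustrative assumptions (data shape, architecture, epoch count):

```python
import numpy as np
from pyod.models.auto_encoder_torch import AutoEncoder

rng = np.random.RandomState(1)
X_train = rng.randn(300, 8).astype(np.float32)  # float32 keeps torch conversion cheap

# device=None lets the detector pick CPU/GPU; loss_fn=None falls back to MSE.
ae = AutoEncoder(hidden_neurons=[8, 4], epochs=15, batch_size=32, device=None)
ae.fit(X_train)

labels = ae.labels_                        # binary outlier labels from threshold_
scores = ae.decision_function(X_train)     # reconstruction-error scores
```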
predict_confidence(*X*)

Predict the model's confidence in making the same prediction under slightly different training sets. See [BPVD20].

###### Parameters

- X : numpy array of shape (n_samples, n_features). The input samples.

###### Returns

- confidence : numpy array of shape (n_samples,). For each observation, tells how consistently the model would make the same prediction if the training set was perturbed. Returns a probability ranging in [0,1].

predict_proba(*X*, *method='linear'*, *return_confidence=False*)

Predict the probability of a sample being an outlier. Two approaches are possible:

1. simply use min-max conversion to linearly transform the outlier scores into the range [0,1]. The model must be fitted first.
2. use unifying scores, see [BKKSZ11].

###### Parameters

- X : numpy array of shape (n_samples, n_features). The input samples.
- method : str, optional (default='linear'). Probability conversion method. It must be one of 'linear' or 'unify'.
- return_confidence : boolean, optional (default=False). If True, also return the confidence of prediction.

###### Returns

- outlier_probability : numpy array of shape (n_samples, n_classes). For each observation, the probability that it should be considered an outlier according to the fitted model, ranging in [0,1]. Note that this depends on the number of classes, which is 2 by default ([proba of normal, proba of outlier]).

set_params(***params*)

Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form `<component>__<parameter>` so that it's possible to update each component of a nested object. See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information.

###### Returns

- self : object.

*class* pyod.models.auto_encoder_torch.InnerAutoencoder(*n_features*, *hidden_neurons=(128, 64)*, *dropout_rate=0.2*, *batch_norm=True*, *hidden_activation='relu'*)[[source]](_modules/pyod/models/auto_encoder_torch.html#InnerAutoencoder)

Bases: `Module`

add_module(*name: str*, *module: Optional[Module]*) → None

Adds a child module to the current module. The module can be accessed as an attribute using the given name.

Args:
- name (str): name of the child module. The child module can be accessed from this module using the given name.
- module (Module): child module to be added to the module.
apply(*fn: Callable[[Module], None]*) → T

Applies `fn` recursively to every submodule (as returned by `.children()`) as well as self. Typical use includes initializing the parameters of a model (see also the `torch.nn.init` documentation).

Args:
- fn (`Module` -> None): function to be applied to each submodule.

Returns: Module: self

Example:

```
>>> @torch.no_grad()
>>> def init_weights(m):
>>>     print(m)
>>>     if type(m) == nn.Linear:
>>>         m.weight.fill_(1.0)
>>>         print(m.weight)
>>> net = nn.Sequential(nn.Linear(2, 2), nn.Linear(2, 2))
>>> net.apply(init_weights)
Linear(in_features=2, out_features=2, bias=True)
Parameter containing:
tensor([[1., 1.],
        [1., 1.]], requires_grad=True)
Linear(in_features=2, out_features=2, bias=True)
Parameter containing:
tensor([[1., 1.],
        [1., 1.]], requires_grad=True)
Sequential(
  (0): Linear(in_features=2, out_features=2, bias=True)
  (1): Linear(in_features=2, out_features=2, bias=True)
)
```

bfloat16() → T

Casts all floating point parameters and buffers to `bfloat16` datatype. Note: this method modifies the module in-place.

Returns: Module: self

buffers(*recurse: bool = True*) → Iterator[Tensor]

Returns an iterator over module buffers.

Args:
- recurse (bool): if True, then yields buffers of this module and all submodules. Otherwise, yields only buffers that are direct members of this module.

Yields: torch.Tensor: module buffer

Example:

```
>>> # xdoctest: +SKIP("undefined vars")
>>> for buf in model.buffers():
>>>     print(type(buf), buf.size())
<class 'torch.Tensor'> (20L,)
<class 'torch.Tensor'> (20L, 1L, 5L, 5L)
```

children() → Iterator[Module]

Returns an iterator over immediate children modules.

Yields: Module: a child module

cpu() → T

Moves all model parameters and buffers to the CPU. Note: this method modifies the module in-place.

Returns: Module: self

cuda(*device: Optional[Union[int, device]] = None*) → T

Moves all model parameters and buffers to the GPU. This also makes associated parameters and buffers different objects, so it should be called before constructing the optimizer if the module will live on GPU while being optimized. Note: this method modifies the module in-place.

Args:
- device (int, optional): if specified, all parameters will be copied to that device.

Returns: Module: self

double() → T

Casts all floating point parameters and buffers to `double` datatype. Note: this method modifies the module in-place.

Returns: Module: self

eval() → T

Sets the module in evaluation mode. This has an effect only on certain modules.
See the documentation of particular modules for details of their behavior in training/evaluation mode, if they are affected, e.g. `Dropout`, `BatchNorm`, etc. This is equivalent to `self.train(False)`. See the PyTorch documentation on locally disabling gradient computation for a comparison between `.eval()` and several similar mechanisms that may be confused with it.

Returns: Module: self

extra_repr() → str

Sets the extra representation of the module. To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.

float() → T

Casts all floating point parameters and buffers to `float` datatype. Note: this method modifies the module in-place.

Returns: Module: self

forward(*x*)[[source]](_modules/pyod/models/auto_encoder_torch.html#InnerAutoencoder.forward)

Defines the computation performed at every call. Should be overridden by all subclasses. Note: although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.

get_buffer(*target: str*) → Tensor

Returns the buffer given by `target` if it exists, otherwise throws an error. See the docstring for `get_submodule` for a more detailed explanation of this method's functionality as well as how to correctly specify `target`.

Args:
- target: the fully-qualified string name of the buffer to look for. (See `get_submodule` for how to specify a fully-qualified string.)

Returns: torch.Tensor: the buffer referenced by `target`

Raises: AttributeError: if the target string references an invalid path or resolves to something that is not a buffer.

get_extra_state() → Any

Returns any extra state to include in the module's state_dict. Implement this and a corresponding set_extra_state() for your module if you need to store extra state. This function is called when building the module's state_dict(). Note that extra state should be pickleable to ensure working serialization of the state_dict. We only provide backwards compatibility guarantees for serializing Tensors; other objects may break backwards compatibility if their serialized pickled form changes.

Returns: object: any extra state to store in the module's state_dict

get_parameter(*target: str*) → Parameter

Returns the parameter given by `target` if it exists, otherwise throws an error. See the docstring for `get_submodule` for a more detailed explanation of this method's functionality as well as how to correctly specify `target`.

Args:
- target: the fully-qualified string name of the Parameter to look for. (See `get_submodule` for how to specify a fully-qualified string.)
Returns: torch.nn.Parameter: the Parameter referenced by `target`

Raises: AttributeError: if the target string references an invalid path or resolves to something that is not an `nn.Parameter`.

get_submodule(*target: str*) → Module

Returns the submodule given by `target` if it exists, otherwise throws an error. For example, let's say you have an `nn.Module` `A` that looks like this:

```
A(
    (net_b): Module(
        (net_c): Module(
            (conv): Conv2d(16, 33, kernel_size=(3, 3), stride=(2, 2))
        )
        (linear): Linear(in_features=100, out_features=200, bias=True)
    )
)
```

(The diagram shows an `nn.Module` `A`. `A` has a nested submodule `net_b`, which itself has two submodules `net_c` and `linear`. `net_c` then has a submodule `conv`.)

To check whether or not we have the `linear` submodule, we would call `get_submodule("net_b.linear")`. To check whether we have the `conv` submodule, we would call `get_submodule("net_b.net_c.conv")`.

The runtime of `get_submodule` is bounded by the degree of module nesting in `target`. A query against `named_modules` achieves the same result, but it is O(N) in the number of transitive modules. So, for a simple check to see if some submodule exists, `get_submodule` should always be used.

Args:
- target: the fully-qualified string name of the submodule to look for. (See the above example for how to specify a fully-qualified string.)

Returns: torch.nn.Module: the submodule referenced by `target`

Raises: AttributeError: if the target string references an invalid path or resolves to something that is not an `nn.Module`.

half() → T

Casts all floating point parameters and buffers to `half` datatype. Note: this method modifies the module in-place.

Returns: Module: self

ipu(*device: Optional[Union[int, device]] = None*) → T

Moves all model parameters and buffers to the IPU. This also makes associated parameters and buffers different objects, so it should be called before constructing the optimizer if the module will live on IPU while being optimized. Note: this method modifies the module in-place.

Args:
- device (int, optional): if specified, all parameters will be copied to that device.

Returns: Module: self

load_state_dict(*state_dict: Mapping[str, Any]*, *strict: bool = True*)

Copies parameters and buffers from state_dict into this module and its descendants. If `strict` is `True`, then the keys of state_dict must exactly match the keys returned by this module's `state_dict()` function.

Args:
- state_dict (dict): a dict containing parameters and persistent buffers.
- strict (bool, optional): whether to strictly enforce that the keys in state_dict match the keys returned by this module's `state_dict()` function. Default: `True`.

Returns: `NamedTuple` with `missing_keys` and `unexpected_keys` fields:
* **missing_keys** is a list of str containing the missing keys
* **unexpected_keys** is a list of str containing the unexpected keys

Note: if a parameter or buffer is registered as `None` and its corresponding key exists in state_dict, load_state_dict() will raise a `RuntimeError`.

modules() → Iterator[Module]

Returns an iterator over all modules in the network.

Yields: Module: a module in the network

Note: duplicate modules are returned only once. In the following example, `l` will be returned only once.

Example:

```
>>> l = nn.Linear(2, 2)
>>> net = nn.Sequential(l, l)
>>> for idx, m in enumerate(net.modules()):
...     print(idx, '->', m)

0 -> Sequential(
  (0): Linear(in_features=2, out_features=2, bias=True)
  (1): Linear(in_features=2, out_features=2, bias=True)
)
1 -> Linear(in_features=2, out_features=2, bias=True)
```

named_buffers(*prefix: str = ''*, *recurse: bool = True*) → Iterator[Tuple[str, Tensor]]

Returns an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.

Args:
- prefix (str): prefix to prepend to all buffer names.
- recurse (bool): if True, then yields buffers of this module and all submodules. Otherwise, yields only buffers that are direct members of this module.

Yields: (str, torch.Tensor): tuple containing the name and buffer

Example:

```
>>> # xdoctest: +SKIP("undefined vars")
>>> for name, buf in self.named_buffers():
>>>     if name in ['running_var']:
>>>         print(buf.size())
```

named_children() → Iterator[Tuple[str, Module]]

Returns an iterator over immediate children modules, yielding both the name of the module as well as the module itself.

Yields: (str, Module): tuple containing a name and child module

Example:

```
>>> # xdoctest: +SKIP("undefined vars")
>>> for name, module in model.named_children():
>>>     if name in ['conv4', 'conv5']:
>>>         print(module)
```

named_modules(*memo: Optional[Set[Module]] = None*, *prefix: str = ''*, *remove_duplicate: bool = True*)

Returns an iterator over all modules in the network, yielding both the name of the module as well as the module itself.
Args:
- memo: a memo to store the set of modules already added to the result.
- prefix: a prefix that will be added to the name of the module.
- remove_duplicate: whether or not to remove the duplicated module instances in the result.

Yields: (str, Module): tuple of name and module

Note: duplicate modules are returned only once. In the following example, `l` will be returned only once.

Example:

```
>>> l = nn.Linear(2, 2)
>>> net = nn.Sequential(l, l)
>>> for idx, m in enumerate(net.named_modules()):
...     print(idx, '->', m)

0 -> ('', Sequential(
  (0): Linear(in_features=2, out_features=2, bias=True)
  (1): Linear(in_features=2, out_features=2, bias=True)
))
1 -> ('0', Linear(in_features=2, out_features=2, bias=True))
```

named_parameters(*prefix: str = ''*, *recurse: bool = True*) → Iterator[Tuple[str, Parameter]]

Returns an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.

Args:
- prefix (str): prefix to prepend to all parameter names.
- recurse (bool): if True, then yields parameters of this module and all submodules. Otherwise, yields only parameters that are direct members of this module.

Yields: (str, Parameter): tuple containing the name and parameter

Example:

```
>>> # xdoctest: +SKIP("undefined vars")
>>> for name, param in self.named_parameters():
>>>     if name in ['bias']:
>>>         print(param.size())
```

parameters(*recurse: bool = True*) → Iterator[Parameter]

Returns an iterator over module parameters. This is typically passed to an optimizer.

Args:
- recurse (bool): if True, then yields parameters of this module and all submodules. Otherwise, yields only parameters that are direct members of this module.

Yields: Parameter: module parameter

Example:

```
>>> # xdoctest: +SKIP("undefined vars")
>>> for param in model.parameters():
>>>     print(type(param), param.size())
<class 'torch.Tensor'> (20L,)
<class 'torch.Tensor'> (20L, 1L, 5L, 5L)
```

register_backward_hook(*hook*) → RemovableHandle

Registers a backward hook on the module. This function is deprecated in favor of `register_full_backward_hook()`, and the behavior of this function will change in future versions.
Returns: `torch.utils.hooks.RemovableHandle`: a handle that can be used to remove the added hook by calling `handle.remove()`

register_buffer(*name: str*, *tensor: Optional[Tensor]*, *persistent: bool = True*) → None

Adds a buffer to the module. This is typically used to register a buffer that should not be considered a model parameter. For example, BatchNorm's `running_mean` is not a parameter, but is part of the module's state. Buffers are persistent by default and will be saved alongside parameters. This behavior can be changed by setting `persistent` to `False`. The only difference between a persistent buffer and a non-persistent buffer is that the latter will not be part of this module's state_dict.

Buffers can be accessed as attributes using the given names.

Args:
- name (str): name of the buffer. The buffer can be accessed from this module using the given name.
- tensor (Tensor or None): buffer to be registered. If `None`, then operations that run on buffers, such as `cuda`, are ignored, and the buffer is **not** included in the module's state_dict.
- persistent (bool): whether the buffer is part of this module's state_dict.

Example:

```
>>> # xdoctest: +SKIP("undefined vars")
>>> self.register_buffer('running_mean', torch.zeros(num_features))
```

register_forward_hook(*hook: Callable[..., None]*) → RemovableHandle

Registers a forward hook on the module. The hook will be called every time after forward() has computed an output. It should have the following signature:

```
hook(module, input, output) -> None or modified output
```

The input contains only the positional arguments given to the module. Keyword arguments won't be passed to the hooks and only to the `forward`. The hook can modify the output. It can also modify the input in place, but that will not have an effect on the forward pass, since the hook is called after forward() has run.

Returns: `torch.utils.hooks.RemovableHandle`: a handle that can be used to remove the added hook by calling `handle.remove()`
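As a brief illustration of the hook mechanism just described (a generic PyTorch sketch, not specific to PyOD; the tiny network is an assumption for demonstration):

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

# The hook runs after every forward() call and receives (module, input, output).
def log_output_shape(module, inputs, output):
    print(type(module).__name__, "->", tuple(output.shape))

handle = net[0].register_forward_hook(log_output_shape)
net(torch.randn(3, 4))   # prints: Linear -> (3, 8)
handle.remove()          # detach the hook when done
```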
register_forward_pre_hook(*hook: Callable[..., None]*) → RemovableHandle

Registers a forward pre-hook on the module. The hook will be called every time before forward() is invoked. It should have the following signature:

```
hook(module, input) -> None or modified input
```

The input contains only the positional arguments given to the module. Keyword arguments won't be passed to the hooks and only to the `forward`. The hook can modify the input. The user can either return a tuple or a single modified value from the hook; a single value will be wrapped into a tuple (unless that value is already a tuple).

Returns: `torch.utils.hooks.RemovableHandle`: a handle that can be used to remove the added hook by calling `handle.remove()`

register_full_backward_hook(*hook*) → RemovableHandle

Registers a backward hook on the module. The hook will be called every time the gradients with respect to a module are computed, i.e. the hook will execute if and only if the gradients with respect to module outputs are computed. The hook should have the following signature:

```
hook(module, grad_input, grad_output) -> tuple(Tensor) or None
```

The `grad_input` and `grad_output` are tuples that contain the gradients with respect to the inputs and outputs respectively. The hook should not modify its arguments, but it can optionally return a new gradient with respect to the input that will be used in place of `grad_input` in subsequent computations. `grad_input` will only correspond to the inputs given as positional arguments, and all kwarg arguments are ignored. Entries in `grad_input` and `grad_output` will be `None` for all non-Tensor arguments.

For technical reasons, when this hook is applied to a Module, its forward function will receive a view of each Tensor passed to the Module. Similarly, the caller will receive a view of each Tensor returned by the Module's forward function.

Warning: modifying inputs or outputs in place is not allowed when using backward hooks and will raise an error.

Returns: `torch.utils.hooks.RemovableHandle`: a handle that can be used to remove the added hook by calling `handle.remove()`

register_load_state_dict_post_hook(*hook*)

Registers a post-hook to be run after the module's `load_state_dict` is called. It should have the following signature:

```
hook(module, incompatible_keys) -> None
```

The `module` argument is the current module that this hook is registered on, and the `incompatible_keys` argument is a `NamedTuple` consisting of attributes `missing_keys` and `unexpected_keys`. `missing_keys` is a `list` of `str` containing the missing keys and `unexpected_keys` is a `list` of `str` containing the unexpected keys. The given incompatible_keys can be modified in place if needed.

Note that the checks performed when calling load_state_dict() with `strict=True` are affected by modifications the hook makes to `missing_keys` or `unexpected_keys`, as expected. Additions to either set of keys will result in an error being thrown when `strict=True`, and clearing out both missing and unexpected keys will avoid an error.

Returns: `torch.utils.hooks.RemovableHandle`: a handle that can be used to remove the added hook by calling `handle.remove()`
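Since `load_state_dict` and the hooks above all operate on the module's state dict, a generic round-trip sketch may help (plain PyTorch; the file path is an illustrative assumption):

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)
checkpoint = model.state_dict()            # shallow copy: references to params/buffers
torch.save(checkpoint, "/tmp/linear.pt")   # illustrative path

restored = nn.Linear(4, 2)
result = restored.load_state_dict(torch.load("/tmp/linear.pt"), strict=True)
print(result.missing_keys, result.unexpected_keys)   # both empty on success
```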
register_module(*name: str*, *module: Optional[Module]*) → None

Alias for add_module().

register_parameter(*name: str*, *param: Optional[Parameter]*) → None

Adds a parameter to the module. The parameter can be accessed as an attribute using the given name.

Args:
- name (str): name of the parameter. The parameter can be accessed from this module using the given name.
- param (Parameter or None): parameter to be added to the module. If `None`, then operations that run on parameters, such as `cuda`, are ignored, and the parameter is **not** included in the module's state_dict.

requires_grad_(*requires_grad: bool = True*) → T

Change if autograd should record operations on parameters in this module. This method sets the parameters' `requires_grad` attributes in-place. It is helpful for freezing part of the module for fine-tuning, or for training parts of a model individually (e.g., GAN training). See the PyTorch documentation on locally disabling gradient computation for a comparison between `.requires_grad_()` and several similar mechanisms that may be confused with it.

Args:
- requires_grad (bool): whether autograd should record operations on parameters in this module. Default: `True`.

Returns: Module: self

set_extra_state(*state: Any*)

This function is called from load_state_dict() to handle any extra state found within the state_dict. Implement this function and a corresponding get_extra_state() for your module if you need to store extra state within its state_dict.

Args:
- state (dict): extra state from the state_dict.

share_memory() → T

See `torch.Tensor.share_memory_()`.

state_dict(**args*, *destination=None*, *prefix=''*, *keep_vars=False*)

Returns a dictionary containing references to the whole state of the module. Both parameters and persistent buffers (e.g. running averages) are included. Keys are the corresponding parameter and buffer names. Parameters and buffers set to `None` are not included.

Note: the returned object is a shallow copy. It contains references to the module's parameters and buffers.

Warning: currently `state_dict()` also accepts positional arguments for `destination`, `prefix` and `keep_vars`, in that order. However, this is being deprecated, and keyword arguments will be enforced in future releases.
Warning: please avoid the use of the argument `destination`, as it is not designed for end users.

Args:
- destination (dict, optional): if provided, the state of the module will be updated into the dict and the same object is returned. Otherwise, an `OrderedDict` will be created and returned. Default: `None`.
- prefix (str, optional): a prefix added to parameter and buffer names to compose the keys in the state_dict. Default: `''`.
- keep_vars (bool, optional): by default, the Tensors returned in the state dict are detached from autograd. If set to `True`, detaching will not be performed. Default: `False`.

Returns: dict: a dictionary containing a whole state of the module

Example:

```
>>> # xdoctest: +SKIP("undefined vars")
>>> module.state_dict().keys()
['bias', 'weight']
```

to(**args*, ***kwargs*)

Moves and/or casts the parameters and buffers. This can be called as

to(*device=None*, *dtype=None*, *non_blocking=False*)

to(*dtype*, *non_blocking=False*)

to(*tensor*, *non_blocking=False*)

to(*memory_format=torch.channels_last*)

Its signature is similar to `torch.Tensor.to()`, but only accepts floating point or complex `dtype`s. In addition, this method will only cast the floating point or complex parameters and buffers to `dtype` (if given). The integral parameters and buffers will be moved to `device`, if that is given, but with dtypes unchanged. When `non_blocking` is set, it tries to convert/move asynchronously with respect to the host if possible, e.g., moving CPU Tensors with pinned memory to CUDA devices. See below for examples.

Note: this method modifies the module in-place.

Args:
- device (`torch.device`): the desired device of the parameters and buffers in this module.
- dtype (`torch.dtype`): the desired floating point or complex dtype of the parameters and buffers in this module.
- tensor (torch.Tensor): Tensor whose dtype and device are the desired dtype and device for all parameters and buffers in this module.
- memory_format (`torch.memory_format`): the desired memory format for 4D parameters and buffers in this module (keyword-only argument).

Returns: Module: self

Examples:

```
>>> # xdoctest: +IGNORE_WANT("non-deterministic")
>>> linear = nn.Linear(2, 2)
>>> linear.weight
Parameter containing:
tensor([[ 0.1913, -0.3420],
        [-0.5113, -0.2325]])
>>> linear.to(torch.double)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1913, -0.3420],
        [-0.5113, -0.2325]], dtype=torch.float64)
>>> # xdoctest: +REQUIRES(env:TORCH_DOCTEST_CUDA1)
>>> gpu1 = torch.device("cuda:1")
>>> linear.to(gpu1, dtype=torch.half, non_blocking=True)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1914, -0.3420],
        [-0.5112, -0.2324]], dtype=torch.float16, device='cuda:1')
>>> cpu = torch.device("cpu")
>>> linear.to(cpu)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1914, -0.3420],
        [-0.5112, -0.2324]], dtype=torch.float16)
>>> linear = nn.Linear(2, 2, bias=None).to(torch.cdouble)
>>> linear.weight
Parameter containing:
tensor([[ 0.3741+0.j,  0.2382+0.j],
        [ 0.5593+0.j, -0.4443+0.j]], dtype=torch.complex128)
>>> linear(torch.ones(3, 2, dtype=torch.cdouble))
tensor([[0.6122+0.j, 0.1150+0.j],
        [0.6122+0.j, 0.1150+0.j],
        [0.6122+0.j, 0.1150+0.j]], dtype=torch.complex128)
```

to_empty(***, *device: Union[str, device]*) → T
Moves the parameters and buffers to the specified device without copying storage.

Args:
- device (`torch.device`): the desired device of the parameters and buffers in this module.

Returns: Module: self

train(*mode: bool = True*) → T

Sets the module in training mode. This has an effect only on certain modules. See the documentation of particular modules for details of their behavior in training/evaluation mode, if they are affected, e.g. `Dropout`, `BatchNorm`, etc.

Args:
- mode (bool): whether to set training mode (`True`) or evaluation mode (`False`). Default: `True`.

Returns: Module: self

type(*dst_type: Union[dtype, str]*) → T

Casts all parameters and buffers to `dst_type`. Note: this method modifies the module in-place.

Args:
- dst_type (type or string): the desired type.

Returns: Module: self

xpu(*device: Optional[Union[int, device]] = None*) → T

Moves all model parameters and buffers to the XPU. This also makes associated parameters and buffers different objects, so it should be called before constructing the optimizer if the module will live on XPU while being optimized. Note: this method modifies the module in-place.

Args:
- device (int, optional): if specified, all parameters will be copied to that device.

Returns: Module: self

zero_grad(*set_to_none: bool = False*) → None

Sets gradients of all model parameters to zero. See the similar function under `torch.optim.Optimizer` for more context.

Args:
- set_to_none (bool): instead of setting to zero, set the grads to None. See `torch.optim.Optimizer.zero_grad()` for details.

*class* pyod.models.auto_encoder_torch.PyODDataset(*X*, *y=None*, *mean=None*, *std=None*)[[source]](_modules/pyod/models/auto_encoder_torch.html#PyODDataset)

Bases: `Dataset`

PyOD Dataset class for the PyTorch DataLoader.

#### pyod.models.cblof module

Clustering Based Local Outlier Factor (CBLOF).

*class* pyod.models.cblof.CBLOF(*n_clusters=8*, *contamination=0.1*, *clustering_estimator=None*, *alpha=0.9*, *beta=5*, *use_weights=False*, *check_estimator=False*, *random_state=None*, *n_jobs=1*)[[source]](_modules/pyod/models/cblof.html#CBLOF)

Bases: `BaseDetector`

The CBLOF operator calculates the outlier score based on the cluster-based local outlier factor. CBLOF takes as input the data set and the cluster model that was generated by a clustering algorithm. It classifies the clusters into small clusters and large clusters using the parameters alpha and beta. The anomaly score is then calculated based on the size of the cluster the point belongs to, as well as the distance to the nearest large cluster.
Use weighting for the outlier factor based on the sizes of the clusters, as proposed in the original publication. Since this might lead to unexpected behavior (outliers close to small clusters are not found), it is disabled by default. Outlier scores are then solely computed based on the distance to the closest large cluster center.

By default, k-means is used as the clustering algorithm instead of the Squeezer algorithm mentioned in the original paper, for multiple reasons. See [BHXD03] for details.

##### Parameters

- n_clusters : int, optional (default=8). The number of clusters to form as well as the number of centroids to generate.
- contamination : float in (0., 0.5), optional (default=0.1). The amount of contamination of the data set, i.e. the proportion of outliers in the data set. Used when fitting to define the threshold on the decision function.
- clustering_estimator : Estimator, optional (default=None). The base clustering algorithm for performing data clustering. A valid clustering algorithm should be passed in. The estimator should have the standard sklearn APIs, fit() and predict(), and the attributes `labels_` and `cluster_centers_`. If `cluster_centers_` is not among the attributes once the model is fit, it is calculated as the mean of the samples in a cluster. If not set, CBLOF uses KMeans for scalability; see <https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html>.
- alpha : float in (0.5, 1), optional (default=0.9). Coefficient for deciding small and large clusters: the ratio of the number of samples in large clusters to the number of samples in small clusters.
- beta : int or float greater than 1, optional (default=5). Coefficient for deciding small and large clusters. For a list of clusters sorted by size |C1|, |C2|, ..., |Cn|, beta = |Ck|/|Ck-1|.
- use_weights : bool, optional (default=False). If set to True, the sizes of clusters are used as weights in the outlier score calculation.
- check_estimator : bool, optional (default=False). If set to True, check whether the base estimator is consistent with the sklearn standard. Warning: check_estimator may throw errors with scikit-learn 0.20 and above.
- random_state : int, RandomState or None, optional (default=None). If int, random_state is the seed used by the random number generator; if RandomState instance, random_state is the random number generator; if None, the random number generator is the RandomState instance used by np.random.

##### Attributes

- clustering_estimator_ : Estimator, sklearn instance. Base estimator for clustering.
- cluster_labels_ : list of shape (n_samples,). Cluster assignment for the training samples.
- n_clusters_ : int. Actual number of clusters (possibly different from n_clusters).
- cluster_sizes_ : list of shape (n_clusters_,). The size of each cluster once fitted with the training data.
- decision_scores_ : numpy array of shape (n_samples,). The outlier scores of the training data. The higher, the more abnormal. Outliers tend to have higher scores. This value is available once the detector is fitted.
- cluster_centers_ : numpy array of shape (n_clusters_, n_features). The center of each cluster.
- small_cluster_labels_ : list of cluster numbers. The cluster assignments belonging to small clusters.
- large_cluster_labels_ : list of cluster numbers. The cluster assignments belonging to large clusters.
- threshold_ : float. The threshold is based on `contamination`. It is the `n_samples * contamination` most abnormal samples in `decision_scores_`.
The threshold is calculated for generating binary outlier labels.
- labels_ : int, either 0 or 1. The binary labels of the training data: 0 stands for inliers and 1 for outliers/anomalies. It is generated by applying `threshold_` on `decision_scores_`.

`decision_function(X)`

Predict raw anomaly score of X using the fitted detector.

The anomaly score of an input sample is computed based on different detector algorithms. For consistency, outliers are assigned with larger anomaly scores.

###### Parameters

- X : numpy array of shape (n_samples, n_features). The training input samples. Sparse matrices are accepted only if they are supported by the base estimator.

###### Returns

- anomaly_scores : numpy array of shape (n_samples,). The anomaly score of the input samples.

`fit(X, y=None)`

Fit detector. y is ignored in unsupervised methods.

###### Parameters

- X : numpy array of shape (n_samples, n_features). The input samples.
- y : Ignored. Not used, present for API consistency by convention.

###### Returns

- self : object. Fitted estimator.

`fit_predict(X, y=None)`

DEPRECATED. Fit detector first and then predict whether a particular sample is an outlier or not. y is ignored in unsupervised models.

- X : numpy array of shape (n_samples, n_features). The input samples.
- y : Ignored. Not used, present for API consistency by convention.
- outlier_labels : numpy array of shape (n_samples,). For each observation, tells whether it should be considered as an outlier according to the fitted model: 0 stands for inliers and 1 for outliers.

Deprecated since version 0.6.9: fit_predict will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency.

`fit_predict_score(X, y, scoring='roc_auc_score')`

DEPRECATED. Fit the detector, predict on samples, and evaluate the model by predefined metrics, e.g., ROC.

- X : numpy array of shape (n_samples, n_features). The input samples.
- y : Ignored. Not used, present for API consistency by convention.
- scoring : str, optional (default='roc_auc_score'). Evaluation metric: 'roc_auc_score' for ROC score, 'prc_n_score' for Precision @ rank n score.
- score : float

Deprecated since version 0.6.9: fit_predict_score will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency. Scoring can be done by calling an evaluation method, e.g., AUC ROC.

`get_params(deep=True)`

Get parameters for this estimator.

See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information.

###### Parameters

- deep : bool, optional (default=True). If True, will return the parameters for this estimator and contained subobjects that are estimators.

###### Returns

- params : mapping of string to any. Parameter names mapped to their values.

`predict(X, return_confidence=False)`

Predict if a particular sample is an outlier or not.

###### Parameters

- X : numpy array of shape (n_samples, n_features). The input samples.
- return_confidence : boolean, optional (default=False). If True, also return the confidence of prediction.
###### Returns

- outlier_labels : numpy array of shape (n_samples,). For each observation, tells whether it should be considered as an outlier according to the fitted model: 0 stands for inliers and 1 for outliers.
- confidence : numpy array of shape (n_samples,). Only if return_confidence is set to True.

`predict_confidence(X)`

Predict the model's confidence in making the same prediction under slightly different training sets. See [BPVD20].

###### Parameters

- X : numpy array of shape (n_samples, n_features). The input samples.

###### Returns

- confidence : numpy array of shape (n_samples,). For each observation, tells how consistently the model would make the same prediction if the training set was perturbed. Returns a probability, ranging in [0,1].

`predict_proba(X, method='linear', return_confidence=False)`

Predict the probability of a sample being an outlier.

Two approaches are possible:

1. simply use min-max conversion to linearly transform the outlier scores into the range of [0,1]. The model must be fitted first.
2. use unifying scores, see [BKKSZ11].

###### Parameters

- X : numpy array of shape (n_samples, n_features). The input samples.
- method : str, optional (default='linear'). Probability conversion method. It must be one of 'linear' or 'unify'.
- return_confidence : boolean, optional (default=False). If True, also return the confidence of prediction.

###### Returns

- outlier_probability : numpy array of shape (n_samples, n_classes). For each observation, tells whether or not it should be considered as an outlier according to the fitted model. Returns the outlier probability, ranging in [0,1]. Note it depends on the number of classes, which is by default 2 classes ([proba of normal, proba of outliers]).

`set_params(**params)`

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form `<component>__<parameter>` so that it's possible to update each component of a nested object.

See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information.

###### Returns

- self : object

#### pyod.models.cof module

Connectivity-Based Outlier Factor (COF) Algorithm

*class* `pyod.models.cof.COF(contamination=0.1, n_neighbors=20, method='fast')`

Bases: `BaseDetector`

Connectivity-Based Outlier Factor (COF). COF uses the ratio of the average chaining distance of a data point and the average of the average chaining distances of its k nearest neighbors as the outlier score for an observation. See [BTCFC02] for details.

Two versions of COF are supported:

- Fast COF: computes the entire pairwise distance matrix at the cost of an O(n^2) memory requirement.
- Memory-efficient COF: calculates pairwise distances incrementally. Use this implementation when it is not feasible to fit the n-by-n distance matrix in memory. This leads to a linear overhead because many distances will have to be recalculated.

##### Parameters

- contamination : float in (0., 0.5), optional (default=0.1). The amount of contamination of the data set, i.e. the proportion of outliers in the data set. Used when fitting to define the threshold on the decision function.
- n_neighbors : int, optional (default=20). Number of neighbors to use by default for k-neighbors queries. Note that n_neighbors should be less than the number of samples. If n_neighbors is larger than the number of samples provided, all samples will be used.
- method : string, optional (default='fast'). Valid values for method are: 'fast' (Fast COF, computes the full pairwise distance matrix up front) and 'memory' (memory-efficient COF, computes pairwise distances only when needed, at the cost of computational speed).

##### Attributes

- decision_scores_ : numpy array of shape (n_samples,). The outlier scores of the training data. The higher, the more abnormal. Outliers tend to have higher scores. This value is available once the detector is fitted.
- threshold_ : float. The threshold is based on `contamination`. It is the `n_samples * contamination` most abnormal samples in `decision_scores_`. The threshold is calculated for generating binary outlier labels.
- labels_ : int, either 0 or 1. The binary labels of the training data: 0 stands for inliers and 1 for outliers/anomalies. It is generated by applying `threshold_` on `decision_scores_`.
- n_neighbors_ : int. Number of neighbors to use by default for k-neighbors queries.

`decision_function(X)`

Predict raw anomaly score of X using the fitted detector.

The anomaly score of an input sample is computed based on different detector algorithms. For consistency, outliers are assigned with larger anomaly scores.

###### Parameters

- X : numpy array of shape (n_samples, n_features). The training input samples. Sparse matrices are accepted only if they are supported by the base estimator.

###### Returns

- anomaly_scores : numpy array of shape (n_samples,). The anomaly score of the input samples.

`fit(X, y=None)`

Fit detector. y is ignored in unsupervised methods.

###### Parameters

- X : numpy array of shape (n_samples, n_features). The input samples.
- y : Ignored. Not used, present for API consistency by convention.

###### Returns

- self : object. Fitted estimator.

`fit_predict(X, y=None)`

DEPRECATED. Fit detector first and then predict whether a particular sample is an outlier or not. y is ignored in unsupervised models.

- X : numpy array of shape (n_samples, n_features). The input samples.
- y : Ignored. Not used, present for API consistency by convention.
- outlier_labels : numpy array of shape (n_samples,). For each observation, tells whether it should be considered as an outlier according to the fitted model: 0 stands for inliers and 1 for outliers.

Deprecated since version 0.6.9: fit_predict will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency.

`fit_predict_score(X, y, scoring='roc_auc_score')`

DEPRECATED. Fit the detector, predict on samples, and evaluate the model by predefined metrics, e.g., ROC.

- X : numpy array of shape (n_samples, n_features). The input samples.
- y : Ignored. Not used, present for API consistency by convention.
- scoring : str, optional (default='roc_auc_score'). Evaluation metric: 'roc_auc_score' for ROC score, 'prc_n_score' for Precision @ rank n score.
- score : float

Deprecated since version 0.6.9: fit_predict_score will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency. Scoring can be done by calling an evaluation method, e.g., AUC ROC.

`get_params(deep=True)`

Get parameters for this estimator.

See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information.

###### Parameters

- deep : bool, optional (default=True). If True, will return the parameters for this estimator and contained subobjects that are estimators.

###### Returns

- params : mapping of string to any. Parameter names mapped to their values.

`predict(X, return_confidence=False)`

Predict if a particular sample is an outlier or not.

###### Parameters

- X : numpy array of shape (n_samples, n_features). The input samples.
- return_confidence : boolean, optional (default=False). If True, also return the confidence of prediction.

###### Returns

- outlier_labels : numpy array of shape (n_samples,). For each observation, tells whether it should be considered as an outlier according to the fitted model: 0 stands for inliers and 1 for outliers.
- confidence : numpy array of shape (n_samples,). Only if return_confidence is set to True.

`predict_confidence(X)`

Predict the model's confidence in making the same prediction under slightly different training sets. See [BPVD20].

###### Parameters

- X : numpy array of shape (n_samples, n_features). The input samples.

###### Returns

- confidence : numpy array of shape (n_samples,). For each observation, tells how consistently the model would make the same prediction if the training set was perturbed. Returns a probability, ranging in [0,1].

`predict_proba(X, method='linear', return_confidence=False)`

Predict the probability of a sample being an outlier.

Two approaches are possible:

1. simply use min-max conversion to linearly transform the outlier scores into the range of [0,1]. The model must be fitted first.
2. use unifying scores, see [BKKSZ11].

###### Parameters

- X : numpy array of shape (n_samples, n_features). The input samples.
- method : str, optional (default='linear'). Probability conversion method. It must be one of 'linear' or 'unify'.
- return_confidence : boolean, optional (default=False). If True, also return the confidence of prediction.

###### Returns

- outlier_probability : numpy array of shape (n_samples, n_classes). For each observation, tells whether or not it should be considered as an outlier according to the fitted model. Returns the outlier probability, ranging in [0,1]. Note it depends on the number of classes, which is by default 2 classes ([proba of normal, proba of outliers]).

`set_params(**params)`

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form `<component>__<parameter>` so that it's possible to update each component of a nested object.

See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information.
###### Returns

- self : object

#### pyod.models.combination module

A collection of model combination functionalities.

`pyod.models.combination.aom(scores, n_buckets=5, method='static', bootstrap_estimators=False, random_state=None)`

Average of Maximum: an ensemble method for combining multiple estimators. See [BAS15] for details.

First, the estimators are divided into subgroups and the maximum score is taken as the subgroup score. Finally, the average of all subgroup outlier scores is taken.

##### Parameters

- scores : numpy array of shape (n_samples, n_estimators). The score matrix outputted from various estimators.
- n_buckets : int, optional (default=5). The number of subgroups to build.
- method : str, optional (default='static'). One of {'static', 'dynamic'}; if 'dynamic', build subgroups randomly with dynamic bucket size.
- bootstrap_estimators : bool, optional (default=False). Whether estimators are drawn with replacement.
- random_state : int, RandomState instance or None, optional (default=None). If int, random_state is the seed used by the random number generator; if RandomState instance, random_state is the random number generator; if None, the random number generator is the RandomState instance used by np.random.

##### Returns

- combined_scores : numpy array of shape (n_samples,). The combined outlier scores.

`pyod.models.combination.average(scores, estimator_weights=None)`

Combination method to merge the outlier scores from multiple estimators by taking the average.

##### Parameters

- scores : numpy array of shape (n_samples, n_estimators). Score matrix from multiple estimators on the same samples.
- estimator_weights : list of shape (1, n_estimators). If specified, a weighted average is used.

##### Returns

- combined_scores : numpy array of shape (n_samples,). The combined outlier scores.

`pyod.models.combination.majority_vote(scores, weights=None)`

Combination method to merge the scores from multiple estimators by majority vote.

##### Parameters

- scores : numpy array of shape (n_samples, n_estimators). Score matrix from multiple estimators on the same samples.
- weights : numpy array of shape (1, n_estimators). If specified, a weighted majority vote is used.

##### Returns

- combined_scores : numpy array of shape (n_samples,). The combined scores.

`pyod.models.combination.maximization(scores)`

Combination method to merge the outlier scores from multiple estimators by taking the maximum.

##### Parameters

- scores : numpy array of shape (n_samples, n_estimators). Score matrix from multiple estimators on the same samples.

##### Returns

- combined_scores : numpy array of shape (n_samples,). The combined outlier scores.

`pyod.models.combination.median(scores)`

Combination method to merge the scores from multiple estimators by taking the median.

##### Parameters

- scores : numpy array of shape (n_samples, n_estimators). Score matrix from multiple estimators on the same samples.

##### Returns

- combined_scores : numpy array of shape (n_samples,). The combined scores.
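A short sketch of these combination helpers on a made-up score matrix. The standardization step uses `pyod.utils.utility.standardizer`, a z-score helper bundled with PyOD; standardizing first is the usual practice, since different detectors score on different scales:

```
import numpy as np
from pyod.models.combination import aom, average, maximization, median, moa
from pyod.utils.utility import standardizer  # z-score normalization helper

# Made-up score matrix: 6 samples scored by 4 detectors
scores = np.array([
    [0.5, 1.2, 0.3, 0.9],
    [2.8, 3.1, 2.5, 3.4],
    [0.4, 0.8, 0.2, 0.7],
    [0.6, 1.0, 0.5, 1.1],
    [3.2, 2.9, 3.6, 3.0],
    [0.3, 0.9, 0.4, 0.8],
])
scores_norm = standardizer(scores)

print(average(scores_norm))           # mean over detectors
print(maximization(scores_norm))      # max over detectors
print(median(scores_norm))            # median over detectors
print(aom(scores_norm, n_buckets=2))  # average of per-subgroup maxima
print(moa(scores_norm, n_buckets=2))  # maximum of per-subgroup averages
```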
`pyod.models.combination.moa(scores, n_buckets=5, method='static', bootstrap_estimators=False, random_state=None)`

Maximization of Average: an ensemble method for combining multiple estimators. See [BAS15] for details.

First, the estimators are divided into subgroups and the average score is taken as the subgroup score. Finally, the maximum of all subgroup outlier scores is taken.

##### Parameters

- scores : numpy array of shape (n_samples, n_estimators). The score matrix outputted from various estimators.
- n_buckets : int, optional (default=5). The number of subgroups to build.
- method : str, optional (default='static'). One of {'static', 'dynamic'}; if 'dynamic', build subgroups randomly with dynamic bucket size.
- bootstrap_estimators : bool, optional (default=False). Whether estimators are drawn with replacement.
- random_state : int, RandomState instance or None, optional (default=None). If int, random_state is the seed used by the random number generator; if RandomState instance, random_state is the random number generator; if None, the random number generator is the RandomState instance used by np.random.

##### Returns

- combined_scores : numpy array of shape (n_samples,). The combined outlier scores.

#### pyod.models.cd module

Cook's distance outlier detection (CD)

*class* `pyod.models.cd.CD(contamination=0.1, model=LinearRegression())`

Bases: `BaseDetector`

Cook's distance can be used to identify points that negatively affect a regression model. A combination of each observation's leverage and residual values is used in the measurement: higher leverage and residuals relate to higher Cook's distances. Note that this method is unsupervised and requires at least two features for X, from which the mean Cook's distance for each data point is calculated. Read more in [BCoo77].

##### Parameters

- contamination : float in (0., 0.5), optional (default=0.1). The amount of contamination of the data set, i.e. the proportion of outliers in the data set. Used when fitting to define the threshold on the decision function.
- model : object, optional (default=LinearRegression()). Regression model used to calculate the Cook's distance.

##### Attributes

- decision_scores_ : numpy array of shape (n_samples,). The outlier scores of the training data. The higher, the more abnormal. Outliers tend to have higher scores. This value is available once the detector is fitted.
- threshold_ : float. The modified z-score to use as a threshold. Observations with a modified z-score (based on the median absolute deviation) greater than this value will be classified as outliers.
- labels_ : int, either 0 or 1. The binary labels of the training data: 0 stands for inliers and 1 for outliers/anomalies. It is generated by applying `threshold_` on `decision_scores_`.

`decision_function(X)`

Predict raw anomaly score of X using the fitted detector. For consistency, outliers are assigned with larger anomaly scores.

###### Parameters

- X : numpy array of shape (n_samples, n_features). The training input samples. Sparse matrices are accepted only if they are supported by the base estimator.
###### Returns

- anomaly_scores : numpy array of shape (n_samples,). The anomaly score of the input samples.

`fit(X, y=None)`

Fit detector. y is ignored in unsupervised methods.

###### Parameters

- X : numpy array of shape (n_samples, n_features). The input samples.
- y : Ignored. Not used, present for API consistency by convention.

###### Returns

- self : object. Fitted estimator.

`fit_predict(X, y=None)`

DEPRECATED. Fit detector first and then predict whether a particular sample is an outlier or not. y is ignored in unsupervised models.

- X : numpy array of shape (n_samples, n_features). The input samples.
- y : Ignored. Not used, present for API consistency by convention.
- outlier_labels : numpy array of shape (n_samples,). For each observation, tells whether it should be considered as an outlier according to the fitted model: 0 stands for inliers and 1 for outliers.

Deprecated since version 0.6.9: fit_predict will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency.

`fit_predict_score(X, y, scoring='roc_auc_score')`

DEPRECATED. Fit the detector, predict on samples, and evaluate the model by predefined metrics, e.g., ROC.

- X : numpy array of shape (n_samples, n_features). The input samples.
- y : Ignored. Not used, present for API consistency by convention.
- scoring : str, optional (default='roc_auc_score'). Evaluation metric: 'roc_auc_score' for ROC score, 'prc_n_score' for Precision @ rank n score.
- score : float

Deprecated since version 0.6.9: fit_predict_score will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency. Scoring can be done by calling an evaluation method, e.g., AUC ROC.

`get_params(deep=True)`

Get parameters for this estimator.

See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information.

###### Parameters

- deep : bool, optional (default=True). If True, will return the parameters for this estimator and contained subobjects that are estimators.

###### Returns

- params : mapping of string to any. Parameter names mapped to their values.

`predict(X, return_confidence=False)`

Predict if a particular sample is an outlier or not.

###### Parameters

- X : numpy array of shape (n_samples, n_features). The input samples.
- return_confidence : boolean, optional (default=False). If True, also return the confidence of prediction.

###### Returns

- outlier_labels : numpy array of shape (n_samples,). For each observation, tells whether it should be considered as an outlier according to the fitted model: 0 stands for inliers and 1 for outliers.
- confidence : numpy array of shape (n_samples,). Only if return_confidence is set to True.

`predict_confidence(X)`

Predict the model's confidence in making the same prediction under slightly different training sets. See [BPVD20].

###### Parameters

- X : numpy array of shape (n_samples, n_features). The input samples.

###### Returns

- confidence : numpy array of shape (n_samples,). For each observation, tells how consistently the model would make the same prediction if the training set was perturbed. Returns a probability, ranging in [0,1].
`predict_proba(X, method='linear', return_confidence=False)`

Predict the probability of a sample being an outlier.

Two approaches are possible:

1. simply use min-max conversion to linearly transform the outlier scores into the range of [0,1]. The model must be fitted first.
2. use unifying scores, see [BKKSZ11].

###### Parameters

- X : numpy array of shape (n_samples, n_features). The input samples.
- method : str, optional (default='linear'). Probability conversion method. It must be one of 'linear' or 'unify'.
- return_confidence : boolean, optional (default=False). If True, also return the confidence of prediction.

###### Returns

- outlier_probability : numpy array of shape (n_samples, n_classes). For each observation, tells whether or not it should be considered as an outlier according to the fitted model. Returns the outlier probability, ranging in [0,1]. Note it depends on the number of classes, which is by default 2 classes ([proba of normal, proba of outliers]).

`set_params(**params)`

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form `<component>__<parameter>` so that it's possible to update each component of a nested object.

See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information.

###### Returns

- self : object

#### pyod.models.copod module

Copula Based Outlier Detector (COPOD)

*class* `pyod.models.copod.COPOD(contamination=0.1, n_jobs=1)`

Bases: `BaseDetector`

COPOD class for Copula Based Outlier Detector. COPOD is a parameter-free, highly interpretable outlier detection algorithm based on empirical copula models. See [BLZB+20] for details.

##### Parameters

- contamination : float in (0., 0.5), optional (default=0.1). The amount of contamination of the data set, i.e. the proportion of outliers in the data set. Used when fitting to define the threshold on the decision function.
- n_jobs : optional (default=1). The number of jobs to run in parallel for both fit and predict. If -1, then the number of jobs is set to the number of cores.

##### Attributes

- decision_scores_ : numpy array of shape (n_samples,). The outlier scores of the training data. The higher, the more abnormal. Outliers tend to have higher scores. This value is available once the detector is fitted.
- threshold_ : float. The threshold is based on `contamination`. It is the `n_samples * contamination` most abnormal samples in `decision_scores_`. The threshold is calculated for generating binary outlier labels.
- labels_ : int, either 0 or 1. The binary labels of the training data: 0 stands for inliers and 1 for outliers/anomalies. It is generated by applying `threshold_` on `decision_scores_`.

`decision_function(X)`

Predict raw anomaly score of X using the fitted detector. For consistency, outliers are assigned with larger anomaly scores.

###### Parameters

- X : numpy array of shape (n_samples, n_features). The training input samples. Sparse matrices are accepted only if they are supported by the base estimator.
###### Returns

- anomaly_scores : numpy array of shape (n_samples,). The anomaly score of the input samples.

`explain_outlier(ind, columns=None, cutoffs=None, feature_names=None, file_name=None, file_type=None)`

Plot a dimensional outlier graph for a given data point within the dataset.

###### Parameters

- ind : int. The index of the data point for which to obtain a dimensional outlier graph.
- columns : list. Specify a list of features/dimensions for plotting. If not specified, use all features.
- cutoffs : list of floats in (0., 1), optional (default=[0.95, 0.99]). The significance cutoff bands of the dimensional outlier graph.
- feature_names : list of strings. The display names of all columns of the dataset, to show on the x-axis of the plot.
- file_name : string. The name under which to save the figure.
- file_type : string. The file type with which to save the figure.

###### Returns

- Plot : matplotlib plot. The dimensional outlier graph for the data point with index ind.

`fit(X, y=None)`

Fit detector. y is ignored in unsupervised methods.

###### Parameters

- X : numpy array of shape (n_samples, n_features). The input samples.
- y : Ignored. Not used, present for API consistency by convention.

###### Returns

- self : object. Fitted estimator.

`fit_predict(X, y=None)`

DEPRECATED. Fit detector first and then predict whether a particular sample is an outlier or not. y is ignored in unsupervised models.

- X : numpy array of shape (n_samples, n_features). The input samples.
- y : Ignored. Not used, present for API consistency by convention.
- outlier_labels : numpy array of shape (n_samples,). For each observation, tells whether it should be considered as an outlier according to the fitted model: 0 stands for inliers and 1 for outliers.

Deprecated since version 0.6.9: fit_predict will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency.

`fit_predict_score(X, y, scoring='roc_auc_score')`

DEPRECATED. Fit the detector, predict on samples, and evaluate the model by predefined metrics, e.g., ROC.

- X : numpy array of shape (n_samples, n_features). The input samples.
- y : Ignored. Not used, present for API consistency by convention.
- scoring : str, optional (default='roc_auc_score'). Evaluation metric: 'roc_auc_score' for ROC score, 'prc_n_score' for Precision @ rank n score.
- score : float

Deprecated since version 0.6.9: fit_predict_score will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency. Scoring can be done by calling an evaluation method, e.g., AUC ROC.

`get_params(deep=True)`

Get parameters for this estimator.

See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information.

###### Parameters

- deep : bool, optional (default=True). If True, will return the parameters for this estimator and contained subobjects that are estimators.

###### Returns

- params : mapping of string to any. Parameter names mapped to their values.

`predict(X, return_confidence=False)`

Predict if a particular sample is an outlier or not.
###### Parameters

- X : numpy array of shape (n_samples, n_features). The input samples.
- return_confidence : boolean, optional (default=False). If True, also return the confidence of prediction.

###### Returns

- outlier_labels : numpy array of shape (n_samples,). For each observation, tells whether it should be considered as an outlier according to the fitted model: 0 stands for inliers and 1 for outliers.
- confidence : numpy array of shape (n_samples,). Only if return_confidence is set to True.

`predict_confidence(X)`

Predict the model's confidence in making the same prediction under slightly different training sets. See [BPVD20].

###### Parameters

- X : numpy array of shape (n_samples, n_features). The input samples.

###### Returns

- confidence : numpy array of shape (n_samples,). For each observation, tells how consistently the model would make the same prediction if the training set was perturbed. Returns a probability, ranging in [0,1].

`predict_proba(X, method='linear', return_confidence=False)`

Predict the probability of a sample being an outlier.

Two approaches are possible:

1. simply use min-max conversion to linearly transform the outlier scores into the range of [0,1]. The model must be fitted first.
2. use unifying scores, see [BKKSZ11].

###### Parameters

- X : numpy array of shape (n_samples, n_features). The input samples.
- method : str, optional (default='linear'). Probability conversion method. It must be one of 'linear' or 'unify'.
- return_confidence : boolean, optional (default=False). If True, also return the confidence of prediction.

###### Returns

- outlier_probability : numpy array of shape (n_samples, n_classes). For each observation, tells whether or not it should be considered as an outlier according to the fitted model. Returns the outlier probability, ranging in [0,1]. Note it depends on the number of classes, which is by default 2 classes ([proba of normal, proba of outliers]).

`set_params(**params)`

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form `<component>__<parameter>` so that it's possible to update each component of a nested object.

See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information.

###### Returns

- self : object

`pyod.models.copod.skew(X, axis=0)`

#### pyod.models.deep_svdd module

Deep One-Class Classification for outlier detection

*class* `pyod.models.deep_svdd.DeepSVDD(c=None, use_ae=False, hidden_neurons=None, hidden_activation='relu', output_activation='sigmoid', optimizer='adam', epochs=100, batch_size=32, dropout_rate=0.2, l2_regularizer=0.1, validation_size=0.1, preprocessing=True, verbose=1, random_state=None, contamination=0.1)`

Bases: `BaseDetector`

Deep One-Class Classifier with AutoEncoder (AE) is a type of neural network for learning useful data representations in an unsupervised way.
DeepSVDD trains a neural network while minimizing the volume of a hypersphere that encloses the network representations of the data, forcing the network to extract the common factors of variation. Similar to PCA, DeepSVDD can be used to detect outlying objects in the data by calculating the distance from the center. See [BRVG+18] for details.

##### Parameters

- c : float, optional (default=None). The Deep SVDD center. If None, the center is calculated from the network's first forward pass after initialization; set random_state to get repeatable results in that case.
- use_ae : bool, optional (default=False). If set to True, DeepSVDD uses an autoencoder architecture, mirroring the neurons from hidden_neurons for the decoder.
- hidden_neurons : list, optional (default=[64, 32]). The number of neurons per hidden layer. If use_ae is True, the neurons are mirrored, e.g. [64, 32] -> [64, 32, 32, 64, n_features].
- hidden_activation : str, optional (default='relu'). Activation function to use for the hidden layers. All hidden layers are forced to use the same type of activation. See <https://keras.io/activations/>.
- output_activation : str, optional (default='sigmoid'). Activation function to use for the output layer. See <https://keras.io/activations/>.
- optimizer : str, optional (default='adam'). String (name of optimizer) or optimizer instance. See <https://keras.io/optimizers/>.
- epochs : int, optional (default=100). Number of epochs to train the model.
- batch_size : int, optional (default=32). Number of samples per gradient update.
- dropout_rate : float in (0., 1), optional (default=0.2). The dropout to be used across all layers.
- l2_regularizer : float in (0., 1), optional (default=0.1). The regularization strength of the activity_regularizer applied on each layer. By default, the l2 regularizer is used. See <https://keras.io/regularizers/>.
- validation_size : float in (0., 1), optional (default=0.1). The percentage of data to be used for validation.
- preprocessing : bool, optional (default=True). If True, apply standardization on the data.
- verbose : int, optional (default=1). Verbosity mode: 0 = silent, 1 = progress bar, 2 = one line per epoch. For verbose >= 1, a model summary may be printed.
- random_state : int, RandomState instance or None, optional (default=None). If int, random_state is the seed used by the random number generator; if RandomState instance, random_state is the random number generator; if None, the random number generator is the RandomState instance used by np.random.
- contamination : float in (0., 0.5), optional (default=0.1). The amount of contamination of the data set, i.e. the proportion of outliers in the data set. When fitting, this is used to define the threshold on the decision function.

##### Attributes

- model_ : Keras object. The underlying DeepSVDD model in Keras.
- history_ : Keras object. The model training history.
- decision_scores_ : numpy array of shape (n_samples,). The outlier scores of the training data. The higher, the more abnormal. Outliers tend to have higher scores. This value is available once the detector is fitted.
- threshold_ : float. The threshold is based on `contamination`. It is the `n_samples * contamination` most abnormal samples in `decision_scores_`. The threshold is calculated for generating binary outlier labels.
- labels_ : int, either 0 or 1. The binary labels of the training data: 0 stands for inliers and 1 for outliers/anomalies. It is generated by applying `threshold_` on `decision_scores_`.
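A minimal fit-and-score sketch following the signature and attributes documented above (the data is made up, and the training settings are illustrative; a Keras/TensorFlow backend is assumed to be installed):

```
import numpy as np
from pyod.models.deep_svdd import DeepSVDD

rng = np.random.RandomState(42)
# Made-up data: 200 inliers around the origin plus 20 scattered outliers
X_train = np.vstack([rng.randn(200, 5),
                     rng.uniform(-6, 6, size=(20, 5))])

clf = DeepSVDD(epochs=10, contamination=0.1, random_state=42, verbose=0)
clf.fit(X_train)              # center c is computed from the first forward pass

scores = clf.decision_scores_ # distances to the hypersphere center; higher = more abnormal
labels = clf.labels_          # 0 = inlier, 1 = outlier, derived via threshold_
```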
`decision_function(X)`

Predict raw anomaly score of X using the fitted detector.

The anomaly score of an input sample is computed based on different detector algorithms. For consistency, outliers are assigned with larger anomaly scores.

###### Parameters

- X : numpy array of shape (n_samples, n_features). The training input samples. Sparse matrices are accepted only if they are supported by the base estimator.

###### Returns

- anomaly_scores : numpy array of shape (n_samples,). The anomaly score of the input samples.

`fit(X, y=None)`

Fit detector. y is ignored in unsupervised methods.

###### Parameters

- X : numpy array of shape (n_samples, n_features). The input samples.
- y : Ignored. Not used, present for API consistency by convention.

###### Returns

- self : object. Fitted estimator.

`fit_predict(X, y=None)`

DEPRECATED. Fit detector first and then predict whether a particular sample is an outlier or not. y is ignored in unsupervised models.

- X : numpy array of shape (n_samples, n_features). The input samples.
- y : Ignored. Not used, present for API consistency by convention.
- outlier_labels : numpy array of shape (n_samples,). For each observation, tells whether it should be considered as an outlier according to the fitted model: 0 stands for inliers and 1 for outliers.

Deprecated since version 0.6.9: fit_predict will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency.

`fit_predict_score(X, y, scoring='roc_auc_score')`

DEPRECATED. Fit the detector, predict on samples, and evaluate the model by predefined metrics, e.g., ROC.

- X : numpy array of shape (n_samples, n_features). The input samples.
- y : Ignored. Not used, present for API consistency by convention.
- scoring : str, optional (default='roc_auc_score'). Evaluation metric: 'roc_auc_score' for ROC score, 'prc_n_score' for Precision @ rank n score.
- score : float

Deprecated since version 0.6.9: fit_predict_score will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency. Scoring can be done by calling an evaluation method, e.g., AUC ROC.

`get_params(deep=True)`

Get parameters for this estimator.

See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information.

###### Parameters

- deep : bool, optional (default=True). If True, will return the parameters for this estimator and contained subobjects that are estimators.

###### Returns

- params : mapping of string to any. Parameter names mapped to their values.

`predict(X, return_confidence=False)`

Predict if a particular sample is an outlier or not.

###### Parameters

- X : numpy array of shape (n_samples, n_features). The input samples.
- return_confidence : boolean, optional (default=False). If True, also return the confidence of prediction.

###### Returns

- outlier_labels : numpy array of shape (n_samples,). For each observation, tells whether it should be considered as an outlier according to the fitted model: 0 stands for inliers and 1 for outliers.
- confidence : numpy array of shape (n_samples,). Only if return_confidence is set to True.

`predict_confidence(X)`

Predict the model's confidence in making the same prediction under slightly different training sets. See [BPVD20].

###### Parameters

- X : numpy array of shape (n_samples, n_features). The input samples.

###### Returns

- confidence : numpy array of shape (n_samples,). For each observation, tells how consistently the model would make the same prediction if the training set was perturbed. Returns a probability, ranging in [0,1].

`predict_proba(X, method='linear', return_confidence=False)`

Predict the probability of a sample being an outlier.

Two approaches are possible:

1. simply use min-max conversion to linearly transform the outlier scores into the range of [0,1]. The model must be fitted first.
2. use unifying scores, see [BKKSZ11].

###### Parameters

- X : numpy array of shape (n_samples, n_features). The input samples.
- method : str, optional (default='linear'). Probability conversion method. It must be one of 'linear' or 'unify'.
- return_confidence : boolean, optional (default=False). If True, also return the confidence of prediction.

###### Returns

- outlier_probability : numpy array of shape (n_samples, n_classes). For each observation, tells whether or not it should be considered as an outlier according to the fitted model. Returns the outlier probability, ranging in [0,1]. Note it depends on the number of classes, which is by default 2 classes ([proba of normal, proba of outliers]).

`set_params(**params)`

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form `<component>__<parameter>` so that it's possible to update each component of a nested object.

See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information.

###### Returns

- self : object

#### pyod.models.ecod module

Unsupervised Outlier Detection Using Empirical Cumulative Distribution Functions (ECOD)

*class* `pyod.models.ecod.ECOD(contamination=0.1, n_jobs=1)`

Bases: `BaseDetector`

ECOD class for Unsupervised Outlier Detection Using Empirical Cumulative Distribution Functions (ECOD). ECOD is a parameter-free, highly interpretable outlier detection algorithm based on empirical CDF functions. See [] for details.

##### Parameters

- contamination : float in (0., 0.5), optional (default=0.1). The amount of contamination of the data set, i.e. the proportion of outliers in the data set. Used when fitting to define the threshold on the decision function.
- n_jobs : optional (default=1). The number of jobs to run in parallel for both fit and predict. If -1, then the number of jobs is set to the number of cores.

##### Attributes

- decision_scores_ : numpy array of shape (n_samples,). The outlier scores of the training data. The higher, the more abnormal. Outliers tend to have higher scores. This value is available once the detector is fitted.
- threshold_ : float. The threshold is based on `contamination`. It is the `n_samples * contamination` most abnormal samples in `decision_scores_`.
The threshold is calculated for generating binary outlier labels.
- labels_ : int, either 0 or 1. The binary labels of the training data: 0 stands for inliers and 1 for outliers/anomalies. It is generated by applying `threshold_` on `decision_scores_`.

`decision_function(X)`

Predict raw anomaly score of X using the fitted detector. For consistency, outliers are assigned with larger anomaly scores.

###### Parameters

- X : numpy array of shape (n_samples, n_features). The training input samples. Sparse matrices are accepted only if they are supported by the base estimator.

###### Returns

- anomaly_scores : numpy array of shape (n_samples,). The anomaly score of the input samples.

`explain_outlier(ind, columns=None, cutoffs=None, feature_names=None, file_name=None, file_type=None)`

Plot a dimensional outlier graph for a given data point within the dataset.

###### Parameters

- ind : int. The index of the data point for which to obtain a dimensional outlier graph.
- columns : list. Specify a list of features/dimensions for plotting. If not specified, use all features.
- cutoffs : list of floats in (0., 1), optional (default=[0.95, 0.99]). The significance cutoff bands of the dimensional outlier graph.
- feature_names : list of strings. The display names of all columns of the dataset, to show on the x-axis of the plot.
- file_name : string. The name under which to save the figure.
- file_type : string. The file type with which to save the figure.

###### Returns

- Plot : matplotlib plot. The dimensional outlier graph for the data point with index ind.

`fit(X, y=None)`

Fit detector. y is ignored in unsupervised methods.

###### Parameters

- X : numpy array of shape (n_samples, n_features). The input samples.
- y : Ignored. Not used, present for API consistency by convention.

###### Returns

- self : object. Fitted estimator.

`fit_predict(X, y=None)`

DEPRECATED. Fit detector first and then predict whether a particular sample is an outlier or not. y is ignored in unsupervised models.

- X : numpy array of shape (n_samples, n_features). The input samples.
- y : Ignored. Not used, present for API consistency by convention.
- outlier_labels : numpy array of shape (n_samples,). For each observation, tells whether it should be considered as an outlier according to the fitted model: 0 stands for inliers and 1 for outliers.

Deprecated since version 0.6.9: fit_predict will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency.

`fit_predict_score(X, y, scoring='roc_auc_score')`

DEPRECATED. Fit the detector, predict on samples, and evaluate the model by predefined metrics, e.g., ROC.

- X : numpy array of shape (n_samples, n_features). The input samples.
- y : Ignored. Not used, present for API consistency by convention.
- scoring : str, optional (default='roc_auc_score'). Evaluation metric: 'roc_auc_score' for ROC score, 'prc_n_score' for Precision @ rank n score.
- score : float

Deprecated since version 0.6.9: fit_predict_score will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency. Scoring can be done by calling an evaluation method, e.g., AUC ROC.
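The deprecation notes above point to the recommended replacement pattern: call fit first, then read `labels_`. A short sketch with made-up data:

```
import numpy as np
from pyod.models.ecod import ECOD

rng = np.random.RandomState(0)
# Made-up data: 100 inliers plus 5 obvious outliers shifted far away
X = np.vstack([rng.randn(100, 3), rng.randn(5, 3) + 8])

clf = ECOD(contamination=0.05)
clf.fit(X)               # instead of the deprecated fit_predict(X)
labels = clf.labels_     # binary labels derived by applying threshold_
scores = clf.decision_scores_
```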
`get_params(deep=True)`

Get parameters for this estimator.

See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information.

###### Parameters

- deep : bool, optional (default=True). If True, will return the parameters for this estimator and contained subobjects that are estimators.

###### Returns

- params : mapping of string to any. Parameter names mapped to their values.

`predict(X, return_confidence=False)`

Predict if a particular sample is an outlier or not.

###### Parameters

- X : numpy array of shape (n_samples, n_features). The input samples.
- return_confidence : boolean, optional (default=False). If True, also return the confidence of prediction.

###### Returns

- outlier_labels : numpy array of shape (n_samples,). For each observation, tells whether it should be considered as an outlier according to the fitted model: 0 stands for inliers and 1 for outliers.
- confidence : numpy array of shape (n_samples,). Only if return_confidence is set to True.

`predict_confidence(X)`

Predict the model's confidence in making the same prediction under slightly different training sets. See [BPVD20].

###### Parameters

- X : numpy array of shape (n_samples, n_features). The input samples.

###### Returns

- confidence : numpy array of shape (n_samples,). For each observation, tells how consistently the model would make the same prediction if the training set was perturbed. Returns a probability, ranging in [0,1].

`predict_proba(X, method='linear', return_confidence=False)`

Predict the probability of a sample being an outlier.

Two approaches are possible:

1. simply use min-max conversion to linearly transform the outlier scores into the range of [0,1]. The model must be fitted first.
2. use unifying scores, see [BKKSZ11].

###### Parameters

- X : numpy array of shape (n_samples, n_features). The input samples.
- method : str, optional (default='linear'). Probability conversion method. It must be one of 'linear' or 'unify'.
- return_confidence : boolean, optional (default=False). If True, also return the confidence of prediction.

###### Returns

- outlier_probability : numpy array of shape (n_samples, n_classes). For each observation, tells whether or not it should be considered as an outlier according to the fitted model. Returns the outlier probability, ranging in [0,1]. Note it depends on the number of classes, which is by default 2 classes ([proba of normal, proba of outliers]).

`set_params(**params)`

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form `<component>__<parameter>` so that it's possible to update each component of a nested object.

See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information.
###### Returns

- self : object

`pyod.models.ecod.skew(X, axis=0)`

#### pyod.models.feature_bagging module

Feature bagging detector

*class* `pyod.models.feature_bagging.FeatureBagging(base_estimator=None, n_estimators=10, contamination=0.1, max_features=1.0, bootstrap_features=False, check_detector=True, check_estimator=False, n_jobs=1, random_state=None, combination='average', verbose=0, estimator_params=None)`

Bases: `BaseDetector`

A feature bagging detector is a meta-estimator that fits a number of base detectors on various sub-samples of the dataset and uses averaging or other combination methods to improve the predictive accuracy and control over-fitting.

The sub-sample size is always the same as the original input sample size, but the features are randomly sampled from half of the features up to all features.

By default, LOF is used as the base estimator. However, any estimator can be used as the base estimator, such as kNN or ABOD.

Feature bagging first constructs n subsamples by randomly selecting a subset of features, which induces the diversity of the base estimators. Finally, the prediction score is generated by averaging or taking the maximum over all base detectors. See [BLK05] for details.

##### Parameters

- base_estimator : object or None, optional (default=None). The base estimator to fit on random subsets of the dataset. If None, then the base estimator is a LOF detector.
- n_estimators : int, optional (default=10). The number of base estimators in the ensemble.
- contamination : float in (0., 0.5), optional (default=0.1). The amount of contamination of the data set, i.e. the proportion of outliers in the data set. Used when fitting to define the threshold on the decision function.
- max_features : int or float, optional (default=1.0). The number of features to draw from X to train each base estimator: if int, then draw max_features features; if float, then draw max_features * X.shape[1] features.
- bootstrap_features : bool, optional (default=False). Whether features are drawn with replacement.
- check_detector : bool, optional (default=True). If set to True, check whether the base estimator is consistent with the pyod standard.
- check_estimator : bool, optional (default=False). If set to True, check whether the base estimator is consistent with the sklearn standard. Deprecated since version 0.6.9: check_estimator will be removed in pyod 0.8.0; it will be replaced by check_detector.
- n_jobs : optional (default=1). The number of jobs to run in parallel for both fit and predict. If -1, then the number of jobs is set to the number of cores.
- random_state : int, RandomState or None, optional (default=None). If int, random_state is the seed used by the random number generator; if RandomState instance, random_state is the random number generator; if None, the random number generator is the RandomState instance used by np.random.
- combination : str, optional (default='average'). The method of combination: if 'average', take the average of all detectors; if 'max', take the maximum scores of all detectors.
- verbose : int, optional (default=0). Controls the verbosity of the building process.
- estimator_params : dict, optional (default=None). The list of attributes to use as parameters when instantiating a new base estimator.
#### pyod.models.feature_bagging module[#](#module-pyod.models.feature_bagging)
Feature bagging detector
*class* pyod.models.feature_bagging.FeatureBagging(*base_estimator=None*, *n_estimators=10*, *contamination=0.1*, *max_features=1.0*, *bootstrap_features=False*, *check_detector=True*, *check_estimator=False*, *n_jobs=1*, *random_state=None*, *combination='average'*, *verbose=0*, *estimator_params=None*)[[source]](_modules/pyod/models/feature_bagging.html#FeatureBagging)[#](#pyod.models.feature_bagging.FeatureBagging)
Bases: [`BaseDetector`](index.html#pyod.models.base.BaseDetector)
A feature bagging detector is a meta estimator that fits a number of base detectors on various sub-samples of the dataset and uses averaging or other combination methods to improve predictive accuracy and control over-fitting. The sub-sample size is always the same as the original input sample size, but the features are randomly sampled from half of the features up to all features. By default, LOF is used as the base estimator. However, any estimator can be used as the base estimator, such as kNN and ABOD. Feature bagging first constructs n subsamples by randomly selecting a subset of features, which induces diversity among the base estimators. Finally, the prediction score is generated by averaging or taking the maximum over all base detectors. See [[BLK05](#id805)] for details.
##### Parameters[#](#id211)
base_estimatorobject or None, optional (default=None)The base estimator to fit on random subsets of the dataset. If None, then the base estimator is a LOF detector. n_estimatorsint, optional (default=10)The number of base estimators in the ensemble. contaminationfloat in (0., 0.5), optional (default=0.1)The amount of contamination of the data set, i.e. the proportion of outliers in the data set. Used when fitting to define the threshold on the decision function. max_featuresint or float, optional (default=1.0)The number of features to draw from X to train each base estimator. * If int, then draw max_features features. * If float, then draw max_features * X.shape[1] features. bootstrap_featuresbool, optional (default=False)Whether features are drawn with replacement. check_detectorbool, optional (default=True)If set to True, check whether the base estimator is consistent with the pyod standard. check_estimatorbool, optional (default=False)If set to True, check whether the base estimator is consistent with the sklearn standard. Deprecated since version 0.6.9: check_estimator will be removed in pyod 0.8.0; it will be replaced by check_detector. n_jobsoptional (default=1)The number of jobs to run in parallel for both fit and predict. If -1, then the number of jobs is set to the number of cores. random_stateint, RandomState or None, optional (default=None)If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random. combinationstr, optional (default=’average’)The method of combination: * if ‘average’: take the average of all detectors * if ‘max’: take the maximum scores of all detectors verboseint, optional (default=0)Controls the verbosity of the building process. estimator_paramsdict, optional (default=None)The list of attributes to use as parameters when instantiating a new base estimator. If none are given, default parameters are used.
##### Attributes[#](#id212)
[decision_scores_](#id958)numpy array of shape (n_samples,)The outlier scores of the training data. The higher, the more abnormal. Outliers tend to have higher scores. This value is available once the detector is fitted. [threshold_](#id960)floatThe threshold is based on `contamination`. It is the `n_samples * contamination` most abnormal samples in `decision_scores_`. The threshold is calculated for generating binary outlier labels. [labels_](#id962)int, either 0 or 1The binary labels of the training data. 0 stands for inliers and 1 for outliers/anomalies. It is generated by applying `threshold_` on `decision_scores_`.
decision_function(*X*)[[source]](_modules/pyod/models/feature_bagging.html#FeatureBagging.decision_function)[#](#pyod.models.feature_bagging.FeatureBagging.decision_function)
Predict raw anomaly score of X using the fitted detector. The anomaly score of an input sample is computed based on different detector algorithms. For consistency, outliers are assigned with larger anomaly scores.
###### Parameters[#](#id213)
Xnumpy array of shape (n_samples, n_features)The training input samples. Sparse matrices are accepted only if they are supported by the base estimator.
###### Returns[#](#id214)
anomaly_scoresnumpy array of shape (n_samples,)The anomaly score of the input samples.
fit(*X*, *y=None*)[[source]](_modules/pyod/models/feature_bagging.html#FeatureBagging.fit)[#](#pyod.models.feature_bagging.FeatureBagging.fit)
Fit detector. y is ignored in unsupervised methods.
###### Parameters[#](#id215)
Xnumpy array of shape (n_samples, n_features)The input samples. yIgnoredNot used, present for API consistency by convention.
###### Returns[#](#id216)
selfobjectFitted estimator.
fit_predict(*X*, *y=None*)[#](#pyod.models.feature_bagging.FeatureBagging.fit_predict)
DEPRECATED Fit detector first and then predict whether a particular sample is an outlier or not. y is ignored in unsupervised models. Xnumpy array of shape (n_samples, n_features)The input samples. yIgnoredNot used, present for API consistency by convention. outlier_labelsnumpy array of shape (n_samples,)For each observation, tells whether it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers. Deprecated since version 0.6.9: fit_predict will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency.
fit_predict_score(*X*, *y*, *scoring='roc_auc_score'*)[#](#pyod.models.feature_bagging.FeatureBagging.fit_predict_score)
DEPRECATED Fit the detector, predict on samples, and evaluate the model by predefined metrics, e.g., ROC. Xnumpy array of shape (n_samples, n_features)The input samples. yIgnoredNot used, present for API consistency by convention. scoringstr, optional (default=’roc_auc_score’)Evaluation metric: * ‘roc_auc_score’: ROC score * ‘prc_n_score’: Precision @ rank n score score : float Deprecated since version 0.6.9: fit_predict_score will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency. Scoring can be done by calling an evaluation method, e.g., AUC ROC.
get_params(*deep=True*)[#](#pyod.models.feature_bagging.FeatureBagging.get_params)
Get parameters for this estimator. See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information.
###### Parameters[#](#id217)
deepbool, optional (default=True)If True, will return the parameters for this estimator and contained subobjects that are estimators.
###### Returns[#](#id218)
paramsmapping of string to anyParameter names mapped to their values.
predict(*X*, *return_confidence=False*)[#](#pyod.models.feature_bagging.FeatureBagging.predict)
Predict if a particular sample is an outlier or not.
###### Parameters[#](#id219)
Xnumpy array of shape (n_samples, n_features)The input samples. return_confidenceboolean, optional (default=False)If True, also return the confidence of prediction.
###### Returns[#](#id220)
outlier_labelsnumpy array of shape (n_samples,)For each observation, tells whether it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers. confidencenumpy array of shape (n_samples,)Only if return_confidence is set to True.
predict_confidence(*X*)[#](#pyod.models.feature_bagging.FeatureBagging.predict_confidence)
Predict the model’s confidence in making the same prediction under slightly different training sets. See [[BPVD20](#id839)].
###### Parameters[#](#id222)
Xnumpy array of shape (n_samples, n_features)The input samples.
###### Returns[#](#id223)
confidencenumpy array of shape (n_samples,)For each observation, tells how consistently the model would make the same prediction if the training set was perturbed. Return a probability, ranging in [0,1].
predict_proba(*X*, *method='linear'*, *return_confidence=False*)[#](#pyod.models.feature_bagging.FeatureBagging.predict_proba)
Predict the probability of a sample being outlier. Two approaches are possible: 1. simply use Min-max conversion to linearly transform the outlier scores into the range of [0,1]. The model must be fitted first. 2. use unifying scores, see [[BKKSZ11](#id800)].
###### Parameters[#](#id225)
Xnumpy array of shape (n_samples, n_features)The input samples. methodstr, optional (default=’linear’)probability conversion method. It must be one of ‘linear’ or ‘unify’. return_confidenceboolean, optional (default=False)If True, also return the confidence of prediction.
###### Returns[#](#id226)
outlier_probabilitynumpy array of shape (n_samples, n_classes)For each observation, tells whether or not it should be considered as an outlier according to the fitted model. Return the outlier probability, ranging in [0,1]. Note it depends on the number of classes, which is by default 2 classes ([proba of normal, proba of outliers]).
set_params(***params*)[#](#pyod.models.feature_bagging.FeatureBagging.set_params)
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object. See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information.
###### Returns[#](#id227)
self : object
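A minimal usage sketch of FeatureBagging, here assuming a kNN detector substituted for the default LOF base estimator (the data and variable names are illustrative):

```python
import numpy as np
from pyod.models.feature_bagging import FeatureBagging
from pyod.models.knn import KNN

rng = np.random.RandomState(0)
X = np.vstack([rng.randn(200, 6), rng.randn(10, 6) + 4])

# default base estimator is LOF; a kNN detector is substituted here
clf = FeatureBagging(base_estimator=KNN(n_neighbors=10), n_estimators=10,
                     combination='average', contamination=0.05, random_state=0)
clf.fit(X)

print(clf.labels_.sum())             # number of training points flagged as outliers
print(clf.decision_function(X[-3:])) # scores averaged across the base detectors
```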
#### pyod.models.gmm module[#](#module-pyod.models.gmm)
Outlier detection based on Gaussian Mixture Model (GMM).
*class* pyod.models.gmm.GMM(*n_components=1*, *covariance_type='full'*, *tol=0.001*, *reg_covar=1e-06*, *max_iter=100*, *n_init=1*, *init_params='kmeans'*, *weights_init=None*, *means_init=None*, *precisions_init=None*, *random_state=None*, *warm_start=False*, *contamination=0.1*)[[source]](_modules/pyod/models/gmm.html#GMM)[#](#pyod.models.gmm.GMM)
Bases: [`BaseDetector`](index.html#pyod.models.base.BaseDetector)
Wrapper of scikit-learn Gaussian Mixture Model with more functionalities. Unsupervised Outlier Detection. See [[BAgg15](#id808)] Chapter 2 for details.
##### Parameters[#](#id229)
n_componentsint, default=1The number of mixture components. covariance_type{‘full’, ‘tied’, ‘diag’, ‘spherical’}, default=’full’String describing the type of covariance parameters to use. tolfloat, default=1e-3The convergence threshold. EM iterations will stop when the lower bound average gain is below this threshold. reg_covarfloat, default=1e-6Non-negative regularization added to the diagonal of covariance. This ensures that the covariance matrices are all positive. max_iterint, default=100The number of EM iterations to perform. n_initint, default=1The number of initializations to perform. The best results are kept. init_params{‘kmeans’, ‘random’}, default=’kmeans’The method used to initialize the weights, the means and the precisions. weights_initarray-like of shape (n_components, ), default=NoneThe user-provided initial weights. If it is None, weights are initialized using the init_params method. means_initarray-like of shape (n_components, n_features), default=NoneThe user-provided initial means. If it is None, means are initialized using the init_params method. precisions_initarray-like, default=NoneThe user-provided initial precisions (inverse of the covariance matrices). If it is None, precisions are initialized using the ‘init_params’ method. random_stateint, RandomState instance or None, default=NoneControls the random seed given to the method chosen to initialize the parameters. warm_startbool, default=FalseIf ‘warm_start’ is True, the solution of the last fitting is used as initialization for the next call of fit(). verboseint, default=0Enable verbose output. verbose_intervalint, default=10Number of iterations done before the next print. contaminationfloat in (0., 0.5), optional (default=0.1)The amount of contamination of the data set.
##### Attributes[#](#id230)
[weights_](#id964)array-like of shape (n_components,)The weights of each mixture component. [means_](#id966)array-like of shape (n_components, n_features)The mean of each mixture component. [covariances_](#id968)array-likeThe covariance of each mixture component. [precisions_](#id970)array-likeThe precision matrices for each component in the mixture. [precisions_cholesky_](#id972)array-likeThe Cholesky decomposition of the precision matrices of each mixture component. [converged_](#id974)boolTrue when convergence was reached in fit(), False otherwise. [n_iter_](#id976)intNumber of steps used by the best fit of EM to reach convergence. [lower_bound_](#id978)floatLower bound value on the log-likelihood (of the training data with respect to the model) of the best fit of EM. [decision_scores_](#id980)numpy array of shape (n_samples,)The outlier scores of the training data. [threshold_](#id982)floatThe threshold is based on `contamination`. It is the `n_samples * contamination` most abnormal samples in `decision_scores_`. The threshold is calculated for generating binary outlier labels.
[labels_](#id984)int, either 0 or 1The binary labels of the training data. 0 stands for inliers and 1 for outliers/anomalies. It is generated by applying `threshold_` on `decision_scores_`.
*property* converged_[#](#pyod.models.gmm.GMM.converged_)
True when convergence was reached in fit(), False otherwise. Decorator for scikit-learn Gaussian Mixture Model attributes.
*property* covariances_[#](#pyod.models.gmm.GMM.covariances_)
The covariance of each mixture component. Decorator for scikit-learn Gaussian Mixture Model attributes.
decision_function(*X*)[[source]](_modules/pyod/models/gmm.html#GMM.decision_function)[#](#pyod.models.gmm.GMM.decision_function)
Predict raw anomaly score of X using the fitted detector. The anomaly score of an input sample is computed based on different detector algorithms. For consistency, outliers are assigned with larger anomaly scores.
###### Parameters[#](#id231)
Xnumpy array of shape (n_samples, n_features)The training input samples. Sparse matrices are accepted only if they are supported by the base estimator.
###### Returns[#](#id232)
anomaly_scoresnumpy array of shape (n_samples,)The anomaly score of the input samples.
fit(*X*, *y=None*)[[source]](_modules/pyod/models/gmm.html#GMM.fit)[#](#pyod.models.gmm.GMM.fit)
Fit detector. y is ignored in unsupervised methods.
###### Parameters[#](#id233)
Xnumpy array of shape (n_samples, n_features)The input samples. yIgnoredNot used, present for API consistency by convention.
###### Returns[#](#id234)
selfobjectFitted estimator.
fit_predict(*X*, *y=None*)[#](#pyod.models.gmm.GMM.fit_predict)
DEPRECATED Fit detector first and then predict whether a particular sample is an outlier or not. y is ignored in unsupervised models. Xnumpy array of shape (n_samples, n_features)The input samples. yIgnoredNot used, present for API consistency by convention. outlier_labelsnumpy array of shape (n_samples,)For each observation, tells whether it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers. Deprecated since version 0.6.9: fit_predict will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency.
fit_predict_score(*X*, *y*, *scoring='roc_auc_score'*)[#](#pyod.models.gmm.GMM.fit_predict_score)
DEPRECATED Fit the detector, predict on samples, and evaluate the model by predefined metrics, e.g., ROC. Xnumpy array of shape (n_samples, n_features)The input samples. yIgnoredNot used, present for API consistency by convention. scoringstr, optional (default=’roc_auc_score’)Evaluation metric: * ‘roc_auc_score’: ROC score * ‘prc_n_score’: Precision @ rank n score score : float Deprecated since version 0.6.9: fit_predict_score will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency. Scoring can be done by calling an evaluation method, e.g., AUC ROC.
get_params(*deep=True*)[#](#pyod.models.gmm.GMM.get_params)
Get parameters for this estimator. See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information.
###### Parameters[#](#id235)
deepbool, optional (default=True)If True, will return the parameters for this estimator and contained subobjects that are estimators.
###### Returns[#](#id236)
paramsmapping of string to anyParameter names mapped to their values.
*property* lower_bound_[#](#pyod.models.gmm.GMM.lower_bound_)
Lower bound value on the log-likelihood of the best fit of EM. Decorator for scikit-learn Gaussian Mixture Model attributes.
*property* means_[#](#pyod.models.gmm.GMM.means_)
The mean of each mixture component. Decorator for scikit-learn Gaussian Mixture Model attributes.
*property* n_iter_[#](#pyod.models.gmm.GMM.n_iter_)
Number of steps used by the best fit of EM to reach convergence. Decorator for scikit-learn Gaussian Mixture Model attributes.
*property* precisions_[#](#pyod.models.gmm.GMM.precisions_)
The precision matrices for each component in the mixture. Decorator for scikit-learn Gaussian Mixture Model attributes.
*property* precisions_cholesky_[#](#pyod.models.gmm.GMM.precisions_cholesky_)
The Cholesky decomposition of the precision matrices of each mixture component. Decorator for scikit-learn Gaussian Mixture Model attributes.
predict(*X*, *return_confidence=False*)[#](#pyod.models.gmm.GMM.predict)
Predict if a particular sample is an outlier or not.
###### Parameters[#](#id237)
Xnumpy array of shape (n_samples, n_features)The input samples. return_confidenceboolean, optional (default=False)If True, also return the confidence of prediction.
###### Returns[#](#id238)
outlier_labelsnumpy array of shape (n_samples,)For each observation, tells whether it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers. confidencenumpy array of shape (n_samples,)Only if return_confidence is set to True.
predict_confidence(*X*)[#](#pyod.models.gmm.GMM.predict_confidence)
Predict the model’s confidence in making the same prediction under slightly different training sets. See [[BPVD20](#id839)].
###### Parameters[#](#id240)
Xnumpy array of shape (n_samples, n_features)The input samples.
###### Returns[#](#id241)
confidencenumpy array of shape (n_samples,)For each observation, tells how consistently the model would make the same prediction if the training set was perturbed. Return a probability, ranging in [0,1].
predict_proba(*X*, *method='linear'*, *return_confidence=False*)[#](#pyod.models.gmm.GMM.predict_proba)
Predict the probability of a sample being outlier. Two approaches are possible: 1. simply use Min-max conversion to linearly transform the outlier scores into the range of [0,1]. The model must be fitted first. 2. use unifying scores, see [[BKKSZ11](#id800)].
###### Parameters[#](#id243)
Xnumpy array of shape (n_samples, n_features)The input samples. methodstr, optional (default=’linear’)probability conversion method. It must be one of ‘linear’ or ‘unify’. return_confidenceboolean, optional (default=False)If True, also return the confidence of prediction.
###### Returns[#](#id244)
outlier_probabilitynumpy array of shape (n_samples, n_classes)For each observation, tells whether or not it should be considered as an outlier according to the fitted model. Return the outlier probability, ranging in [0,1]. Note it depends on the number of classes, which is by default 2 classes ([proba of normal, proba of outliers]).
set_params(***params*)[#](#pyod.models.gmm.GMM.set_params)
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information.
###### Returns[#](#id245)
self : object
*property* weights_[#](#pyod.models.gmm.GMM.weights_)
The weights of each mixture component. Decorator for scikit-learn Gaussian Mixture Model attributes.
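A minimal usage sketch of the GMM detector documented above (the two-cluster data and all variable names are illustrative):

```python
import numpy as np
from pyod.models.gmm import GMM

# two illustrative inlier clusters plus a small group of distant outliers
rng = np.random.RandomState(1)
X = np.vstack([rng.randn(150, 2), rng.randn(150, 2) + 6, rng.randn(10, 2) + 20])

clf = GMM(n_components=2, covariance_type='full', contamination=0.05,
          random_state=1)
clf.fit(X)

print(clf.means_)          # component means, shape (n_components, n_features)
print(clf.converged_)      # True if EM reached convergence
print(clf.predict(X[-5:])) # the distant points should be labelled 1
```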
#### pyod.models.hbos module[#](#module-pyod.models.hbos)
Histogram-based Outlier Detection (HBOS)
*class* pyod.models.hbos.HBOS(*n_bins=10*, *alpha=0.1*, *tol=0.5*, *contamination=0.1*)[[source]](_modules/pyod/models/hbos.html#HBOS)[#](#pyod.models.hbos.HBOS)
Bases: [`BaseDetector`](index.html#pyod.models.base.BaseDetector)
Histogram-based outlier detection (HBOS) is an efficient unsupervised method. It assumes feature independence and calculates the degree of outlyingness by building histograms. See [[BGD12](#id806)] for details. Two versions of HBOS are supported: * Static number of bins: uses a static number of bins for all features. * Automatic number of bins: every feature uses a number of bins deemed to be optimal according to the Birge-Rozenblac method ([[BBirgeR06](#id838)]).
##### Parameters[#](#id248)
n_binsint or string, optional (default=10)The number of bins. “auto” uses the Birge-Rozenblac method for automatic selection of the optimal number of bins for each feature. alphafloat in (0, 1), optional (default=0.1)The regularizer for preventing overflow. tolfloat in (0, 1), optional (default=0.5)The parameter to decide the flexibility while dealing with samples falling outside the bins. contaminationfloat in (0., 0.5), optional (default=0.1)The amount of contamination of the data set, i.e. the proportion of outliers in the data set. Used when fitting to define the threshold on the decision function.
##### Attributes[#](#id249)
[bin_edges_](#id986)numpy array of shape (n_bins + 1, n_features)The edges of the bins. [hist_](#id988)numpy array of shape (n_bins, n_features)The density of each histogram. [decision_scores_](#id990)numpy array of shape (n_samples,)The outlier scores of the training data. The higher, the more abnormal. Outliers tend to have higher scores. This value is available once the detector is fitted. [threshold_](#id992)floatThe threshold is based on `contamination`. It is the `n_samples * contamination` most abnormal samples in `decision_scores_`. The threshold is calculated for generating binary outlier labels. [labels_](#id994)int, either 0 or 1The binary labels of the training data. 0 stands for inliers and 1 for outliers/anomalies. It is generated by applying `threshold_` on `decision_scores_`.
decision_function(*X*)[[source]](_modules/pyod/models/hbos.html#HBOS.decision_function)[#](#pyod.models.hbos.HBOS.decision_function)
Predict raw anomaly score of X using the fitted detector. The anomaly score of an input sample is computed based on different detector algorithms. For consistency, outliers are assigned with larger anomaly scores.
###### Parameters[#](#id250)
Xnumpy array of shape (n_samples, n_features)The training input samples. Sparse matrices are accepted only if they are supported by the base estimator.
###### Returns[#](#id251)
anomaly_scoresnumpy array of shape (n_samples,)The anomaly score of the input samples.
fit(*X*, *y=None*)[[source]](_modules/pyod/models/hbos.html#HBOS.fit)[#](#pyod.models.hbos.HBOS.fit)
Fit detector. y is ignored in unsupervised methods.
###### Parameters[#](#id252)
Xnumpy array of shape (n_samples, n_features)The input samples. yIgnoredNot used, present for API consistency by convention.
###### Returns[#](#id253)
selfobjectFitted estimator.
fit_predict(*X*, *y=None*)[#](#pyod.models.hbos.HBOS.fit_predict)
DEPRECATED Fit detector first and then predict whether a particular sample is an outlier or not. y is ignored in unsupervised models. Xnumpy array of shape (n_samples, n_features)The input samples. yIgnoredNot used, present for API consistency by convention. outlier_labelsnumpy array of shape (n_samples,)For each observation, tells whether it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers. Deprecated since version 0.6.9: fit_predict will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency.
fit_predict_score(*X*, *y*, *scoring='roc_auc_score'*)[#](#pyod.models.hbos.HBOS.fit_predict_score)
DEPRECATED Fit the detector, predict on samples, and evaluate the model by predefined metrics, e.g., ROC. Xnumpy array of shape (n_samples, n_features)The input samples. yIgnoredNot used, present for API consistency by convention. scoringstr, optional (default=’roc_auc_score’)Evaluation metric: * ‘roc_auc_score’: ROC score * ‘prc_n_score’: Precision @ rank n score score : float Deprecated since version 0.6.9: fit_predict_score will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency. Scoring can be done by calling an evaluation method, e.g., AUC ROC.
get_params(*deep=True*)[#](#pyod.models.hbos.HBOS.get_params)
Get parameters for this estimator. See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information.
###### Parameters[#](#id254)
deepbool, optional (default=True)If True, will return the parameters for this estimator and contained subobjects that are estimators.
###### Returns[#](#id255)
paramsmapping of string to anyParameter names mapped to their values.
predict(*X*, *return_confidence=False*)[#](#pyod.models.hbos.HBOS.predict)
Predict if a particular sample is an outlier or not.
###### Parameters[#](#id256)
Xnumpy array of shape (n_samples, n_features)The input samples. return_confidenceboolean, optional (default=False)If True, also return the confidence of prediction.
###### Returns[#](#id257)
outlier_labelsnumpy array of shape (n_samples,)For each observation, tells whether it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers. confidencenumpy array of shape (n_samples,)Only if return_confidence is set to True.
predict_confidence(*X*)[#](#pyod.models.hbos.HBOS.predict_confidence)
Predict the model’s confidence in making the same prediction under slightly different training sets. See [[BPVD20](#id839)].
###### Parameters[#](#id259)
Xnumpy array of shape (n_samples, n_features)The input samples.
###### Returns[#](#id260)
confidencenumpy array of shape (n_samples,)For each observation, tells how consistently the model would make the same prediction if the training set was perturbed. Return a probability, ranging in [0,1].
predict_proba(*X*, *method='linear'*, *return_confidence=False*)[#](#pyod.models.hbos.HBOS.predict_proba)
Predict the probability of a sample being outlier. Two approaches are possible: 1. simply use Min-max conversion to linearly transform the outlier scores into the range of [0,1]. The model must be fitted first. 2. use unifying scores, see [[BKKSZ11](#id800)].
###### Parameters[#](#id262)
Xnumpy array of shape (n_samples, n_features)The input samples. methodstr, optional (default=’linear’)probability conversion method. It must be one of ‘linear’ or ‘unify’. return_confidenceboolean, optional (default=False)If True, also return the confidence of prediction.
###### Returns[#](#id263)
outlier_probabilitynumpy array of shape (n_samples, n_classes)For each observation, tells whether or not it should be considered as an outlier according to the fitted model. Return the outlier probability, ranging in [0,1]. Note it depends on the number of classes, which is by default 2 classes ([proba of normal, proba of outliers]).
set_params(***params*)[#](#pyod.models.hbos.HBOS.set_params)
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object. See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information.
###### Returns[#](#id264)
self : object
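A minimal usage sketch of HBOS (illustrative synthetic data; only constructor arguments and attributes documented above are used):

```python
import numpy as np
from pyod.models.hbos import HBOS

rng = np.random.RandomState(2)
X = np.vstack([rng.randn(300, 3), rng.randn(15, 3) + 5])

clf = HBOS(n_bins=10, alpha=0.1, tol=0.5, contamination=0.05)
clf.fit(X)

print(clf.bin_edges_.shape)  # (n_bins + 1, n_features)
print(clf.hist_.shape)       # (n_bins, n_features)
print(clf.predict(X[-3:]))   # the shifted points should be labelled 1
```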
#### pyod.models.iforest module[#](#module-pyod.models.iforest)
IsolationForest Outlier Detector. Implemented on the scikit-learn library.
*class* pyod.models.iforest.IForest(*n_estimators=100*, *max_samples='auto'*, *contamination=0.1*, *max_features=1.0*, *bootstrap=False*, *n_jobs=1*, *behaviour='old'*, *random_state=None*, *verbose=0*)[[source]](_modules/pyod/models/iforest.html#IForest)[#](#pyod.models.iforest.IForest)
Bases: [`BaseDetector`](index.html#pyod.models.base.BaseDetector)
Wrapper of scikit-learn Isolation Forest with more functionalities. The IsolationForest ‘isolates’ observations by randomly selecting a feature and then randomly selecting a split value between the maximum and minimum values of the selected feature. See [[BLTZ08](#id798), [BLTZ12](#id799)] for details. Since recursive partitioning can be represented by a tree structure, the number of splittings required to isolate a sample is equivalent to the path length from the root node to the terminating node. This path length, averaged over a forest of such random trees, is a measure of normality and our decision function. Random partitioning produces noticeably shorter paths for anomalies. Hence, when a forest of random trees collectively produces shorter path lengths for particular samples, they are highly likely to be anomalies.
##### Parameters[#](#id266)
n_estimatorsint, optional (default=100)The number of base estimators in the ensemble. max_samplesint or float, optional (default=”auto”)The number of samples to draw from X to train each base estimator. * If int, then draw max_samples samples. * If float, then draw max_samples * X.shape[0] samples. * If “auto”, then max_samples=min(256, n_samples). If max_samples is larger than the number of samples provided, all samples will be used for all trees (no sampling). contaminationfloat in (0., 0.5), optional (default=0.1)The amount of contamination of the data set, i.e. the proportion of outliers in the data set. Used when fitting to define the threshold on the decision function. max_featuresint or float, optional (default=1.0)The number of features to draw from X to train each base estimator. * If int, then draw max_features features. * If float, then draw max_features * X.shape[1] features. bootstrapbool, optional (default=False)If True, individual trees are fit on random subsets of the training data sampled with replacement. If False, sampling without replacement is performed. n_jobsinteger, optional (default=1)The number of jobs to run in parallel for both fit and predict. If -1, then the number of jobs is set to the number of cores. behaviourstr, default=’old’Behaviour of the `decision_function`, which can be either ‘old’ or ‘new’. Passing `behaviour='new'` makes the `decision_function` change to match the API of other anomaly detection algorithms, which will be the default behaviour in the future. As explained in detail in the `offset_` attribute documentation, the `decision_function` becomes dependent on the contamination parameter, in such a way that 0 becomes its natural threshold to detect outliers. New in version 0.7.0: `behaviour` is added in 0.7.0 for back-compatibility purposes. Deprecated since version 0.20: `behaviour='old'` is deprecated in sklearn 0.20 and will not be possible in 0.22. Deprecated since version 0.22: the `behaviour` parameter will be deprecated in sklearn 0.22 and removed in 0.24. Warning: only applicable for sklearn 0.20 and above. random_stateint, RandomState instance or None, optional (default=None)If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random. verboseint, optional (default=0)Controls the verbosity of the tree building process.
##### Attributes[#](#id267)
[estimators_](#id996)list of DecisionTreeClassifierThe collection of fitted sub-estimators. [estimators_samples_](#id998)list of arraysThe subset of drawn samples (i.e., the in-bag samples) for each base estimator. [max_samples_](#id1000)integerThe actual number of samples. [decision_scores_](#id1002)numpy array of shape (n_samples,)The outlier scores of the training data. The higher, the more abnormal. Outliers tend to have higher scores. This value is available once the detector is fitted. [threshold_](#id1004)floatThe threshold is based on `contamination`. It is the `n_samples * contamination` most abnormal samples in `decision_scores_`. The threshold is calculated for generating binary outlier labels. [labels_](#id1006)int, either 0 or 1The binary labels of the training data. 0 stands for inliers and 1 for outliers/anomalies. It is generated by applying `threshold_` on `decision_scores_`.
decision_function(*X*)[[source]](_modules/pyod/models/iforest.html#IForest.decision_function)[#](#pyod.models.iforest.IForest.decision_function)
Predict raw anomaly score of X using the fitted detector. The anomaly score of an input sample is computed based on different detector algorithms. For consistency, outliers are assigned with larger anomaly scores.
###### Parameters[#](#id268)
Xnumpy array of shape (n_samples, n_features)The training input samples. Sparse matrices are accepted only if they are supported by the base estimator.
###### Returns[#](#id269)
anomaly_scoresnumpy array of shape (n_samples,)The anomaly score of the input samples.
*property* feature_importances_[#](#pyod.models.iforest.IForest.feature_importances_)
The impurity-based feature importance. The higher, the more important the feature. The importance of a feature is computed as the (normalized) total reduction of the criterion brought by that feature. It is also known as the Gini importance. Impurity-based feature importance can be misleading for high cardinality features (many unique values).
See <https://scikit-learn.org/stable/modules/generated/sklearn.inspection.permutation_importance.html> as an alternative.
###### Returns[#](#id270)
[feature_importances_](#id1008)ndarray of shape (n_features,)The values of this array sum to 1, unless all trees are single node trees consisting of only the root node, in which case it will be an array of zeros.
fit(*X*, *y=None*)[[source]](_modules/pyod/models/iforest.html#IForest.fit)[#](#pyod.models.iforest.IForest.fit)
Fit detector. y is ignored in unsupervised methods.
###### Parameters[#](#id271)
Xnumpy array of shape (n_samples, n_features)The input samples. yIgnoredNot used, present for API consistency by convention.
###### Returns[#](#id272)
selfobjectFitted estimator.
fit_predict(*X*, *y=None*)[#](#pyod.models.iforest.IForest.fit_predict)
DEPRECATED Fit detector first and then predict whether a particular sample is an outlier or not. y is ignored in unsupervised models. Xnumpy array of shape (n_samples, n_features)The input samples. yIgnoredNot used, present for API consistency by convention. outlier_labelsnumpy array of shape (n_samples,)For each observation, tells whether it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers. Deprecated since version 0.6.9: fit_predict will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency.
fit_predict_score(*X*, *y*, *scoring='roc_auc_score'*)[#](#pyod.models.iforest.IForest.fit_predict_score)
DEPRECATED Fit the detector, predict on samples, and evaluate the model by predefined metrics, e.g., ROC. Xnumpy array of shape (n_samples, n_features)The input samples. yIgnoredNot used, present for API consistency by convention. scoringstr, optional (default=’roc_auc_score’)Evaluation metric: * ‘roc_auc_score’: ROC score * ‘prc_n_score’: Precision @ rank n score score : float Deprecated since version 0.6.9: fit_predict_score will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency. Scoring can be done by calling an evaluation method, e.g., AUC ROC.
get_params(*deep=True*)[#](#pyod.models.iforest.IForest.get_params)
Get parameters for this estimator. See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information.
###### Parameters[#](#id273)
deepbool, optional (default=True)If True, will return the parameters for this estimator and contained subobjects that are estimators.
###### Returns[#](#id274)
paramsmapping of string to anyParameter names mapped to their values.
predict(*X*, *return_confidence=False*)[#](#pyod.models.iforest.IForest.predict)
Predict if a particular sample is an outlier or not.
###### Parameters[#](#id275)
Xnumpy array of shape (n_samples, n_features)The input samples. return_confidenceboolean, optional (default=False)If True, also return the confidence of prediction.
###### Returns[#](#id276)
outlier_labelsnumpy array of shape (n_samples,)For each observation, tells whether it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers. confidencenumpy array of shape (n_samples,)Only if return_confidence is set to True.
predict_confidence(*X*)[#](#pyod.models.iforest.IForest.predict_confidence)
Predict the model’s confidence in making the same prediction under slightly different training sets. See [[BPVD20](#id839)].
###### Parameters[#](#id278)
Xnumpy array of shape (n_samples, n_features)The input samples.
###### Returns[#](#id279)
confidencenumpy array of shape (n_samples,)For each observation, tells how consistently the model would make the same prediction if the training set was perturbed. Return a probability, ranging in [0,1].
predict_proba(*X*, *method='linear'*, *return_confidence=False*)[#](#pyod.models.iforest.IForest.predict_proba)
Predict the probability of a sample being outlier. Two approaches are possible: 1. simply use Min-max conversion to linearly transform the outlier scores into the range of [0,1]. The model must be fitted first. 2. use unifying scores, see [[BKKSZ11](#id800)].
###### Parameters[#](#id281)
Xnumpy array of shape (n_samples, n_features)The input samples. methodstr, optional (default=’linear’)probability conversion method. It must be one of ‘linear’ or ‘unify’. return_confidenceboolean, optional (default=False)If True, also return the confidence of prediction.
###### Returns[#](#id282)
outlier_probabilitynumpy array of shape (n_samples, n_classes)For each observation, tells whether or not it should be considered as an outlier according to the fitted model. Return the outlier probability, ranging in [0,1]. Note it depends on the number of classes, which is by default 2 classes ([proba of normal, proba of outliers]).
set_params(***params*)[#](#pyod.models.iforest.IForest.set_params)
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object. See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information.
###### Returns[#](#id283)
self : object
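A minimal usage sketch of IForest, including the return_confidence option of predict documented above (data and variable names are illustrative):

```python
import numpy as np
from pyod.models.iforest import IForest

rng = np.random.RandomState(3)
X = np.vstack([rng.randn(256, 4), rng.randn(12, 4) + 6])

clf = IForest(n_estimators=100, max_samples='auto', contamination=0.05,
              random_state=3)
clf.fit(X)

labels, confidence = clf.predict(X[-5:], return_confidence=True)
print(labels)      # 1 for the shifted points
print(confidence)  # per-sample prediction stability, in [0, 1]
```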
#### pyod.models.inne module[#](#module-pyod.models.inne)
Isolation-based anomaly detection using nearest-neighbor ensembles. Part of the code is adapted from <https://github.com/xhan97/inne>.
*class* pyod.models.inne.INNE(*n_estimators=200*, *max_samples='auto'*, *contamination=0.1*, *random_state=None*)[[source]](_modules/pyod/models/inne.html#INNE)[#](#pyod.models.inne.INNE)
Bases: [`BaseDetector`](index.html#pyod.models.base.BaseDetector)
Isolation-based anomaly detection using nearest-neighbor ensembles. The INNE algorithm uses the nearest-neighbour ensemble to isolate anomalies. It partitions the data space into regions using a subsample and determines an isolation score for each region. As each region adapts to the local distribution, the calculated isolation score is a local measure that is relative to the local neighbourhood, enabling it to detect both global and local anomalies. INNE has linear time complexity to efficiently handle large and high-dimensional datasets with complex distributions. See [[BBTA+18](#id844)] for details.
##### Parameters[#](#id285)
n_estimatorsint, default=200The number of base estimators in the ensemble. max_samplesint or float, optional (default=”auto”)The number of samples to draw from X to train each base estimator. * If int, then draw max_samples samples. * If float, then draw max_samples * X.shape[0] samples. * If “auto”, then max_samples=min(8, n_samples). contaminationfloat in (0., 0.5), optional (default=0.1)The amount of contamination of the data set, i.e. the proportion of outliers in the data set. Used when fitting to define the threshold on the decision function. random_stateint, RandomState instance or None, optional (default=None)If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random.
##### Attributes[#](#id286)
[max_samples_](#id1010)integerThe actual number of samples. [decision_scores_](#id1012)numpy array of shape (n_samples,)The outlier scores of the training data. The higher, the more abnormal. Outliers tend to have higher scores. This value is available once the detector is fitted. [threshold_](#id1014)floatThe threshold is based on `contamination`. It is the `n_samples * contamination` most abnormal samples in `decision_scores_`. The threshold is calculated for generating binary outlier labels. [labels_](#id1016)int, either 0 or 1The binary labels of the training data. 0 stands for inliers and 1 for outliers/anomalies. It is generated by applying `threshold_` on `decision_scores_`.
decision_function(*X*)[[source]](_modules/pyod/models/inne.html#INNE.decision_function)[#](#pyod.models.inne.INNE.decision_function)
Predict raw anomaly score of X using the fitted detector. The anomaly score of an input sample is computed based on different detector algorithms. For consistency, outliers are assigned with larger anomaly scores.
###### Parameters[#](#id287)
Xnumpy array of shape (n_samples, n_features)The input samples.
###### Returns[#](#id288)
anomaly_scoresnumpy array of shape (n_samples,)The anomaly score of the input samples.
fit(*X*, *y=None*)[[source]](_modules/pyod/models/inne.html#INNE.fit)[#](#pyod.models.inne.INNE.fit)
Fit detector. y is ignored in unsupervised methods.
###### Parameters[#](#id289)
Xnumpy array of shape (n_samples, n_features)The input samples. yIgnoredNot used, present for API consistency by convention.
###### Returns[#](#id290)
selfobjectFitted estimator.
fit_predict(*X*, *y=None*)[#](#pyod.models.inne.INNE.fit_predict)
DEPRECATED Fit detector first and then predict whether a particular sample is an outlier or not. y is ignored in unsupervised models. Xnumpy array of shape (n_samples, n_features)The input samples. yIgnoredNot used, present for API consistency by convention. outlier_labelsnumpy array of shape (n_samples,)For each observation, tells whether it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers. Deprecated since version 0.6.9: fit_predict will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency.
fit_predict_score(*X*, *y*, *scoring='roc_auc_score'*)[#](#pyod.models.inne.INNE.fit_predict_score)
DEPRECATED Fit the detector, predict on samples, and evaluate the model by predefined metrics, e.g., ROC. Xnumpy array of shape (n_samples, n_features)The input samples. yIgnoredNot used, present for API consistency by convention. scoringstr, optional (default=’roc_auc_score’)Evaluation metric: * ‘roc_auc_score’: ROC score * ‘prc_n_score’: Precision @ rank n score score : float Deprecated since version 0.6.9: fit_predict_score will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency. Scoring can be done by calling an evaluation method, e.g., AUC ROC.
get_params(*deep=True*)[#](#pyod.models.inne.INNE.get_params)
Get parameters for this estimator.
See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information.
###### Parameters[#](#id291)
deepbool, optional (default=True)If True, will return the parameters for this estimator and contained subobjects that are estimators.
###### Returns[#](#id292)
paramsmapping of string to anyParameter names mapped to their values.
predict(*X*, *return_confidence=False*)[#](#pyod.models.inne.INNE.predict)
Predict if a particular sample is an outlier or not.
###### Parameters[#](#id293)
Xnumpy array of shape (n_samples, n_features)The input samples. return_confidenceboolean, optional (default=False)If True, also return the confidence of prediction.
###### Returns[#](#id294)
outlier_labelsnumpy array of shape (n_samples,)For each observation, tells whether it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers. confidencenumpy array of shape (n_samples,)Only if return_confidence is set to True.
predict_confidence(*X*)[#](#pyod.models.inne.INNE.predict_confidence)
Predict the model’s confidence in making the same prediction under slightly different training sets. See [[BPVD20](#id839)].
###### Parameters[#](#id296)
Xnumpy array of shape (n_samples, n_features)The input samples.
###### Returns[#](#id297)
confidencenumpy array of shape (n_samples,)For each observation, tells how consistently the model would make the same prediction if the training set was perturbed. Return a probability, ranging in [0,1].
predict_proba(*X*, *method='linear'*, *return_confidence=False*)[#](#pyod.models.inne.INNE.predict_proba)
Predict the probability of a sample being outlier. Two approaches are possible: 1. simply use Min-max conversion to linearly transform the outlier scores into the range of [0,1]. The model must be fitted first. 2. use unifying scores, see [[BKKSZ11](#id800)].
###### Parameters[#](#id299)
Xnumpy array of shape (n_samples, n_features)The input samples. methodstr, optional (default=’linear’)probability conversion method. It must be one of ‘linear’ or ‘unify’. return_confidenceboolean, optional (default=False)If True, also return the confidence of prediction.
###### Returns[#](#id300)
outlier_probabilitynumpy array of shape (n_samples, n_classes)For each observation, tells whether or not it should be considered as an outlier according to the fitted model. Return the outlier probability, ranging in [0,1]. Note it depends on the number of classes, which is by default 2 classes ([proba of normal, proba of outliers]).
set_params(***params*)[#](#pyod.models.inne.INNE.set_params)
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object. See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information.
###### Returns[#](#id301)
self : object
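A minimal usage sketch of INNE as documented above (illustrative data and variable names):

```python
import numpy as np
from pyod.models.inne import INNE

rng = np.random.RandomState(6)
X = np.vstack([rng.randn(500, 3), rng.randn(20, 3) + 6])

clf = INNE(n_estimators=200, max_samples='auto', contamination=0.05,
           random_state=6)
clf.fit(X)

print(clf.max_samples_)     # actual subsample size used per base estimator
print(clf.predict(X[-3:]))  # 1 = outlier
```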
#### pyod.models.kde module[#](#module-pyod.models.kde)
Kernel Density Estimation (KDE) for Unsupervised Outlier Detection.
*class* pyod.models.kde.KDE(*contamination=0.1*, *bandwidth=1.0*, *algorithm='auto'*, *leaf_size=30*, *metric='minkowski'*, *metric_params=None*)[[source]](_modules/pyod/models/kde.html#KDE)[#](#pyod.models.kde.KDE)
Bases: [`BaseDetector`](index.html#pyod.models.base.BaseDetector)
KDE class for outlier detection. For an observation, its negative log probability density could be viewed as the outlying score. See [[BLLP07](#id842)] for details.
##### Parameters[#](#id303)
contaminationfloat in (0., 0.5), optional (default=0.1)The amount of contamination of the data set, i.e. the proportion of outliers in the data set. Used when fitting to define the threshold on the decision function. bandwidthfloat, optional (default=1.0)The bandwidth of the kernel. algorithm{‘auto’, ‘ball_tree’, ‘kd_tree’}, optionalAlgorithm used to compute the kernel density estimator: * ‘ball_tree’ will use BallTree * ‘kd_tree’ will use KDTree * ‘auto’ will attempt to decide the most appropriate algorithm based on the values passed to the [`fit()`](#pyod.models.kde.KDE.fit) method. leaf_sizeint, optional (default = 30)Leaf size passed to BallTree. This can affect the speed of the construction and query, as well as the memory required to store the tree. The optimal value depends on the nature of the problem. metricstring or callable, default ‘minkowski’metric to use for distance computation. Any metric from scikit-learn or scipy.spatial.distance can be used. If metric is a callable function, it is called on each pair of instances (rows) and the resulting value recorded. The callable should take two arrays as input and return one value indicating the distance between them. This works for Scipy’s metrics, but is less efficient than passing the metric name as a string. Distance matrices are not supported. Valid values for metric are: * from scikit-learn: [‘cityblock’, ‘cosine’, ‘euclidean’, ‘l1’, ‘l2’, ‘manhattan’] * from scipy.spatial.distance: [‘braycurtis’, ‘canberra’, ‘chebyshev’, ‘correlation’, ‘dice’, ‘hamming’, ‘jaccard’, ‘kulsinski’, ‘mahalanobis’, ‘matching’, ‘minkowski’, ‘rogerstanimoto’, ‘russellrao’, ‘seuclidean’, ‘sokalmichener’, ‘sokalsneath’, ‘sqeuclidean’, ‘yule’] See the documentation for scipy.spatial.distance for details on these metrics. metric_paramsdict, optional (default = None)Additional keyword arguments for the metric function.
##### Attributes[#](#id304)
[decision_scores_](#id1018)numpy array of shape (n_samples,)The outlier scores of the training data. The higher, the more abnormal. Outliers tend to have higher scores. This value is available once the detector is fitted. [threshold_](#id1020)floatThe threshold is based on `contamination`. It is the `n_samples * contamination` most abnormal samples in `decision_scores_`. The threshold is calculated for generating binary outlier labels. [labels_](#id1022)int, either 0 or 1The binary labels of the training data. 0 stands for inliers and 1 for outliers/anomalies. It is generated by applying `threshold_` on `decision_scores_`.
decision_function(*X*)[[source]](_modules/pyod/models/kde.html#KDE.decision_function)[#](#pyod.models.kde.KDE.decision_function)
Predict raw anomaly score of X using the fitted detector. The anomaly score of an input sample is computed based on different detector algorithms. For consistency, outliers are assigned with larger anomaly scores.
###### Parameters[#](#id305)
Xnumpy array of shape (n_samples, n_features)The training input samples. Sparse matrices are accepted only if they are supported by the base estimator.
###### Returns[#](#id306)
anomaly_scoresnumpy array of shape (n_samples,)The anomaly score of the input samples.
fit(*X*, *y=None*)[[source]](_modules/pyod/models/kde.html#KDE.fit)[#](#pyod.models.kde.KDE.fit)
Fit detector. y is ignored in unsupervised methods.
###### Parameters[#](#id307)
Xnumpy array of shape (n_samples, n_features)The input samples. yIgnoredNot used, present for API consistency by convention.
###### Returns[#](#id308)
selfobjectFitted estimator.
fit_predict(*X*, *y=None*)[#](#pyod.models.kde.KDE.fit_predict)
DEPRECATED Fit detector first and then predict whether a particular sample is an outlier or not. y is ignored in unsupervised models. Xnumpy array of shape (n_samples, n_features)The input samples. yIgnoredNot used, present for API consistency by convention. outlier_labelsnumpy array of shape (n_samples,)For each observation, tells whether it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers. Deprecated since version 0.6.9: fit_predict will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency.
fit_predict_score(*X*, *y*, *scoring='roc_auc_score'*)[#](#pyod.models.kde.KDE.fit_predict_score)
DEPRECATED Fit the detector, predict on samples, and evaluate the model by predefined metrics, e.g., ROC. Xnumpy array of shape (n_samples, n_features)The input samples. yIgnoredNot used, present for API consistency by convention. scoringstr, optional (default=’roc_auc_score’)Evaluation metric: * ‘roc_auc_score’: ROC score * ‘prc_n_score’: Precision @ rank n score score : float Deprecated since version 0.6.9: fit_predict_score will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency. Scoring can be done by calling an evaluation method, e.g., AUC ROC.
get_params(*deep=True*)[#](#pyod.models.kde.KDE.get_params)
Get parameters for this estimator. See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information.
###### Parameters[#](#id309)
deepbool, optional (default=True)If True, will return the parameters for this estimator and contained subobjects that are estimators.
###### Returns[#](#id310)
paramsmapping of string to anyParameter names mapped to their values.
predict(*X*, *return_confidence=False*)[#](#pyod.models.kde.KDE.predict)
Predict if a particular sample is an outlier or not.
###### Parameters[#](#id311)
Xnumpy array of shape (n_samples, n_features)The input samples. return_confidenceboolean, optional (default=False)If True, also return the confidence of prediction.
###### Returns[#](#id312)
outlier_labelsnumpy array of shape (n_samples,)For each observation, tells whether it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers. confidencenumpy array of shape (n_samples,)Only if return_confidence is set to True.
predict_confidence(*X*)[#](#pyod.models.kde.KDE.predict_confidence)
Predict the model’s confidence in making the same prediction under slightly different training sets. See [[BPVD20](#id839)].
###### Parameters[#](#id314)
Xnumpy array of shape (n_samples, n_features)The input samples.
###### Returns[#](#id315)
confidencenumpy array of shape (n_samples,)For each observation, tells how consistently the model would make the same prediction if the training set was perturbed. Return a probability, ranging in [0,1].
predict_proba(*X*, *method='linear'*, *return_confidence=False*)[#](#pyod.models.kde.KDE.predict_proba)
Predict the probability of a sample being outlier. Two approaches are possible: 1. simply use Min-max conversion to linearly transform the outlier scores into the range of [0,1].
The model must be fitted first. 2. use unifying scores, see [[BKKSZ11](#id800)].
###### Parameters[#](#id317)
Xnumpy array of shape (n_samples, n_features)The input samples. methodstr, optional (default=’linear’)probability conversion method. It must be one of ‘linear’ or ‘unify’. return_confidenceboolean, optional (default=False)If True, also return the confidence of prediction.
###### Returns[#](#id318)
outlier_probabilitynumpy array of shape (n_samples, n_classes)For each observation, tells whether or not it should be considered as an outlier according to the fitted model. Return the outlier probability, ranging in [0,1]. Note it depends on the number of classes, which is by default 2 classes ([proba of normal, proba of outliers]).
set_params(***params*)[#](#pyod.models.kde.KDE.set_params)
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object. See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information.
###### Returns[#](#id319)
self : object
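A minimal usage sketch of the KDE detector documented above (illustrative data; the bandwidth is set to its default value):

```python
import numpy as np
from pyod.models.kde import KDE

rng = np.random.RandomState(5)
X = np.vstack([rng.randn(200, 2), rng.randn(8, 2) + 6])

# the bandwidth controls the smoothness of the fitted density
clf = KDE(bandwidth=1.0, contamination=0.05)
clf.fit(X)

print(clf.decision_scores_[-8:])  # low-density (shifted) points score highest
print(clf.predict(X[-3:]))        # 1 = outlier
```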
#### pyod.models.knn module[#](#module-pyod.models.knn)
k-Nearest Neighbors Detector (kNN)
*class* pyod.models.knn.KNN(*contamination=0.1*, *n_neighbors=5*, *method='largest'*, *radius=1.0*, *algorithm='auto'*, *leaf_size=30*, *metric='minkowski'*, *p=2*, *metric_params=None*, *n_jobs=1*, ***kwargs*)[[source]](_modules/pyod/models/knn.html#KNN)[#](#pyod.models.knn.KNN)
Bases: [`BaseDetector`](index.html#pyod.models.base.BaseDetector)
kNN class for outlier detection. For an observation, its distance to its kth nearest neighbor could be viewed as the outlying score. It can be viewed as a way to measure the density. See [[BAP02](#id803), [BRRS00](#id802)] for details. Three kNN detectors are supported: * largest: use the distance to the kth neighbor as the outlier score * mean: use the average of all k neighbors as the outlier score * median: use the median of the distance to k neighbors as the outlier score
##### Parameters[#](#id321)
contaminationfloat in (0., 0.5), optional (default=0.1)The amount of contamination of the data set, i.e. the proportion of outliers in the data set. Used when fitting to define the threshold on the decision function. n_neighborsint, optional (default = 5)Number of neighbors to use by default for k neighbors queries. methodstr, optional (default=’largest’){‘largest’, ‘mean’, ‘median’} * ‘largest’: use the distance to the kth neighbor as the outlier score * ‘mean’: use the average of all k neighbors as the outlier score * ‘median’: use the median of the distance to k neighbors as the outlier score radiusfloat, optional (default = 1.0)Range of parameter space to use by default for radius_neighbors queries. algorithm{‘auto’, ‘ball_tree’, ‘kd_tree’, ‘brute’}, optionalAlgorithm used to compute the nearest neighbors: * ‘ball_tree’ will use BallTree * ‘kd_tree’ will use KDTree * ‘brute’ will use a brute-force search. * ‘auto’ will attempt to decide the most appropriate algorithm based on the values passed to the [`fit()`](#pyod.models.knn.KNN.fit) method. Note: fitting on sparse input will override the setting of this parameter, using brute force. Deprecated since version 0.7.4: `algorithm` is deprecated in PyOD 0.7.4 and will not be possible in 0.7.6. It has to use BallTree for consistency. leaf_sizeint, optional (default = 30)Leaf size passed to BallTree. This can affect the speed of the construction and query, as well as the memory required to store the tree. The optimal value depends on the nature of the problem. metricstring or callable, default ‘minkowski’metric to use for distance computation. Any metric from scikit-learn or scipy.spatial.distance can be used. If metric is a callable function, it is called on each pair of instances (rows) and the resulting value recorded. The callable should take two arrays as input and return one value indicating the distance between them. This works for Scipy’s metrics, but is less efficient than passing the metric name as a string. Distance matrices are not supported. Valid values for metric are: * from scikit-learn: [‘cityblock’, ‘euclidean’, ‘l1’, ‘l2’, ‘manhattan’] * from scipy.spatial.distance: [‘braycurtis’, ‘canberra’, ‘chebyshev’, ‘correlation’, ‘dice’, ‘hamming’, ‘jaccard’, ‘kulsinski’, ‘mahalanobis’, ‘matching’, ‘minkowski’, ‘rogerstanimoto’, ‘russellrao’, ‘seuclidean’, ‘sokalmichener’, ‘sokalsneath’, ‘sqeuclidean’, ‘yule’] See the documentation for scipy.spatial.distance for details on these metrics. pinteger, optional (default = 2)Parameter for the Minkowski metric from sklearn.metrics.pairwise.pairwise_distances. When p = 1, this is equivalent to using manhattan_distance (l1), and euclidean_distance (l2) for p = 2. For arbitrary p, minkowski_distance (l_p) is used. See <http://scikit-learn.org/stable/modules/generated/sklearn.metrics.pairwise.pairwise_distances.html>. metric_paramsdict, optional (default = None)Additional keyword arguments for the metric function. n_jobsint, optional (default = 1)The number of parallel jobs to run for neighbors search. If `-1`, then the number of jobs is set to the number of CPU cores. Affects only kneighbors and kneighbors_graph methods.
##### Attributes[#](#id322)
[decision_scores_](#id1024)numpy array of shape (n_samples,)The outlier scores of the training data. The higher, the more abnormal. Outliers tend to have higher scores. This value is available once the detector is fitted. [threshold_](#id1026)floatThe threshold is based on `contamination`. It is the `n_samples * contamination` most abnormal samples in `decision_scores_`. The threshold is calculated for generating binary outlier labels. [labels_](#id1028)int, either 0 or 1The binary labels of the training data. 0 stands for inliers and 1 for outliers/anomalies. It is generated by applying `threshold_` on `decision_scores_`.
decision_function(*X*)[[source]](_modules/pyod/models/knn.html#KNN.decision_function)[#](#pyod.models.knn.KNN.decision_function)
Predict raw anomaly score of X using the fitted detector. The anomaly score of an input sample is computed based on different detector algorithms. For consistency, outliers are assigned with larger anomaly scores.
###### Parameters[#](#id323)
Xnumpy array of shape (n_samples, n_features)The training input samples. Sparse matrices are accepted only if they are supported by the base estimator.
###### Returns[#](#id324)
anomaly_scoresnumpy array of shape (n_samples,)The anomaly score of the input samples.
fit(*X*, *y=None*)[[source]](_modules/pyod/models/knn.html#KNN.fit)[#](#pyod.models.knn.KNN.fit)
Fit detector. y is ignored in unsupervised methods.
###### Parameters[#](#id325)
Xnumpy array of shape (n_samples, n_features)The input samples. yIgnoredNot used, present for API consistency by convention.
###### Returns[#](#id326)
selfobjectFitted estimator.
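A minimal usage sketch of the KNN detector: fitting as documented above, then scoring (the data and variable names are illustrative):

```python
import numpy as np
from pyod.models.knn import KNN

rng = np.random.RandomState(4)
X = np.vstack([rng.randn(200, 2), rng.randn(10, 2) + 5])

# 'largest' uses the distance to the k-th neighbour as the outlier score
clf = KNN(n_neighbors=5, method='largest', contamination=0.05)
clf.fit(X)

print(clf.threshold_)                             # score cut-off implied by contamination
print(clf.predict(X[-3:]))                        # 1 = outlier
print(clf.predict_proba(X[-3:], method='unify'))  # probabilities, shape (3, 2)
```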
fit_predict(*X*, *y=None*)[#](#pyod.models.knn.KNN.fit_predict) DEPRECATED Fit detector first and then predict whether a particular sample is an outlier or not. y is ignored in unsupervised models. Xnumpy array of shape (n_samples, n_features)The input samples. yIgnoredNot used, present for API consistency by convention. outlier_labelsnumpy array of shape (n_samples,)For each observation, tells whether it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers. Deprecated since version 0.6.9: fit_predict will be removed in pyod 0.8.0.; it will be replaced by calling fit function first and then accessing labels_ attribute for consistency. fit_predict_score(*X*, *y*, *scoring='roc_auc_score'*)[#](#pyod.models.knn.KNN.fit_predict_score) DEPRECATED Fit the detector, predict on samples, and evaluate the model by predefined metrics, e.g., ROC. Xnumpy array of shape (n_samples, n_features)The input samples. yIgnoredNot used, present for API consistency by convention. scoringstr, optional (default=’roc_auc_score’)Evaluation metric: * ‘roc_auc_score’: ROC score * ‘prc_n_score’: Precision @ rank n score score : float Deprecated since version 0.6.9: fit_predict_score will be removed in pyod 0.8.0.; it will be replaced by calling fit function first and then accessing labels_ attribute for consistency. Scoring could be done by calling an evaluation method, e.g., AUC ROC. get_params(*deep=True*)[#](#pyod.models.knn.KNN.get_params) Get parameters for this estimator. See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information. ###### Parameters[#](#id327) deepbool, optional (default=True)If True, will return the parameters for this estimator and contained subobjects that are estimators. ###### Returns[#](#id328) paramsmapping of string to anyParameter names mapped to their values. predict(*X*, *return_confidence=False*)[#](#pyod.models.knn.KNN.predict) Predict if a particular sample is an outlier or not. ###### Parameters[#](#id329) Xnumpy array of shape (n_samples, n_features)The input samples. return_confidenceboolean, optional(default=False)If True, also return the confidence of prediction. ###### Returns[#](#id330) outlier_labelsnumpy array of shape (n_samples,)For each observation, tells whether it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers. confidencenumpy array of shape (n_samples,).Only if return_confidence is set to True. predict_confidence(*X*)[#](#pyod.models.knn.KNN.predict_confidence) Predict the model’s confidence in making the same prediction under slightly different training sets. See [[BPVD20](#id839)]. ###### Parameters[#](#id332) Xnumpy array of shape (n_samples, n_features)The input samples. ###### Returns[#](#id333) confidencenumpy array of shape (n_samples,) For each observation, tells how consistently the model would make the same prediction if the training set was perturbed. Return a probability, ranging in [0,1]. predict_proba(*X*, *method='linear'*, *return_confidence=False*)[#](#pyod.models.knn.KNN.predict_proba) Predict the probability of a sample being outlier. Two approaches are possible: 1. simply use Min-max conversion to linearly transform the outlier scores into the range of [0,1]. The model must be fitted first. 2. use unifying scores, see [[BKKSZ11](#id800)]. ###### Parameters[#](#id335) Xnumpy array of shape (n_samples, n_features)The input samples.
methodstr, optional (default=’linear’)probability conversion method. It must be one of ‘linear’ or ‘unify’. return_confidenceboolean, optional(default=False)If True, also return the confidence of prediction. ###### Returns[#](#id336) outlier_probabilitynumpy array of shape (n_samples, n_classes)For each observation, tells whether or not it should be considered as an outlier according to the fitted model. Return the outlier probability, ranging in [0,1]. Note it depends on the number of classes, which is by default 2 classes ([proba of normal, proba of outliers]). set_params(***params*)[#](#pyod.models.knn.KNN.set_params) Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object. See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information. ###### Returns[#](#id337) self : object #### pyod.models.kpca module[#](#module-pyod.models.kpca) Kernel Principal Component Analysis (KPCA) Outlier Detector *class* pyod.models.kpca.KPCA(*contamination=0.1*, *n_components=None*, *n_selected_components=None*, *kernel='rbf'*, *gamma=None*, *degree=3*, *coef0=1*, *kernel_params=None*, *alpha=1.0*, *eigen_solver='auto'*, *tol=0*, *max_iter=None*, *remove_zero_eig=False*, *copy_X=True*, *n_jobs=None*, *sampling=False*, *subset_size=20*, *random_state=None*)[[source]](_modules/pyod/models/kpca.html#KPCA)[#](#pyod.models.kpca.KPCA) Bases: [`BaseDetector`](index.html#pyod.models.base.BaseDetector) KPCA class for outlier detection. PCA is performed on the feature space uniquely determined by the kernel, and the reconstruction error on the feature space is used as the anomaly score. See [[BHof07](#id850)] Hoffmann, “Kernel PCA for novelty detection,” Pattern Recognition, vol.40, no.3, pp. 863-874, 2007. <https://www.sciencedirect.com/science/article/pii/S0031320306003414> for details. ##### Parameters[#](#id339) n_componentsint, optional (default=None)Number of components. If None, all non-zero components are kept. n_selected_componentsint, optional (default=None)Number of selected principal components for calculating the outlier scores. It is not necessarily equal to the total number of the principal components. If not set, use all principal components. kernelstring {‘linear’, ‘poly’, ‘rbf’, ‘sigmoid’, ‘cosine’, ‘precomputed’}, optional (default=’rbf’)Kernel used for PCA. gammafloat, optional (default=None)Kernel coefficient for rbf, poly and sigmoid kernels. Ignored by other kernels. If `gamma` is `None`, then it is set to `1/n_features`. degreeint, optional (default=3)Degree for poly kernels. Ignored by other kernels. coef0float, optional (default=1)Independent term in poly and sigmoid kernels. Ignored by other kernels. kernel_paramsdict, optional (default=None)Parameters (keyword arguments) and values for kernel passed as callable object. Ignored by other kernels. alphafloat, optional (default=1.0)Hyperparameter of the ridge regression that learns the inverse transform (when inverse_transform=True). eigen_solverstring, {‘auto’, ‘dense’, ‘arpack’, ‘randomized’}, default=’auto’Select eigensolver to use. If n_components is much less than the number of training samples, randomized (or arpack to a smaller extent) may be more efficient than the dense eigensolver. Randomized SVD is performed according to the method of Halko et al.
auto: the solver is selected by a default policy based on n_samples (the number of training samples) and n_components: if the number of components to extract is less than 10 (strict) and the number of samples is more than 200 (strict), the ‘arpack’ method is enabled. Otherwise the exact full eigenvalue decomposition is computed and optionally truncated afterwards (‘dense’ method). dense: run exact full eigenvalue decomposition calling the standard LAPACK solver via scipy.linalg.eigh, and select the components by postprocessing. arpack: run SVD truncated to n_components calling ARPACK solver using scipy.sparse.linalg.eigsh. It requires strictly 0 < n_components < n_samples. randomized: run randomized SVD. The implementation selects eigenvalues based on their modulus; therefore using this method can lead to unexpected results if the kernel is not positive semi-definite. tolfloat, optional (default=0)Convergence tolerance for arpack. If 0, optimal value will be chosen by arpack. max_iterint, optional (default=None)Maximum number of iterations for arpack. If None, optimal value will be chosen by arpack. remove_zero_eigbool, optional (default=False)If True, then all components with zero eigenvalues are removed, so that the number of components in the output may be < n_components (and sometimes even zero due to numerical instability). When n_components is None, this parameter is ignored and components with zero eigenvalues are removed regardless. copy_Xbool, optional (default=True)If True, input X is copied and stored by the model in the X_fit_ attribute. If no further changes will be done to X, setting copy_X=False saves memory by storing a reference. n_jobsint, optional (default=None)The number of parallel jobs to run. `None` means 1 unless in a `joblib.parallel_backend` context. `-1` means using all processors. samplingbool, optional (default=False)If True, sampling subset from the dataset is performed only once, in order to reduce time complexity while keeping detection performance. subset_sizefloat in (0., 1.0) or int (0, n_samples), optional (default=20)If sampling is True, the size of subset is specified. random_stateint, RandomState instance or None, optional (default=None)If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random. ##### Attributes[#](#id340) [decision_scores_](#id1030)numpy array of shape (n_samples,)The outlier scores of the training data. The higher, the more abnormal. Outliers tend to have higher scores. This value is available once the detector is fitted. [threshold_](#id1032)floatThe threshold is based on `contamination`. It is the `n_samples * contamination` most abnormal samples in `decision_scores_`. The threshold is calculated for generating binary outlier labels. [labels_](#id1034)int, either 0 or 1The binary labels of the training data. 0 stands for inliers and 1 for outliers/anomalies. It is generated by applying `threshold_` on `decision_scores_`. decision_function(*X*)[[source]](_modules/pyod/models/kpca.html#KPCA.decision_function)[#](#pyod.models.kpca.KPCA.decision_function) Predict raw anomaly score of X using the fitted detector. The anomaly score of an input sample is computed based on different detector algorithms. For consistency, outliers are assigned with larger anomaly scores. ###### Parameters[#](#id341) Xnumpy array of shape (n_samples, n_features)The training input samples.
Sparse matrices are accepted only if they are supported by the base estimator. ###### Returns[#](#id342) anomaly_scoresnumpy array of shape (n_samples,)The anomaly score of the input samples. fit(*X*, *y=None*)[[source]](_modules/pyod/models/kpca.html#KPCA.fit)[#](#pyod.models.kpca.KPCA.fit) Fit detector. y is ignored in unsupervised methods. ###### Parameters[#](#id343) Xnumpy array of shape (n_samples, n_features)The input samples. yIgnoredNot used, present for API consistency by convention. ###### Returns[#](#id344) selfobjectFitted estimator. fit_predict(*X*, *y=None*)[#](#pyod.models.kpca.KPCA.fit_predict) DEPRECATED Fit detector first and then predict whether a particular sample is an outlier or not. y is ignored in unsupervised models. Xnumpy array of shape (n_samples, n_features)The input samples. yIgnoredNot used, present for API consistency by convention. outlier_labelsnumpy array of shape (n_samples,)For each observation, tells whether it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers. Deprecated since version 0.6.9: fit_predict will be removed in pyod 0.8.0.; it will be replaced by calling fit function first and then accessing labels_ attribute for consistency. fit_predict_score(*X*, *y*, *scoring='roc_auc_score'*)[#](#pyod.models.kpca.KPCA.fit_predict_score) DEPRECATED Fit the detector, predict on samples, and evaluate the model by predefined metrics, e.g., ROC. Xnumpy array of shape (n_samples, n_features)The input samples. yIgnoredNot used, present for API consistency by convention. scoringstr, optional (default=’roc_auc_score’)Evaluation metric: * ‘roc_auc_score’: ROC score * ‘prc_n_score’: Precision @ rank n score score : float Deprecated since version 0.6.9: fit_predict_score will be removed in pyod 0.8.0.; it will be replaced by calling fit function first and then accessing labels_ attribute for consistency. Scoring could be done by calling an evaluation method, e.g., AUC ROC. get_params(*deep=True*)[#](#pyod.models.kpca.KPCA.get_params) Get parameters for this estimator. See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information. ###### Parameters[#](#id345) deepbool, optional (default=True)If True, will return the parameters for this estimator and contained subobjects that are estimators. ###### Returns[#](#id346) paramsmapping of string to anyParameter names mapped to their values. predict(*X*, *return_confidence=False*)[#](#pyod.models.kpca.KPCA.predict) Predict if a particular sample is an outlier or not. ###### Parameters[#](#id347) Xnumpy array of shape (n_samples, n_features)The input samples. return_confidenceboolean, optional(default=False)If True, also return the confidence of prediction. ###### Returns[#](#id348) outlier_labelsnumpy array of shape (n_samples,)For each observation, tells whether it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers. confidencenumpy array of shape (n_samples,).Only if return_confidence is set to True. predict_confidence(*X*)[#](#pyod.models.kpca.KPCA.predict_confidence) Predict the model’s confidence in making the same prediction under slightly different training sets. See [[BPVD20](#id839)]. ###### Parameters[#](#id350) Xnumpy array of shape (n_samples, n_features)The input samples.
###### Returns[#](#id351) confidencenumpy array of shape (n_samples,) For each observation, tells how consistently the model would make the same prediction if the training set was perturbed. Return a probability, ranging in [0,1]. predict_proba(*X*, *method='linear'*, *return_confidence=False*)[#](#pyod.models.kpca.KPCA.predict_proba) Predict the probability of a sample being outlier. Two approaches are possible: 1. simply use Min-max conversion to linearly transform the outlier scores into the range of [0,1]. The model must be fitted first. 2. use unifying scores, see [[BKKSZ11](#id800)]. ###### Parameters[#](#id353) Xnumpy array of shape (n_samples, n_features)The input samples. methodstr, optional (default=’linear’)probability conversion method. It must be one of ‘linear’ or ‘unify’. return_confidenceboolean, optional(default=False)If True, also return the confidence of prediction. ###### Returns[#](#id354) outlier_probabilitynumpy array of shape (n_samples, n_classes)For each observation, tells whether or not it should be considered as an outlier according to the fitted model. Return the outlier probability, ranging in [0,1]. Note it depends on the number of classes, which is by default 2 classes ([proba of normal, proba of outliers]). set_params(***params*)[#](#pyod.models.kpca.KPCA.set_params) Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object. See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information. ###### Returns[#](#id355) self : object *class* pyod.models.kpca.PyODKernelPCA(*n_components=None*, *kernel='rbf'*, *gamma=None*, *degree=3*, *coef0=1*, *kernel_params=None*, *alpha=1.0*, *fit_inverse_transform=False*, *eigen_solver='auto'*, *tol=0*, *max_iter=None*, *remove_zero_eig=False*, *copy_X=True*, *n_jobs=None*, *random_state=None*)[[source]](_modules/pyod/models/kpca.html#PyODKernelPCA)[#](#pyod.models.kpca.PyODKernelPCA) Bases: `KernelPCA` A wrapper class for KernelPCA class of scikit-learn. *property* alphas_[#](#pyod.models.kpca.PyODKernelPCA.alphas_) DEPRECATED: Attribute alphas_ was deprecated in version 1.0 and will be removed in 1.2. Use eigenvectors_ instead. fit(*X*, *y=None*)[#](#pyod.models.kpca.PyODKernelPCA.fit) Fit the model from data in X. ##### Parameters[#](#id356) X{array-like, sparse matrix} of shape (n_samples, n_features)Training vector, where n_samples is the number of samples and n_features is the number of features. yIgnoredNot used, present for API consistency by convention. ##### Returns[#](#id357) selfobjectReturns the instance itself. fit_transform(*X*, *y=None*, ***params*)[#](#pyod.models.kpca.PyODKernelPCA.fit_transform) Fit the model from data in X and transform X. ##### Parameters[#](#id358) X{array-like, sparse matrix} of shape (n_samples, n_features)Training vector, where n_samples is the number of samples and n_features is the number of features. yIgnoredNot used, present for API consistency by convention. **paramskwargsParameters (keyword arguments) and values passed to the fit_transform instance. ##### Returns[#](#id361) X_newndarray of shape (n_samples, n_components)The transformed samples. *property* get_centerer[#](#pyod.models.kpca.PyODKernelPCA.get_centerer) Return a protected member _centerer.
*property* get_kernel[#](#pyod.models.kpca.PyODKernelPCA.get_kernel) Return a protected member _get_kernel. get_params(*deep=True*)[#](#pyod.models.kpca.PyODKernelPCA.get_params) Get parameters for this estimator. ##### Parameters[#](#id362) deepbool, default=TrueIf True, will return the parameters for this estimator and contained subobjects that are estimators. ##### Returns[#](#id363) paramsdictParameter names mapped to their values. inverse_transform(*X*)[#](#pyod.models.kpca.PyODKernelPCA.inverse_transform) Transform X back to original space. `inverse_transform` approximates the inverse transformation using a learned pre-image. The pre-image is learned by kernel ridge regression of the original data on their low-dimensional representation vectors. Note When users want to compute inverse transformation for ‘linear’ kernel, it is recommended that they use `PCA` instead. Unlike `PCA`, `KernelPCA`’s `inverse_transform` does not reconstruct the mean of data when ‘linear’ kernel is used due to the use of centered kernel. ##### Parameters[#](#id364) X{array-like, sparse matrix} of shape (n_samples, n_components)Training vector, where n_samples is the number of samples and n_features is the number of features. ##### Returns[#](#id365) X_newndarray of shape (n_samples, n_features)The reconstructed samples in the original space. ##### References[#](#references) [Bakır, Gökhan H., Jason Weston, and Bernhard Schölkopf. “Learning to find pre-images.” Advances in Neural Information Processing Systems 16 (2004): 449-456.](https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.68.5164&rep=rep1&type=pdf) *property* lambdas_[#](#pyod.models.kpca.PyODKernelPCA.lambdas_) DEPRECATED: Attribute lambdas_ was deprecated in version 1.0 and will be removed in 1.2. Use eigenvalues_ instead. set_params(***params*)[#](#pyod.models.kpca.PyODKernelPCA.set_params) Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as `Pipeline`). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object. ##### Parameters[#](#id366) **paramsdictEstimator parameters. ##### Returns[#](#id369) selfestimator instanceEstimator instance. transform(*X*)[#](#pyod.models.kpca.PyODKernelPCA.transform) Transform X. ##### Parameters[#](#id370) X{array-like, sparse matrix} of shape (n_samples, n_features)Training vector, where n_samples is the number of samples and n_features is the number of features. ##### Returns[#](#id371) X_newndarray of shape (n_samples, n_components)The transformed samples. #### pyod.models.lmdd module[#](#module-pyod.models.lmdd) Linear Model Deviation-based outlier detection (LMDD). *class* pyod.models.lmdd.LMDD(*contamination=0.1*, *n_iter=50*, *dis_measure='aad'*, *random_state=None*)[[source]](_modules/pyod/models/lmdd.html#LMDD)[#](#pyod.models.lmdd.LMDD) Bases: [`BaseDetector`](index.html#pyod.models.base.BaseDetector) Linear Method for Deviation-based Outlier Detection. LMDD employs the concept of the smoothing factor which indicates how much the dissimilarity can be reduced by removing a subset of elements from the data-set. Read more in [[BAAR96](#id829)]. Note: this implementation has minor modification to make it output scores instead of labels. ##### Parameters[#](#id373) contaminationfloat in (0., 0.5), optional (default=0.1)The amount of contamination of the data set, i.e. the proportion of outliers in the data set. Used when fitting to define the threshold on the decision function.
n_iterint, optional (default=50)Number of iterations; in each iteration, the process is repeated after randomizing the order of the input. Note that n_iter is a very important factor that affects the accuracy: higher values improve the accuracy at the cost of longer execution. dis_measure: str, optional (default=’aad’)Dissimilarity measure to be used in calculating the smoothing factor for points, options available: * ‘aad’: Average Absolute Deviation * ‘var’: Variance * ‘iqr’: Interquartile Range random_stateint, RandomState instance or None, optional (default=None)If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random. ##### Attributes[#](#id374) [decision_scores_](#id1036)numpy array of shape (n_samples,)The outlier scores of the training data. The higher, the more abnormal. Outliers tend to have higher scores. This value is available once the detector is fitted. [threshold_](#id1038)floatThe threshold is based on `contamination`. It is the `n_samples * contamination` most abnormal samples in `decision_scores_`. The threshold is calculated for generating binary outlier labels. [labels_](#id1040)int, either 0 or 1The binary labels of the training data. 0 stands for inliers and 1 for outliers/anomalies. It is generated by applying `threshold_` on `decision_scores_`. decision_function(*X*)[[source]](_modules/pyod/models/lmdd.html#LMDD.decision_function)[#](#pyod.models.lmdd.LMDD.decision_function) Predict raw anomaly score of X using the fitted detector. The anomaly score of an input sample is computed based on different detector algorithms. For consistency, outliers are assigned with larger anomaly scores. ###### Parameters[#](#id375) Xnumpy array of shape (n_samples, n_features)The training input samples. Sparse matrices are accepted only if they are supported by the base estimator. ###### Returns[#](#id376) anomaly_scoresnumpy array of shape (n_samples,)The anomaly score of the input samples. fit(*X*, *y=None*)[[source]](_modules/pyod/models/lmdd.html#LMDD.fit)[#](#pyod.models.lmdd.LMDD.fit) Fit detector. y is ignored in unsupervised methods. ###### Parameters[#](#id377) Xnumpy array of shape (n_samples, n_features)The input samples. yIgnoredNot used, present for API consistency by convention. ###### Returns[#](#id378) selfobjectFitted estimator. fit_predict(*X*, *y=None*)[#](#pyod.models.lmdd.LMDD.fit_predict) DEPRECATED Fit detector first and then predict whether a particular sample is an outlier or not. y is ignored in unsupervised models. Xnumpy array of shape (n_samples, n_features)The input samples. yIgnoredNot used, present for API consistency by convention. outlier_labelsnumpy array of shape (n_samples,)For each observation, tells whether it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers. Deprecated since version 0.6.9: fit_predict will be removed in pyod 0.8.0.; it will be replaced by calling fit function first and then accessing labels_ attribute for consistency. fit_predict_score(*X*, *y*, *scoring='roc_auc_score'*)[#](#pyod.models.lmdd.LMDD.fit_predict_score) DEPRECATED Fit the detector, predict on samples, and evaluate the model by predefined metrics, e.g., ROC. Xnumpy array of shape (n_samples, n_features)The input samples. yIgnoredNot used, present for API consistency by convention.
scoringstr, optional (default=’roc_auc_score’)Evaluation metric: * ‘roc_auc_score’: ROC score * ‘prc_n_score’: Precision @ rank n score score : float Deprecated since version 0.6.9: fit_predict_score will be removed in pyod 0.8.0.; it will be replaced by calling fit function first and then accessing labels_ attribute for consistency. Scoring could be done by calling an evaluation method, e.g., AUC ROC. get_params(*deep=True*)[#](#pyod.models.lmdd.LMDD.get_params) Get parameters for this estimator. See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information. ###### Parameters[#](#id379) deepbool, optional (default=True)If True, will return the parameters for this estimator and contained subobjects that are estimators. ###### Returns[#](#id380) paramsmapping of string to anyParameter names mapped to their values. predict(*X*, *return_confidence=False*)[#](#pyod.models.lmdd.LMDD.predict) Predict if a particular sample is an outlier or not. ###### Parameters[#](#id381) Xnumpy array of shape (n_samples, n_features)The input samples. return_confidenceboolean, optional(default=False)If True, also return the confidence of prediction. ###### Returns[#](#id382) outlier_labelsnumpy array of shape (n_samples,)For each observation, tells whether it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers. confidencenumpy array of shape (n_samples,).Only if return_confidence is set to True. predict_confidence(*X*)[#](#pyod.models.lmdd.LMDD.predict_confidence) Predict the model’s confidence in making the same prediction under slightly different training sets. See [[BPVD20](#id839)]. ###### Parameters[#](#id384) Xnumpy array of shape (n_samples, n_features)The input samples. ###### Returns[#](#id385) confidencenumpy array of shape (n_samples,) For each observation, tells how consistently the model would make the same prediction if the training set was perturbed. Return a probability, ranging in [0,1]. predict_proba(*X*, *method='linear'*, *return_confidence=False*)[#](#pyod.models.lmdd.LMDD.predict_proba) Predict the probability of a sample being outlier. Two approaches are possible: 1. simply use Min-max conversion to linearly transform the outlier scores into the range of [0,1]. The model must be fitted first. 2. use unifying scores, see [[BKKSZ11](#id800)]. ###### Parameters[#](#id387) Xnumpy array of shape (n_samples, n_features)The input samples. methodstr, optional (default=’linear’)probability conversion method. It must be one of ‘linear’ or ‘unify’. return_confidenceboolean, optional(default=False)If True, also return the confidence of prediction. ###### Returns[#](#id388) outlier_probabilitynumpy array of shape (n_samples, n_classes)For each observation, tells whether or not it should be considered as an outlier according to the fitted model. Return the outlier probability, ranging in [0,1]. Note it depends on the number of classes, which is by default 2 classes ([proba of normal, proba of outliers]). set_params(***params*)[#](#pyod.models.lmdd.LMDD.set_params) Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object. See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information. 
###### Returns[#](#id389) self : object #### pyod.models.loda module[#](#module-pyod.models.loda) Loda: Lightweight on-line detector of anomalies. Adapted from tilitools (<https://github.com/nicococo/tilitools>). *class* pyod.models.loda.LODA(*contamination=0.1*, *n_bins=10*, *n_random_cuts=100*)[[source]](_modules/pyod/models/loda.html#LODA)[#](#pyod.models.loda.LODA) Bases: [`BaseDetector`](index.html#pyod.models.base.BaseDetector) Loda: Lightweight on-line detector of anomalies. See [[BPevny16](#id831)] for more information. Two versions of LODA are supported: - Static number of bins: uses a static number of bins for all random cuts. - Automatic number of bins: every random cut uses a number of bins deemed to be optimal according to the Birge-Rozenblac method ([[BBirgeR06](#id838)]). ##### Parameters[#](#id392) contaminationfloat in (0., 0.5), optional (default=0.1)The amount of contamination of the data set, i.e. the proportion of outliers in the data set. Used when fitting to define the threshold on the decision function. n_binsint or string, optional (default = 10)The number of bins for the histogram. If set to “auto”, the Birge-Rozenblac method will be used to automatically determine the optimal number of bins. n_random_cutsint, optional (default = 100)The number of random cuts. ##### Attributes[#](#id393) [decision_scores_](#id1042)numpy array of shape (n_samples,)The outlier scores of the training data. The higher, the more abnormal. Outliers tend to have higher scores. This value is available once the detector is fitted. [threshold_](#id1044)floatThe threshold is based on `contamination`. It is the `n_samples * contamination` most abnormal samples in `decision_scores_`. The threshold is calculated for generating binary outlier labels. [labels_](#id1046)int, either 0 or 1The binary labels of the training data. 0 stands for inliers and 1 for outliers/anomalies. It is generated by applying `threshold_` on `decision_scores_`. decision_function(*X*)[[source]](_modules/pyod/models/loda.html#LODA.decision_function)[#](#pyod.models.loda.LODA.decision_function) Predict raw anomaly score of X using the fitted detector. The anomaly score of an input sample is computed based on different detector algorithms. For consistency, outliers are assigned with larger anomaly scores. ###### Parameters[#](#id394) Xnumpy array of shape (n_samples, n_features)The training input samples. Sparse matrices are accepted only if they are supported by the base estimator. ###### Returns[#](#id395) anomaly_scoresnumpy array of shape (n_samples,)The anomaly score of the input samples. fit(*X*, *y=None*)[[source]](_modules/pyod/models/loda.html#LODA.fit)[#](#pyod.models.loda.LODA.fit) Fit detector. y is ignored in unsupervised methods. ###### Parameters[#](#id396) Xnumpy array of shape (n_samples, n_features)The input samples. yIgnoredNot used, present for API consistency by convention. ###### Returns[#](#id397) selfobjectFitted estimator. fit_predict(*X*, *y=None*)[#](#pyod.models.loda.LODA.fit_predict) DEPRECATED Fit detector first and then predict whether a particular sample is an outlier or not. y is ignored in unsupervised models. Xnumpy array of shape (n_samples, n_features)The input samples. yIgnoredNot used, present for API consistency by convention. outlier_labelsnumpy array of shape (n_samples,)For each observation, tells whether it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers.
Deprecated since version 0.6.9: fit_predict will be removed in pyod 0.8.0.; it will be replaced by calling fit function first and then accessing labels_ attribute for consistency. fit_predict_score(*X*, *y*, *scoring='roc_auc_score'*)[#](#pyod.models.loda.LODA.fit_predict_score) DEPRECATED Fit the detector, predict on samples, and evaluate the model by predefined metrics, e.g., ROC. Xnumpy array of shape (n_samples, n_features)The input samples. yIgnoredNot used, present for API consistency by convention. scoringstr, optional (default=’roc_auc_score’)Evaluation metric: * ‘roc_auc_score’: ROC score * ‘prc_n_score’: Precision @ rank n score score : float Deprecated since version 0.6.9: fit_predict_score will be removed in pyod 0.8.0.; it will be replaced by calling fit function first and then accessing labels_ attribute for consistency. Scoring could be done by calling an evaluation method, e.g., AUC ROC. get_params(*deep=True*)[#](#pyod.models.loda.LODA.get_params) Get parameters for this estimator. See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information. ###### Parameters[#](#id398) deepbool, optional (default=True)If True, will return the parameters for this estimator and contained subobjects that are estimators. ###### Returns[#](#id399) paramsmapping of string to anyParameter names mapped to their values. predict(*X*, *return_confidence=False*)[#](#pyod.models.loda.LODA.predict) Predict if a particular sample is an outlier or not. ###### Parameters[#](#id400) Xnumpy array of shape (n_samples, n_features)The input samples. return_confidenceboolean, optional(default=False)If True, also return the confidence of prediction. ###### Returns[#](#id401) outlier_labelsnumpy array of shape (n_samples,)For each observation, tells whether it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers. confidencenumpy array of shape (n_samples,).Only if return_confidence is set to True. predict_confidence(*X*)[#](#pyod.models.loda.LODA.predict_confidence) Predict the model’s confidence in making the same prediction under slightly different training sets. See [[BPVD20](#id839)]. ###### Parameters[#](#id403) Xnumpy array of shape (n_samples, n_features)The input samples. ###### Returns[#](#id404) confidencenumpy array of shape (n_samples,) For each observation, tells how consistently the model would make the same prediction if the training set was perturbed. Return a probability, ranging in [0,1]. predict_proba(*X*, *method='linear'*, *return_confidence=False*)[#](#pyod.models.loda.LODA.predict_proba) Predict the probability of a sample being outlier. Two approaches are possible: 1. simply use Min-max conversion to linearly transform the outlier scores into the range of [0,1]. The model must be fitted first. 2. use unifying scores, see [[BKKSZ11](#id800)]. ###### Parameters[#](#id406) Xnumpy array of shape (n_samples, n_features)The input samples. methodstr, optional (default=’linear’)probability conversion method. It must be one of ‘linear’ or ‘unify’. return_confidenceboolean, optional(default=False)If True, also return the confidence of prediction. ###### Returns[#](#id407) outlier_probabilitynumpy array of shape (n_samples, n_classes)For each observation, tells whether or not it should be considered as an outlier according to the fitted model. Return the outlier probability, ranging in [0,1].
Note it depends on the number of classes, which is by default 2 classes ([proba of normal, proba of outliers]). set_params(***params*)[#](#pyod.models.loda.LODA.set_params) Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object. See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information. ###### Returns[#](#id408) self : object #### pyod.models.lof module[#](#module-pyod.models.lof) Local Outlier Factor (LOF). Implemented on scikit-learn library. *class* pyod.models.lof.LOF(*n_neighbors=20*, *algorithm='auto'*, *leaf_size=30*, *metric='minkowski'*, *p=2*, *metric_params=None*, *contamination=0.1*, *n_jobs=1*, *novelty=True*)[[source]](_modules/pyod/models/lof.html#LOF)[#](#pyod.models.lof.LOF) Bases: [`BaseDetector`](index.html#pyod.models.base.BaseDetector) Wrapper of scikit-learn LOF Class with more functionalities. Unsupervised Outlier Detection using Local Outlier Factor (LOF). The anomaly score of each sample is called Local Outlier Factor. It measures the local deviation of density of a given sample with respect to its neighbors. It is local in that the anomaly score depends on how isolated the object is with respect to the surrounding neighborhood. More precisely, locality is given by k-nearest neighbors, whose distance is used to estimate the local density. By comparing the local density of a sample to the local densities of its neighbors, one can identify samples that have a substantially lower density than their neighbors. These are considered outliers. See [[BBKNS00](#id809)] for details. ##### Parameters[#](#id410) n_neighborsint, optional (default=20)Number of neighbors to use by default for kneighbors queries. If n_neighbors is larger than the number of samples provided, all samples will be used. algorithm{‘auto’, ‘ball_tree’, ‘kd_tree’, ‘brute’}, optionalAlgorithm used to compute the nearest neighbors: * ‘ball_tree’ will use BallTree * ‘kd_tree’ will use KDTree * ‘brute’ will use a brute-force search. * ‘auto’ will attempt to decide the most appropriate algorithm based on the values passed to [`fit()`](#pyod.models.lof.LOF.fit) method. Note: fitting on sparse input will override the setting of this parameter, using brute force. leaf_sizeint, optional (default=30)Leaf size passed to BallTree or KDTree. This can affect the speed of the construction and query, as well as the memory required to store the tree. The optimal value depends on the nature of the problem. metricstring or callable, default ‘minkowski’metric used for the distance computation. Any metric from scikit-learn or scipy.spatial.distance can be used. If ‘precomputed’, the training input X is expected to be a distance matrix. If metric is a callable function, it is called on each pair of instances (rows) and the resulting value recorded. The callable should take two arrays as input and return one value indicating the distance between them. This works for Scipy’s metrics, but is less efficient than passing the metric name as a string. 
Valid values for metric are: * from scikit-learn: [‘cityblock’, ‘cosine’, ‘euclidean’, ‘l1’, ‘l2’, ‘manhattan’] * from scipy.spatial.distance: [‘braycurtis’, ‘canberra’, ‘chebyshev’, ‘correlation’, ‘dice’, ‘hamming’, ‘jaccard’, ‘kulsinski’, ‘mahalanobis’, ‘matching’, ‘minkowski’, ‘rogerstanimoto’, ‘russellrao’, ‘seuclidean’, ‘sokalmichener’, ‘sokalsneath’, ‘sqeuclidean’, ‘yule’] See the documentation for scipy.spatial.distance for details on these metrics: <http://docs.scipy.org/doc/scipy/reference/spatial.distance.html> pinteger, optional (default = 2)Parameter for the Minkowski metric from sklearn.metrics.pairwise.pairwise_distances. When p = 1, this is equivalent to using manhattan_distance (l1), and euclidean_distance (l2) for p = 2. For arbitrary p, minkowski_distance (l_p) is used. See <http://scikit-learn.org/stable/modules/generated/sklearn.metrics.pairwise.pairwise_distances.html> metric_paramsdict, optional (default = None)Additional keyword arguments for the metric function. contaminationfloat in (0., 0.5), optional (default=0.1)The amount of contamination of the data set, i.e. the proportion of outliers in the data set. When fitting this is used to define the threshold on the decision function. n_jobsint, optional (default = 1)The number of parallel jobs to run for neighbors search. If `-1`, then the number of jobs is set to the number of CPU cores. Affects only kneighbors and kneighbors_graph methods. noveltybool (default=False)By default, LocalOutlierFactor is only meant to be used for outlier detection (novelty=False). Set novelty to True if you want to use LocalOutlierFactor for novelty detection. In this case be aware that you should only use predict, decision_function and score_samples on new unseen data and not on the training set. ##### Attributes[#](#id411) [n_neighbors_](#id1048)intThe actual number of neighbors used for kneighbors queries. [decision_scores_](#id1050)numpy array of shape (n_samples,)The outlier scores of the training data. The higher, the more abnormal. Outliers tend to have higher scores. This value is available once the detector is fitted. [threshold_](#id1052)floatThe threshold is based on `contamination`. It is the `n_samples * contamination` most abnormal samples in `decision_scores_`. The threshold is calculated for generating binary outlier labels. [labels_](#id1054)int, either 0 or 1The binary labels of the training data. 0 stands for inliers and 1 for outliers/anomalies. It is generated by applying `threshold_` on `decision_scores_`. decision_function(*X*)[[source]](_modules/pyod/models/lof.html#LOF.decision_function)[#](#pyod.models.lof.LOF.decision_function) Predict raw anomaly score of X using the fitted detector. The anomaly score of an input sample is computed based on different detector algorithms. For consistency, outliers are assigned with larger anomaly scores. ###### Parameters[#](#id412) Xnumpy array of shape (n_samples, n_features)The training input samples. Sparse matrices are accepted only if they are supported by the base estimator. ###### Returns[#](#id413) anomaly_scoresnumpy array of shape (n_samples,)The anomaly score of the input samples. fit(*X*, *y=None*)[[source]](_modules/pyod/models/lof.html#LOF.fit)[#](#pyod.models.lof.LOF.fit) Fit detector. y is ignored in unsupervised methods. ###### Parameters[#](#id414) Xnumpy array of shape (n_samples, n_features)The input samples. yIgnoredNot used, present for API consistency by convention. ###### Returns[#](#id415) selfobjectFitted estimator.
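As with kNN above, a minimal usage sketch (the values are illustrative; `generate_data` follows the convention of the examples in this reference):

```
from pyod.models.lof import LOF
from pyod.utils.data import generate_data

X_train, y_train, X_test, y_test = generate_data(
    n_train=200, n_test=100, contamination=0.1, random_state=42)

# the wrapper's signature default novelty=True allows scoring unseen
# data with decision_function() after fitting
clf = LOF(n_neighbors=20, contamination=0.1)
clf.fit(X_train)

y_train_labels = clf.labels_                   # binary labels derived from threshold_
y_test_scores = clf.decision_function(X_test)  # higher scores are more abnormal
```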
fit_predict(*X*, *y=None*)[#](#pyod.models.lof.LOF.fit_predict) DEPRECATED Fit detector first and then predict whether a particular sample is an outlier or not. y is ignored in unsupervised models. Xnumpy array of shape (n_samples, n_features)The input samples. yIgnoredNot used, present for API consistency by convention. outlier_labelsnumpy array of shape (n_samples,)For each observation, tells whether it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers. Deprecated since version 0.6.9: fit_predict will be removed in pyod 0.8.0.; it will be replaced by calling fit function first and then accessing labels_ attribute for consistency. fit_predict_score(*X*, *y*, *scoring='roc_auc_score'*)[#](#pyod.models.lof.LOF.fit_predict_score) DEPRECATED Fit the detector, predict on samples, and evaluate the model by predefined metrics, e.g., ROC. Xnumpy array of shape (n_samples, n_features)The input samples. yIgnoredNot used, present for API consistency by convention. scoringstr, optional (default=’roc_auc_score’)Evaluation metric: * ‘roc_auc_score’: ROC score * ‘prc_n_score’: Precision @ rank n score score : float Deprecated since version 0.6.9: fit_predict_score will be removed in pyod 0.8.0.; it will be replaced by calling fit function first and then accessing labels_ attribute for consistency. Scoring could be done by calling an evaluation method, e.g., AUC ROC. get_params(*deep=True*)[#](#pyod.models.lof.LOF.get_params) Get parameters for this estimator. See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information. ###### Parameters[#](#id416) deepbool, optional (default=True)If True, will return the parameters for this estimator and contained subobjects that are estimators. ###### Returns[#](#id417) paramsmapping of string to anyParameter names mapped to their values. predict(*X*, *return_confidence=False*)[#](#pyod.models.lof.LOF.predict) Predict if a particular sample is an outlier or not. ###### Parameters[#](#id418) Xnumpy array of shape (n_samples, n_features)The input samples. return_confidenceboolean, optional(default=False)If True, also return the confidence of prediction. ###### Returns[#](#id419) outlier_labelsnumpy array of shape (n_samples,)For each observation, tells whether it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers. confidencenumpy array of shape (n_samples,).Only if return_confidence is set to True. predict_confidence(*X*)[#](#pyod.models.lof.LOF.predict_confidence) Predict the model’s confidence in making the same prediction under slightly different training sets. See [[BPVD20](#id839)]. ###### Parameters[#](#id421) Xnumpy array of shape (n_samples, n_features)The input samples. ###### Returns[#](#id422) confidencenumpy array of shape (n_samples,) For each observation, tells how consistently the model would make the same prediction if the training set was perturbed. Return a probability, ranging in [0,1]. predict_proba(*X*, *method='linear'*, *return_confidence=False*)[#](#pyod.models.lof.LOF.predict_proba) Predict the probability of a sample being outlier. Two approaches are possible: 1. simply use Min-max conversion to linearly transform the outlier scores into the range of [0,1]. The model must be fitted first. 2. use unifying scores, see [[BKKSZ11](#id800)]. ###### Parameters[#](#id424) Xnumpy array of shape (n_samples, n_features)The input samples.
methodstr, optional (default=’linear’)probability conversion method. It must be one of ‘linear’ or ‘unify’. return_confidenceboolean, optional(default=False)If True, also return the confidence of prediction. ###### Returns[#](#id425) outlier_probabilitynumpy array of shape (n_samples, n_classes)For each observation, tells whether or not it should be considered as an outlier according to the fitted model. Return the outlier probability, ranging in [0,1]. Note it depends on the number of classes, which is by default 2 classes ([proba of normal, proba of outliers]). set_params(***params*)[#](#pyod.models.lof.LOF.set_params) Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object. See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information. ###### Returns[#](#id426) self : object #### pyod.models.loci module[#](#module-pyod.models.loci) Local Correlation Integral (LOCI). Part of the code is adapted from <https://github.com/Cloudy10/loci>. *class* pyod.models.loci.LOCI(*contamination=0.1*, *alpha=0.5*, *k=3*)[[source]](_modules/pyod/models/loci.html#LOCI)[#](#pyod.models.loci.LOCI) Bases: [`BaseDetector`](index.html#pyod.models.base.BaseDetector) Local Correlation Integral. LOCI is highly effective for detecting outliers and groups of outliers (a.k.a. micro-clusters), and it offers the following advantages and novelties: (a) It provides an automatic, data-dictated cut-off to determine whether a point is an outlier—in contrast, previous methods force users to pick cut-offs, without any hints as to what cut-off value is best for a given dataset. (b) It can provide a LOCI plot for each point; this plot summarizes a wealth of information about the data in the vicinity of the point, determining clusters, micro-clusters, their diameters and their inter-cluster distances. None of the existing outlier-detection methods can match this feature, because they output only a single number for each point: its outlierness score. (c) It can be computed as quickly as the best previous methods. Read more in [[BPKGF03](#id816)]. ##### Parameters[#](#id428) contaminationfloat in (0., 0.5), optional (default=0.1)The amount of contamination of the data set, i.e. the proportion of outliers in the data set. Used when fitting to define the threshold on the decision function. alphafloat, default = 0.5The neighbourhood parameter measures how large of a neighbourhood should be considered “local”. k: int, default = 3An outlier cutoff threshold for determining whether or not a point should be considered an outlier. ##### Attributes[#](#id429) [decision_scores_](#id1056)numpy array of shape (n_samples,)The outlier scores of the training data. The higher, the more abnormal. Outliers tend to have higher scores. This value is available once the detector is fitted. [threshold_](#id1058)floatThe threshold is based on `contamination`. It is the `n_samples * contamination` most abnormal samples in `decision_scores_`. The threshold is calculated for generating binary outlier labels. [labels_](#id1060)int, either 0 or 1The binary labels of the training data. 0 stands for inliers and 1 for outliers/anomalies. It is generated by applying `threshold_` on `decision_scores_`.
##### Examples[#](#examples)
```
>>> from pyod.models.loci import LOCI
>>> from pyod.utils.data import generate_data
>>> n_train = 50
>>> n_test = 50
>>> contamination = 0.1
>>> X_train, y_train, X_test, y_test = generate_data(
...     n_train=n_train, n_test=n_test,
...     contamination=contamination, random_state=42)
>>> clf = LOCI()
>>> clf.fit(X_train)
LOCI(alpha=0.5, contamination=0.1, k=None)
```
decision_function(*X*)[[source]](_modules/pyod/models/loci.html#LOCI.decision_function)[#](#pyod.models.loci.LOCI.decision_function) Predict raw anomaly scores of X using the fitted detector. The anomaly score of an input sample is computed based on the fitted detector. For consistency, outliers are assigned with higher anomaly scores. ###### Parameters[#](#id430) Xnumpy array of shape (n_samples, n_features)The input samples. Sparse matrices are accepted only if they are supported by the base estimator. ###### Returns[#](#id431) anomaly_scoresnumpy array of shape (n_samples,)The anomaly score of the input samples. fit(*X*, *y=None*)[[source]](_modules/pyod/models/loci.html#LOCI.fit)[#](#pyod.models.loci.LOCI.fit) Fit the model using X as training data. ###### Parameters[#](#id432) Xarray, shape (n_samples, n_features)Training data. yIgnoredNot used, present for API consistency by convention. ###### Returns[#](#id433) self : object fit_predict(*X*, *y=None*)[#](#pyod.models.loci.LOCI.fit_predict) DEPRECATED Fit detector first and then predict whether a particular sample is an outlier or not. y is ignored in unsupervised models. Xnumpy array of shape (n_samples, n_features)The input samples. yIgnoredNot used, present for API consistency by convention. outlier_labelsnumpy array of shape (n_samples,)For each observation, tells whether it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers. Deprecated since version 0.6.9: fit_predict will be removed in pyod 0.8.0.; it will be replaced by calling fit function first and then accessing labels_ attribute for consistency. fit_predict_score(*X*, *y*, *scoring='roc_auc_score'*)[#](#pyod.models.loci.LOCI.fit_predict_score) DEPRECATED Fit the detector, predict on samples, and evaluate the model by predefined metrics, e.g., ROC. Xnumpy array of shape (n_samples, n_features)The input samples. yIgnoredNot used, present for API consistency by convention. scoringstr, optional (default=’roc_auc_score’)Evaluation metric: * ‘roc_auc_score’: ROC score * ‘prc_n_score’: Precision @ rank n score score : float Deprecated since version 0.6.9: fit_predict_score will be removed in pyod 0.8.0.; it will be replaced by calling fit function first and then accessing labels_ attribute for consistency. Scoring could be done by calling an evaluation method, e.g., AUC ROC. get_params(*deep=True*)[#](#pyod.models.loci.LOCI.get_params) Get parameters for this estimator. See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information. ###### Parameters[#](#id434) deepbool, optional (default=True)If True, will return the parameters for this estimator and contained subobjects that are estimators. ###### Returns[#](#id435) paramsmapping of string to anyParameter names mapped to their values. predict(*X*, *return_confidence=False*)[#](#pyod.models.loci.LOCI.predict) Predict if a particular sample is an outlier or not. ###### Parameters[#](#id436) Xnumpy array of shape (n_samples, n_features)The input samples.
return_confidenceboolean, optional(default=False)If True, also return the confidence of prediction. ###### Returns[#](#id437) outlier_labelsnumpy array of shape (n_samples,)For each observation, tells whether it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers. confidencenumpy array of shape (n_samples,).Only if return_confidence is set to True. predict_confidence(*X*)[#](#pyod.models.loci.LOCI.predict_confidence) Predict the model’s confidence in making the same prediction under slightly different training sets. See [[BPVD20](#id839)]. ###### Parameters[#](#id439) Xnumpy array of shape (n_samples, n_features)The input samples. ###### Returns[#](#id440) confidencenumpy array of shape (n_samples,) For each observation, tells how consistently the model would make the same prediction if the training set was perturbed. Return a probability, ranging in [0,1]. predict_proba(*X*, *method='linear'*, *return_confidence=False*)[#](#pyod.models.loci.LOCI.predict_proba) Predict the probability of a sample being outlier. Two approaches are possible: 1. simply use Min-max conversion to linearly transform the outlier scores into the range of [0,1]. The model must be fitted first. 2. use unifying scores, see [[BKKSZ11](#id800)]. ###### Parameters[#](#id442) Xnumpy array of shape (n_samples, n_features)The input samples. methodstr, optional (default=’linear’)probability conversion method. It must be one of ‘linear’ or ‘unify’. return_confidenceboolean, optional(default=False)If True, also return the confidence of prediction. ###### Returns[#](#id443) outlier_probabilitynumpy array of shape (n_samples, n_classes)For each observation, tells whether or not it should be considered as an outlier according to the fitted model. Return the outlier probability, ranging in [0,1]. Note it depends on the number of classes, which is by default 2 classes ([proba of normal, proba of outliers]). set_params(***params*)[#](#pyod.models.loci.LOCI.set_params) Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object. See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information. ###### Returns[#](#id444) self : object #### pyod.models.lunar module[#](#module-pyod.models.lunar) LUNAR: Unifying Local Outlier Detection Methods via Graph Neural Networks *class* pyod.models.lunar.LUNAR(*model_type='WEIGHT'*, *n_neighbours=5*, *negative_sampling='MIXED'*, *val_size=0.1*, *scaler=MinMaxScaler()*, *epsilon=0.1*, *proportion=1.0*, *n_epochs=200*, *lr=0.001*, *wd=0.1*, *verbose=0*)[[source]](_modules/pyod/models/lunar.html#LUNAR)[#](#pyod.models.lunar.LUNAR) Bases: [`BaseDetector`](index.html#pyod.models.base.BaseDetector) LUNAR class for outlier detection. See <https://www.aaai.org/AAAI22Papers/AAAI-51.GoodgeA.pdf> for details. For an observation, its ordered list of distances to its k nearest neighbours is input to a neural network, with one of the following outputs: 1. SCORE_MODEL: the network directly outputs the anomaly score. 2. WEIGHT_MODEL: the network outputs a set of weights for the k distances; the anomaly score is then the sum of weighted distances (see the sketch below). See [[BGHNN22](#id846)] for details.
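A toy illustration of the WEIGHT_MODEL scoring rule described above; the distances and weights below are made-up values, not the output of a trained network:

```
import numpy as np

d = np.array([0.2, 0.5, 0.9])  # ordered distances to the k = 3 nearest neighbours (made up)
w = np.array([0.1, 0.3, 0.6])  # weights the network would output for these distances (made up)
score = float(np.sum(w * d))   # weighted sum of distances: 0.02 + 0.15 + 0.54 = 0.71
```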
##### Parameters[#](#id446) model_type: str in [‘WEIGHT’, ‘SCORE’], optional (default = ‘WEIGHT’)Whether to use WEIGHT_MODEL or SCORE_MODEL for anomaly scoring. n_neighbours: int, optional (default = 5)Number of neighbors to use by default for k neighbors queries. negative_sampling: str in [‘UNIFORM’, ‘SUBSPACE’, ‘MIXED’], optional (default = ‘MIXED’)Type of negative samples to use between: * ‘UNIFORM’: uniformly distributed samples * ‘SUBSPACE’: subspace perturbation (additive random noise in a subset of features) * ‘MIXED’: a combination of both types of samples val_size: float in [0,1], optional (default = 0.1)Proportion of samples to be used for model validation scaler: object in {StandardScaler(), MinMaxScaler()}, optional (default = MinMaxScaler())Method of data normalization epsilon: float, optional (default = 0.1)Hyper-parameter for the generation of negative samples. A smaller epsilon results in negative samples more similar to normal samples. proportion: float, optional (default = 1.0)Hyper-parameter for the proportion of negative samples to use relative to the number of normal training samples. n_epochs: int, optional (default = 200)Number of epochs to train neural network. lr: float, optional (default = 0.001)Learning rate. wd: float, optional (default = 0.1)Weight decay. verbose: int in {0,1}, optional (default = 0)To view or hide training progress ##### Attributes[#](#id447) decision_function(*X*)[[source]](_modules/pyod/models/lunar.html#LUNAR.decision_function)[#](#pyod.models.lunar.LUNAR.decision_function) Predict raw anomaly score of X using the fitted detector. For consistency, outliers are assigned with larger anomaly scores. ###### Parameters Xnumpy array of shape (n_samples, n_features)The training input samples. ###### Returns[#](#id448) anomaly_scoresnumpy array of shape (n_samples,)The anomaly score of the input samples. fit(*X*, *y=None*)[[source]](_modules/pyod/models/lunar.html#LUNAR.fit)[#](#pyod.models.lunar.LUNAR.fit) Fit detector. y is assumed to be 0 for all training samples. ###### Parameters Xnumpy array of shape (n_samples, n_features)The input samples. yIgnoredOverwritten with 0 for all training samples (assumed to be normal). ###### Returns[#](#id449) selfobjectFitted estimator. fit_predict(*X*, *y=None*)[#](#pyod.models.lunar.LUNAR.fit_predict) DEPRECATED Fit detector first and then predict whether a particular sample is an outlier or not. y is ignored in unsupervised models. Xnumpy array of shape (n_samples, n_features)The input samples. yIgnoredNot used, present for API consistency by convention. outlier_labelsnumpy array of shape (n_samples,)For each observation, tells whether it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers. Deprecated since version 0.6.9: fit_predict will be removed in pyod 0.8.0.; it will be replaced by calling fit function first and then accessing labels_ attribute for consistency. fit_predict_score(*X*, *y*, *scoring='roc_auc_score'*)[#](#pyod.models.lunar.LUNAR.fit_predict_score) DEPRECATED Fit the detector, predict on samples, and evaluate the model by predefined metrics, e.g., ROC. Xnumpy array of shape (n_samples, n_features)The input samples. yIgnoredNot used, present for API consistency by convention.
scoringstr, optional (default=’roc_auc_score’)Evaluation metric: * ‘roc_auc_score’: ROC score * ‘prc_n_score’: Precision @ rank n score score : float Deprecated since version 0.6.9: fit_predict_score will be removed in pyod 0.8.0.; it will be replaced by calling fit function first and then accessing labels_ attribute for consistency. Scoring could be done by calling an evaluation method, e.g., AUC ROC. get_params(*deep=True*)[#](#pyod.models.lunar.LUNAR.get_params) Get parameters for this estimator. See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information. ###### Parameters[#](#id450) deepbool, optional (default=True)If True, will return the parameters for this estimator and contained subobjects that are estimators. ###### Returns[#](#id451) paramsmapping of string to anyParameter names mapped to their values. predict(*X*, *return_confidence=False*)[#](#pyod.models.lunar.LUNAR.predict) Predict if a particular sample is an outlier or not. ###### Parameters[#](#id452) Xnumpy array of shape (n_samples, n_features)The input samples. return_confidenceboolean, optional(default=False)If True, also return the confidence of prediction. ###### Returns[#](#id453) outlier_labelsnumpy array of shape (n_samples,)For each observation, tells whether it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers. confidencenumpy array of shape (n_samples,).Only if return_confidence is set to True. predict_confidence(*X*)[#](#pyod.models.lunar.LUNAR.predict_confidence) Predict the model’s confidence in making the same prediction under slightly different training sets. See [[BPVD20](#id839)]. ###### Parameters[#](#id455) Xnumpy array of shape (n_samples, n_features)The input samples. ###### Returns[#](#id456) confidencenumpy array of shape (n_samples,) For each observation, tells how consistently the model would make the same prediction if the training set was perturbed. Return a probability, ranging in [0,1]. predict_proba(*X*, *method='linear'*, *return_confidence=False*)[#](#pyod.models.lunar.LUNAR.predict_proba) Predict the probability of a sample being outlier. Two approaches are possible: 1. simply use Min-max conversion to linearly transform the outlier scores into the range of [0,1]. The model must be fitted first. 2. use unifying scores, see [[BKKSZ11](#id800)]. ###### Parameters[#](#id458) Xnumpy array of shape (n_samples, n_features)The input samples. methodstr, optional (default=’linear’)probability conversion method. It must be one of ‘linear’ or ‘unify’. return_confidenceboolean, optional(default=False)If True, also return the confidence of prediction. ###### Returns[#](#id459) outlier_probabilitynumpy array of shape (n_samples, n_classes)For each observation, tells whether or not it should be considered as an outlier according to the fitted model. Return the outlier probability, ranging in [0,1]. Note it depends on the number of classes, which is by default 2 classes ([proba of normal, proba of outliers]). set_params(***params*)[#](#pyod.models.lunar.LUNAR.set_params) Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object. See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information. 
###### Returns[#](#id460)
self : object

#### pyod.models.lscp module[#](#module-pyod.models.lscp)
Locally Selective Combination of Parallel Outlier Ensembles (LSCP). Adapted from the original implementation.

*class* pyod.models.lscp.LSCP(*detector_list*, *local_region_size=30*, *local_max_features=1.0*, *n_bins=10*, *random_state=None*, *contamination=0.1*)[[source]](_modules/pyod/models/lscp.html#LSCP)[#](#pyod.models.lscp.LSCP)
Bases: [`BaseDetector`](index.html#pyod.models.base.BaseDetector)
Locally Selective Combination in Parallel Outlier Ensembles. LSCP is an unsupervised parallel outlier detection ensemble which selects competent detectors in the local region of a test instance. This implementation uses an Average of Maximum strategy. First, a heterogeneous list of base detectors is fit to the training data, and a pseudo ground truth for each training instance is generated by taking the maximum outlier score. For each test instance:
1) The local region is defined to be the set of nearest training points in randomly sampled feature subspaces which occur more frequently than a defined threshold over multiple iterations.
2) Using the local region, a local pseudo ground truth is defined and the Pearson correlation is calculated between each base detector’s training outlier scores and the pseudo ground truth.
3) A histogram is built out of Pearson correlation scores; detectors in the largest bin are selected as competent base detectors for the given test instance.
4) The average outlier score of the selected competent detectors is taken to be the final score.
See [[BZNHL19](#id817)] for details.

##### Parameters[#](#id462)
detector_listList, length must be greater than 1Base unsupervised outlier detectors from PyOD. (Note: requires fit and decision_function methods)
local_region_sizeint, optional (default=30)Number of training points to consider in each iteration of the local region generation process (30 by default).
local_max_featuresfloat in (0.5, 1.), optional (default=1.0)Maximum proportion of number of features to consider when defining the local region (1.0 by default).
n_binsint, optional (default=10)Number of bins to use when selecting the local region
random_stateRandomState, optional (default=None)A random number generator instance to define the state of the random permutations generator.
contaminationfloat in (0., 0.5), optional (default=0.1)The amount of contamination of the data set, i.e. the proportion of outliers in the data set. Used when fitting to define the threshold on the decision function (0.1 by default).

##### Attributes[#](#id463)
[decision_scores_](#id1062)numpy array of shape (n_samples,)The outlier scores of the training data. The higher, the more abnormal. Outliers tend to have higher scores. This value is available once the detector is fitted.
[threshold_](#id1064)floatThe threshold is based on `contamination`. It is the `n_samples * contamination` most abnormal samples in `decision_scores_`. The threshold is calculated for generating binary outlier labels.
[labels_](#id1066)int, either 0 or 1The binary labels of the training data. 0 stands for inliers and 1 for outliers/anomalies. It is generated by applying `threshold_` on `decision_scores_`.

##### Examples[#](#id464)
```
>>> from pyod.utils.data import generate_data
>>> from pyod.utils.utility import standardizer
>>> from pyod.models.lscp import LSCP
>>> from pyod.models.lof import LOF
>>> X_train, y_train, X_test, y_test = generate_data(
...     n_train=50, n_test=50,
...     contamination=0.1, random_state=42)
>>> X_train, X_test = standardizer(X_train, X_test)
>>> detector_list = [LOF(), LOF()]
>>> clf = LSCP(detector_list)
>>> clf.fit(X_train)
LSCP(...)
```

decision_function(*X*)[[source]](_modules/pyod/models/lscp.html#LSCP.decision_function)[#](#pyod.models.lscp.LSCP.decision_function)
Predict raw anomaly score of X using the fitted detector. The anomaly score of an input sample is computed based on different detector algorithms. For consistency, outliers are assigned with larger anomaly scores.

###### Parameters[#](#id465)
Xnumpy array of shape (n_samples, n_features)The training input samples. Sparse matrices are accepted only if they are supported by the base estimator.

###### Returns[#](#id466)
anomaly_scoresnumpy array of shape (n_samples,)The anomaly score of the input samples.

fit(*X*, *y=None*)[[source]](_modules/pyod/models/lscp.html#LSCP.fit)[#](#pyod.models.lscp.LSCP.fit)
Fit detector. y is ignored in unsupervised methods.

###### Parameters[#](#id467)
Xnumpy array of shape (n_samples, n_features)The input samples.
yIgnoredNot used, present for API consistency by convention.

###### Returns[#](#id468)
selfobjectFitted estimator.

fit_predict(*X*, *y=None*)[#](#pyod.models.lscp.LSCP.fit_predict)
DEPRECATED. Fit detector first and then predict whether a particular sample is an outlier or not. y is ignored in unsupervised models.
Xnumpy array of shape (n_samples, n_features)The input samples.
yIgnoredNot used, present for API consistency by convention.
outlier_labelsnumpy array of shape (n_samples,)For each observation, tells whether it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers.
Deprecated since version 0.6.9: fit_predict will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency.

fit_predict_score(*X*, *y*, *scoring='roc_auc_score'*)[#](#pyod.models.lscp.LSCP.fit_predict_score)
DEPRECATED. Fit the detector, predict on samples, and evaluate the model by predefined metrics, e.g., ROC.
Xnumpy array of shape (n_samples, n_features)The input samples.
yIgnoredNot used, present for API consistency by convention.
scoringstr, optional (default=’roc_auc_score’)Evaluation metric:
* ‘roc_auc_score’: ROC score
* ‘prc_n_score’: Precision @ rank n score
score : float
Deprecated since version 0.6.9: fit_predict_score will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency. Scoring could be done by calling an evaluation method, e.g., AUC ROC.

get_params(*deep=True*)[#](#pyod.models.lscp.LSCP.get_params)
Get parameters for this estimator. See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information.

###### Parameters[#](#id469)
deepbool, optional (default=True)If True, will return the parameters for this estimator and contained subobjects that are estimators.

###### Returns[#](#id470)
paramsmapping of string to anyParameter names mapped to their values.

predict(*X*, *return_confidence=False*)[#](#pyod.models.lscp.LSCP.predict)
Predict if a particular sample is an outlier or not.

###### Parameters[#](#id471)
Xnumpy array of shape (n_samples, n_features)The input samples.
return_confidenceboolean, optional(default=False)If True, also return the confidence of prediction.

###### Returns[#](#id472)
outlier_labelsnumpy array of shape (n_samples,)For each observation, tells whether it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers.
confidencenumpy array of shape (n_samples,).Only if return_confidence is set to True.

predict_confidence(*X*)[#](#pyod.models.lscp.LSCP.predict_confidence)
Predict the model’s confidence in making the same prediction under slightly different training sets. See [[BPVD20](#id839)].

###### Parameters[#](#id474)
Xnumpy array of shape (n_samples, n_features)The input samples.

###### Returns[#](#id475)
confidencenumpy array of shape (n_samples,)For each observation, tells how consistently the model would make the same prediction if the training set was perturbed. Return a probability, ranging in [0,1].

predict_proba(*X*, *method='linear'*, *return_confidence=False*)[#](#pyod.models.lscp.LSCP.predict_proba)
Predict the probability of a sample being outlier. Two approaches are possible:
1. simply use Min-max conversion to linearly transform the outlier scores into the range of [0,1]. The model must be fitted first.
2. use unifying scores, see [[BKKSZ11](#id800)].

###### Parameters[#](#id477)
Xnumpy array of shape (n_samples, n_features)The input samples.
methodstr, optional (default=’linear’)probability conversion method. It must be one of ‘linear’ or ‘unify’.
return_confidenceboolean, optional(default=False)If True, also return the confidence of prediction.

###### Returns[#](#id478)
outlier_probabilitynumpy array of shape (n_samples, n_classes)For each observation, tells whether or not it should be considered as an outlier according to the fitted model. Return the outlier probability, ranging in [0,1]. Note it depends on the number of classes, which is by default 2 classes ([proba of normal, proba of outliers]).

set_params(***params*)[#](#pyod.models.lscp.LSCP.set_params)
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object. See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information.

###### Returns[#](#id479)
self : object

#### pyod.models.mad module[#](#module-pyod.models.mad)
Median Absolute Deviation (MAD) Algorithm. Strictly for Univariate Data.

*class* pyod.models.mad.MAD(*threshold=3.5*)[[source]](_modules/pyod/models/mad.html#MAD)[#](#pyod.models.mad.MAD)
Bases: [`BaseDetector`](index.html#pyod.models.base.BaseDetector)
Median Absolute Deviation: for measuring the distances between data points and the median in terms of median distance. See [[BIH93](#id833)] for details.

##### Parameters[#](#id481)
thresholdfloat, optional (default=3.5)The modified z-score to use as a threshold. Observations with a modified z-score (based on the median absolute deviation) greater than this value will be classified as outliers.

##### Attributes[#](#id482)
[decision_scores_](#id1068)numpy array of shape (n_samples,)The outlier scores of the training data. The higher, the more abnormal. Outliers tend to have higher scores. This value is available once the detector is fitted.
[threshold_](#id1070)floatThe modified z-score to use as a threshold. Observations with a modified z-score (based on the median absolute deviation) greater than this value will be classified as outliers.
[labels_](#id1072)int, either 0 or 1The binary labels of the training data. 0 stands for inliers and 1 for outliers/anomalies. It is generated by applying `threshold_` on `decision_scores_`.
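A minimal usage sketch. MAD is strictly univariate, so the input must have exactly one feature; the data below is illustrative:
```
>>> import numpy as np
>>> from pyod.models.mad import MAD
>>> X = np.array([[1.0], [1.1], [0.9], [1.2], [10.0]])  # shape (n_samples, 1)
>>> clf = MAD(threshold=3.5)
>>> clf = clf.fit(X)
>>> train_scores = clf.decision_scores_  # modified z-scores; 10.0 scores highest
>>> labels = clf.predict(np.array([[1.05], [8.0]]))  # 0 = inlier, 1 = outlier
```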
decision_function(*X*)[[source]](_modules/pyod/models/mad.html#MAD.decision_function)[#](#pyod.models.mad.MAD.decision_function)
Predict raw anomaly score of X using the fitted detector. The anomaly score of an input sample is computed based on different detector algorithms. For consistency, outliers are assigned with larger anomaly scores.

###### Parameters[#](#id483)
Xnumpy array of shape (n_samples, n_features)The training input samples. Sparse matrices are accepted only if they are supported by the base estimator. Note that n_features must equal 1.

###### Returns[#](#id484)
anomaly_scoresnumpy array of shape (n_samples,)The anomaly score of the input samples.

fit(*X*, *y=None*)[[source]](_modules/pyod/models/mad.html#MAD.fit)[#](#pyod.models.mad.MAD.fit)
Fit detector. y is ignored in unsupervised methods.

###### Parameters[#](#id485)
Xnumpy array of shape (n_samples, n_features)The input samples. Note that n_features must equal 1.
yIgnoredNot used, present for API consistency by convention.

###### Returns[#](#id486)
selfobjectFitted estimator.

fit_predict(*X*, *y=None*)[#](#pyod.models.mad.MAD.fit_predict)
DEPRECATED. Fit detector first and then predict whether a particular sample is an outlier or not. y is ignored in unsupervised models.
Xnumpy array of shape (n_samples, n_features)The input samples.
yIgnoredNot used, present for API consistency by convention.
outlier_labelsnumpy array of shape (n_samples,)For each observation, tells whether it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers.
Deprecated since version 0.6.9: fit_predict will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency.

fit_predict_score(*X*, *y*, *scoring='roc_auc_score'*)[#](#pyod.models.mad.MAD.fit_predict_score)
DEPRECATED. Fit the detector, predict on samples, and evaluate the model by predefined metrics, e.g., ROC.
Xnumpy array of shape (n_samples, n_features)The input samples.
yIgnoredNot used, present for API consistency by convention.
scoringstr, optional (default=’roc_auc_score’)Evaluation metric:
* ‘roc_auc_score’: ROC score
* ‘prc_n_score’: Precision @ rank n score
score : float
Deprecated since version 0.6.9: fit_predict_score will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency. Scoring could be done by calling an evaluation method, e.g., AUC ROC.

get_params(*deep=True*)[#](#pyod.models.mad.MAD.get_params)
Get parameters for this estimator. See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information.

###### Parameters[#](#id487)
deepbool, optional (default=True)If True, will return the parameters for this estimator and contained subobjects that are estimators.

###### Returns[#](#id488)
paramsmapping of string to anyParameter names mapped to their values.

predict(*X*, *return_confidence=False*)[#](#pyod.models.mad.MAD.predict)
Predict if a particular sample is an outlier or not.

###### Parameters[#](#id489)
Xnumpy array of shape (n_samples, n_features)The input samples.
return_confidenceboolean, optional(default=False)If True, also return the confidence of prediction.

###### Returns[#](#id490)
outlier_labelsnumpy array of shape (n_samples,)For each observation, tells whether it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers.
confidencenumpy array of shape (n_samples,).Only if return_confidence is set to True.

predict_confidence(*X*)[#](#pyod.models.mad.MAD.predict_confidence)
Predict the model’s confidence in making the same prediction under slightly different training sets. See [[BPVD20](#id839)].

###### Parameters[#](#id492)
Xnumpy array of shape (n_samples, n_features)The input samples.

###### Returns[#](#id493)
confidencenumpy array of shape (n_samples,)For each observation, tells how consistently the model would make the same prediction if the training set was perturbed. Return a probability, ranging in [0,1].

predict_proba(*X*, *method='linear'*, *return_confidence=False*)[#](#pyod.models.mad.MAD.predict_proba)
Predict the probability of a sample being outlier. Two approaches are possible:
1. simply use Min-max conversion to linearly transform the outlier scores into the range of [0,1]. The model must be fitted first.
2. use unifying scores, see [[BKKSZ11](#id800)].

###### Parameters[#](#id495)
Xnumpy array of shape (n_samples, n_features)The input samples.
methodstr, optional (default=’linear’)probability conversion method. It must be one of ‘linear’ or ‘unify’.
return_confidenceboolean, optional(default=False)If True, also return the confidence of prediction.

###### Returns[#](#id496)
outlier_probabilitynumpy array of shape (n_samples, n_classes)For each observation, tells whether or not it should be considered as an outlier according to the fitted model. Return the outlier probability, ranging in [0,1]. Note it depends on the number of classes, which is by default 2 classes ([proba of normal, proba of outliers]).

set_params(***params*)[#](#pyod.models.mad.MAD.set_params)
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object. See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information.

###### Returns[#](#id497)
self : object

#### pyod.models.mcd module[#](#module-pyod.models.mcd)
Outlier Detection with Minimum Covariance Determinant (MCD)

*class* pyod.models.mcd.MCD(*contamination=0.1*, *store_precision=True*, *assume_centered=False*, *support_fraction=None*, *random_state=None*)[[source]](_modules/pyod/models/mcd.html#MCD)[#](#pyod.models.mcd.MCD)
Bases: [`BaseDetector`](index.html#pyod.models.base.BaseDetector)
Detecting outliers in a Gaussian distributed dataset using Minimum Covariance Determinant (MCD): robust estimator of covariance. The Minimum Covariance Determinant covariance estimator is to be applied on Gaussian-distributed data, but could still be relevant on data drawn from a unimodal, symmetric distribution. It is not meant to be used with multi-modal data (the algorithm used to fit a MinCovDet object is likely to fail in such a case). One should consider projection pursuit methods to deal with multi-modal datasets. First fit a minimum covariance determinant model and then compute the Mahalanobis distance as the outlier degree of the data. See [[BHR04](#id812), [BRD99](#id811)] for details.
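A minimal usage sketch on synthetic Gaussian data from `pyod.utils.data.generate_data`, mirroring the LSCP example above; the parameter choices are illustrative:
```
>>> from pyod.utils.data import generate_data
>>> from pyod.models.mcd import MCD
>>> X_train, y_train, X_test, y_test = generate_data(
...     n_train=200, n_test=100, contamination=0.1, random_state=42)
>>> clf = MCD(contamination=0.1, random_state=42)
>>> clf = clf.fit(X_train)
>>> test_scores = clf.decision_function(X_test)  # Mahalanobis-based scores
>>> test_labels = clf.predict(X_test)            # 0 = inlier, 1 = outlier
```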
##### Parameters[#](#id499)
contaminationfloat in (0., 0.5), optional (default=0.1)The amount of contamination of the data set, i.e. the proportion of outliers in the data set. Used when fitting to define the threshold on the decision function.
store_precisionboolSpecify if the estimated precision is stored.
assume_centeredboolIf True, the support of the robust location and the covariance estimates is computed, and a covariance estimate is recomputed from it, without centering the data. Useful to work with data whose mean is significantly equal to zero but is not exactly zero. If False, the robust location and covariance are directly computed with the FastMCD algorithm without additional treatment.
support_fractionfloat, 0 < support_fraction < 1The proportion of points to be included in the support of the raw MCD estimate. Default is None, which implies that the minimum value of support_fraction will be used within the algorithm: [n_sample + n_features + 1] / 2
random_stateint, RandomState instance or None, optional (default=None)If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random.

##### Attributes[#](#id500)
[raw_location_](#id1074)array-like, shape (n_features,)The raw robust estimated location before correction and re-weighting.
[raw_covariance_](#id1076)array-like, shape (n_features, n_features)The raw robust estimated covariance before correction and re-weighting.
[raw_support_](#id1078)array-like, shape (n_samples,)A mask of the observations that have been used to compute the raw robust estimates of location and shape, before correction and re-weighting.
[location_](#id1080)array-like, shape (n_features,)Estimated robust location
[covariance_](#id1082)array-like, shape (n_features, n_features)Estimated robust covariance matrix
[precision_](#id1084)array-like, shape (n_features, n_features)Estimated pseudo inverse matrix. (stored only if store_precision is True)
[support_](#id1086)array-like, shape (n_samples,)A mask of the observations that have been used to compute the robust estimates of location and shape.
[decision_scores_](#id1088)numpy array of shape (n_samples,)The outlier scores of the training data. The higher, the more abnormal. Outliers tend to have higher scores. This value is available once the detector is fitted. Mahalanobis distances of the training set (on which `fit` is called) observations.
[threshold_](#id1090)floatThe threshold is based on `contamination`. It is the `n_samples * contamination` most abnormal samples in `decision_scores_`. The threshold is calculated for generating binary outlier labels.
[labels_](#id1092)int, either 0 or 1The binary labels of the training data. 0 stands for inliers and 1 for outliers/anomalies. It is generated by applying `threshold_` on `decision_scores_`.

decision_function(*X*)[[source]](_modules/pyod/models/mcd.html#MCD.decision_function)[#](#pyod.models.mcd.MCD.decision_function)
Predict raw anomaly score of X using the fitted detector. The anomaly score of an input sample is computed based on different detector algorithms. For consistency, outliers are assigned with larger anomaly scores.

###### Parameters[#](#id501)
Xnumpy array of shape (n_samples, n_features)The training input samples. Sparse matrices are accepted only if they are supported by the base estimator.

###### Returns[#](#id502)
anomaly_scoresnumpy array of shape (n_samples,)The anomaly score of the input samples.

fit(*X*, *y=None*)[[source]](_modules/pyod/models/mcd.html#MCD.fit)[#](#pyod.models.mcd.MCD.fit)
Fit detector. y is ignored in unsupervised methods.

###### Parameters[#](#id503)
Xnumpy array of shape (n_samples, n_features)The input samples.
yIgnoredNot used, present for API consistency by convention.

###### Returns[#](#id504)
selfobjectFitted estimator.

fit_predict(*X*, *y=None*)[#](#pyod.models.mcd.MCD.fit_predict)
DEPRECATED. Fit detector first and then predict whether a particular sample is an outlier or not. y is ignored in unsupervised models.
Xnumpy array of shape (n_samples, n_features)The input samples.
yIgnoredNot used, present for API consistency by convention.
outlier_labelsnumpy array of shape (n_samples,)For each observation, tells whether it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers.
Deprecated since version 0.6.9: fit_predict will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency.

fit_predict_score(*X*, *y*, *scoring='roc_auc_score'*)[#](#pyod.models.mcd.MCD.fit_predict_score)
DEPRECATED. Fit the detector, predict on samples, and evaluate the model by predefined metrics, e.g., ROC.
Xnumpy array of shape (n_samples, n_features)The input samples.
yIgnoredNot used, present for API consistency by convention.
scoringstr, optional (default=’roc_auc_score’)Evaluation metric:
* ‘roc_auc_score’: ROC score
* ‘prc_n_score’: Precision @ rank n score
score : float
Deprecated since version 0.6.9: fit_predict_score will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency. Scoring could be done by calling an evaluation method, e.g., AUC ROC.

get_params(*deep=True*)[#](#pyod.models.mcd.MCD.get_params)
Get parameters for this estimator. See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information.

###### Parameters[#](#id505)
deepbool, optional (default=True)If True, will return the parameters for this estimator and contained subobjects that are estimators.

###### Returns[#](#id506)
paramsmapping of string to anyParameter names mapped to their values.

predict(*X*, *return_confidence=False*)[#](#pyod.models.mcd.MCD.predict)
Predict if a particular sample is an outlier or not.

###### Parameters[#](#id507)
Xnumpy array of shape (n_samples, n_features)The input samples.
return_confidenceboolean, optional(default=False)If True, also return the confidence of prediction.

###### Returns[#](#id508)
outlier_labelsnumpy array of shape (n_samples,)For each observation, tells whether it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers.
confidencenumpy array of shape (n_samples,).Only if return_confidence is set to True.

predict_confidence(*X*)[#](#pyod.models.mcd.MCD.predict_confidence)
Predict the model’s confidence in making the same prediction under slightly different training sets. See [[BPVD20](#id839)].

###### Parameters[#](#id510)
Xnumpy array of shape (n_samples, n_features)The input samples.

###### Returns[#](#id511)
confidencenumpy array of shape (n_samples,)For each observation, tells how consistently the model would make the same prediction if the training set was perturbed. Return a probability, ranging in [0,1].
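For intuition, the ‘linear’ option of `predict_proba` (documented next) amounts to a min-max rescaling of the raw decision scores into [0,1]. The following sketch is illustrative only, not PyOD’s internal implementation; the variable names and the clipping step are assumptions:
```
>>> import numpy as np
>>> train_scores = np.array([0.2, 0.9, 1.5, 3.7])  # e.g., clf.decision_scores_
>>> test_scores = np.array([0.5, 3.0])             # e.g., clf.decision_function(X_test)
>>> lo, hi = train_scores.min(), train_scores.max()
>>> proba_outlier = np.clip((test_scores - lo) / (hi - lo), 0, 1)
>>> proba = np.column_stack([1 - proba_outlier, proba_outlier])  # [P(normal), P(outlier)]
```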
predict_proba(*X*, *method='linear'*, *return_confidence=False*)[#](#pyod.models.mcd.MCD.predict_proba)
Predict the probability of a sample being outlier. Two approaches are possible:
1. simply use Min-max conversion to linearly transform the outlier scores into the range of [0,1]. The model must be fitted first.
2. use unifying scores, see [[BKKSZ11](#id800)].

###### Parameters[#](#id513)
Xnumpy array of shape (n_samples, n_features)The input samples.
methodstr, optional (default=’linear’)probability conversion method. It must be one of ‘linear’ or ‘unify’.
return_confidenceboolean, optional(default=False)If True, also return the confidence of prediction.

###### Returns[#](#id514)
outlier_probabilitynumpy array of shape (n_samples, n_classes)For each observation, tells whether or not it should be considered as an outlier according to the fitted model. Return the outlier probability, ranging in [0,1]. Note it depends on the number of classes, which is by default 2 classes ([proba of normal, proba of outliers]).

set_params(***params*)[#](#pyod.models.mcd.MCD.set_params)
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object. See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information.

###### Returns[#](#id515)
self : object

#### pyod.models.mo_gaal module[#](#module-pyod.models.mo_gaal)
Multiple-Objective Generative Adversarial Active Learning. Part of the codes are adapted from <https://github.com/leibinghe/GAAL-based-outlier-detection>.

*class* pyod.models.mo_gaal.MO_GAAL(*k=10*, *stop_epochs=20*, *lr_d=0.01*, *lr_g=0.0001*, *momentum=0.9*, *contamination=0.1*)[[source]](_modules/pyod/models/mo_gaal.html#MO_GAAL)[#](#pyod.models.mo_gaal.MO_GAAL)
Bases: [`BaseDetector`](index.html#pyod.models.base.BaseDetector)
Multi-Objective Generative Adversarial Active Learning. MO_GAAL directly generates informative potential outliers to assist the classifier in describing a boundary that can separate outliers from normal data effectively. Moreover, to prevent the generator from falling into the mode collapsing problem, the network structure of SO-GAAL is expanded from a single generator (SO-GAAL) to multiple generators with different objectives (MO-GAAL) to generate a reasonable reference distribution for the whole dataset. Read more in [[BLLZ+19](#id818)].

##### Parameters[#](#id517)
contaminationfloat in (0., 0.5), optional (default=0.1)The amount of contamination of the data set, i.e. the proportion of outliers in the data set. Used when fitting to define the threshold on the decision function.
kint, optional (default=10)The number of sub generators.
stop_epochsint, optional (default=20)The number of epochs of training. The total number of epochs equals three times stop_epochs.
lr_dfloat, optional (default=0.01)The learning rate of the discriminator.
lr_gfloat, optional (default=0.0001)The learning rate of the generator.
momentumfloat, optional (default=0.9)The momentum parameter for SGD.

##### Attributes[#](#id518)
[decision_scores_](#id1094)numpy array of shape (n_samples,)The outlier scores of the training data. The higher, the more abnormal. Outliers tend to have higher scores. This value is available once the detector is fitted.
[threshold_](#id1096)floatThe threshold is based on `contamination`. It is the `n_samples * contamination` most abnormal samples in `decision_scores_`. The threshold is calculated for generating binary outlier labels.
[labels_](#id1098)int, either 0 or 1The binary labels of the training data. 0 stands for inliers and 1 for outliers/anomalies. It is generated by applying `threshold_` on `decision_scores_`.
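A minimal usage sketch. GAN training is comparatively slow, so the settings below are deliberately small and purely illustrative; it assumes the deep-learning backend required by this model is installed:
```
>>> from pyod.utils.data import generate_data
>>> from pyod.models.mo_gaal import MO_GAAL
>>> X_train, y_train, X_test, y_test = generate_data(
...     n_train=200, n_test=100, contamination=0.1, random_state=42)
>>> clf = MO_GAAL(k=3, stop_epochs=2, contamination=0.1)  # small k, few epochs
>>> clf = clf.fit(X_train)
>>> test_scores = clf.decision_function(X_test)  # higher = more abnormal
```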
decision_function(*X*)[[source]](_modules/pyod/models/mo_gaal.html#MO_GAAL.decision_function)[#](#pyod.models.mo_gaal.MO_GAAL.decision_function)
Predict raw anomaly score of X using the fitted detector. The anomaly score of an input sample is computed based on different detector algorithms. For consistency, outliers are assigned with larger anomaly scores.

###### Parameters[#](#id519)
Xnumpy array of shape (n_samples, n_features)The training input samples. Sparse matrices are accepted only if they are supported by the base estimator.

###### Returns[#](#id520)
anomaly_scoresnumpy array of shape (n_samples,)The anomaly score of the input samples.

fit(*X*, *y=None*)[[source]](_modules/pyod/models/mo_gaal.html#MO_GAAL.fit)[#](#pyod.models.mo_gaal.MO_GAAL.fit)
Fit detector. y is ignored in unsupervised methods.

###### Parameters[#](#id521)
Xnumpy array of shape (n_samples, n_features)The input samples.
yIgnoredNot used, present for API consistency by convention.

###### Returns[#](#id522)
selfobjectFitted estimator.

fit_predict(*X*, *y=None*)[#](#pyod.models.mo_gaal.MO_GAAL.fit_predict)
DEPRECATED. Fit detector first and then predict whether a particular sample is an outlier or not. y is ignored in unsupervised models.
Xnumpy array of shape (n_samples, n_features)The input samples.
yIgnoredNot used, present for API consistency by convention.
outlier_labelsnumpy array of shape (n_samples,)For each observation, tells whether it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers.
Deprecated since version 0.6.9: fit_predict will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency.

fit_predict_score(*X*, *y*, *scoring='roc_auc_score'*)[#](#pyod.models.mo_gaal.MO_GAAL.fit_predict_score)
DEPRECATED. Fit the detector, predict on samples, and evaluate the model by predefined metrics, e.g., ROC.
Xnumpy array of shape (n_samples, n_features)The input samples.
yIgnoredNot used, present for API consistency by convention.
scoringstr, optional (default=’roc_auc_score’)Evaluation metric:
* ‘roc_auc_score’: ROC score
* ‘prc_n_score’: Precision @ rank n score
score : float
Deprecated since version 0.6.9: fit_predict_score will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency. Scoring could be done by calling an evaluation method, e.g., AUC ROC.

get_params(*deep=True*)[#](#pyod.models.mo_gaal.MO_GAAL.get_params)
Get parameters for this estimator. See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information.

###### Parameters[#](#id523)
deepbool, optional (default=True)If True, will return the parameters for this estimator and contained subobjects that are estimators.

###### Returns[#](#id524)
paramsmapping of string to anyParameter names mapped to their values.

predict(*X*, *return_confidence=False*)[#](#pyod.models.mo_gaal.MO_GAAL.predict)
Predict if a particular sample is an outlier or not.

###### Parameters[#](#id525)
Xnumpy array of shape (n_samples, n_features)The input samples.
return_confidenceboolean, optional(default=False)If True, also return the confidence of prediction. ###### Returns[#](#id526) outlier_labelsnumpy array of shape (n_samples,)For each observation, tells whether it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers. confidencenumpy array of shape (n_samples,).Only if return_confidence is set to True. predict_confidence(*X*)[#](#pyod.models.mo_gaal.MO_GAAL.predict_confidence) Predict the model’s confidence in making the same prediction under slightly different training sets. See [[BPVD20](#id839)]. ###### Parameters[#](#id528) Xnumpy array of shape (n_samples, n_features)The input samples. ###### Returns[#](#id529) confidencenumpy array of shape (n_samples,) For each observation, tells how consistently the model would make the same prediction if the training set was perturbed. Return a probability, ranging in [0,1]. predict_proba(*X*, *method='linear'*, *return_confidence=False*)[#](#pyod.models.mo_gaal.MO_GAAL.predict_proba) Predict the probability of a sample being outlier. Two approaches are possible: 1. simply use Min-max conversion to linearly transform the outlier scores into the range of [0,1]. The model must be fitted first. 2. use unifying scores, see [[BKKSZ11](#id800)]. ###### Parameters[#](#id531) Xnumpy array of shape (n_samples, n_features)The input samples. methodstr, optional (default=’linear’)probability conversion method. It must be one of ‘linear’ or ‘unify’. return_confidenceboolean, optional(default=False)If True, also return the confidence of prediction. ###### Returns[#](#id532) outlier_probabilitynumpy array of shape (n_samples, n_classes)For each observation, tells whether or not it should be considered as an outlier according to the fitted model. Return the outlier probability, ranging in [0,1]. Note it depends on the number of classes, which is by default 2 classes ([proba of normal, proba of outliers]). set_params(***params*)[#](#pyod.models.mo_gaal.MO_GAAL.set_params) Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object. See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information. ###### Returns[#](#id533) self : object #### pyod.models.ocsvm module[#](#module-pyod.models.ocsvm) One-class SVM detector. Implemented on scikit-learn library. *class* pyod.models.ocsvm.OCSVM(*kernel='rbf'*, *degree=3*, *gamma='auto'*, *coef0=0.0*, *tol=0.001*, *nu=0.5*, *shrinking=True*, *cache_size=200*, *verbose=False*, *max_iter=-1*, *contamination=0.1*)[[source]](_modules/pyod/models/ocsvm.html#OCSVM)[#](#pyod.models.ocsvm.OCSVM) Bases: [`BaseDetector`](index.html#pyod.models.base.BaseDetector) Wrapper of scikit-learn one-class SVM Class with more functionalities. Unsupervised Outlier Detection. Estimate the support of a high-dimensional distribution. The implementation is based on libsvm. See <http://scikit-learn.org/stable/modules/svm.html#svm-outlier-detection> and [[BScholkopfPST+01](#id822)]. ##### Parameters[#](#id535) kernelstring, optional (default=’rbf’)Specifies the kernel type to be used in the algorithm. It must be one of ‘linear’, ‘poly’, ‘rbf’, ‘sigmoid’, ‘precomputed’ or a callable. If none is given, ‘rbf’ will be used. If a callable is given it is used to precompute the kernel matrix. 
nufloat, optionalAn upper bound on the fraction of training errors and a lower bound of the fraction of support vectors. Should be in the interval (0, 1]. By default 0.5 will be taken. degreeint, optional (default=3)Degree of the polynomial kernel function (‘poly’). Ignored by all other kernels. gammafloat, optional (default=’auto’)Kernel coefficient for ‘rbf’, ‘poly’ and ‘sigmoid’. If gamma is ‘auto’ then 1/n_features will be used instead. coef0float, optional (default=0.0)Independent term in kernel function. It is only significant in ‘poly’ and ‘sigmoid’. tolfloat, optionalTolerance for stopping criterion. shrinkingbool, optionalWhether to use the shrinking heuristic. cache_sizefloat, optionalSpecify the size of the kernel cache (in MB). verbosebool, default: FalseEnable verbose output. Note that this setting takes advantage of a per-process runtime setting in libsvm that, if enabled, may not work properly in a multithreaded context. max_iterint, optional (default=-1)Hard limit on iterations within solver, or -1 for no limit. contaminationfloat in (0., 0.5), optional (default=0.1)The amount of contamination of the data set, i.e. the proportion of outliers in the data set. Used when fitting to define the threshold on the decision function. ##### Attributes[#](#id536) [support_](#id1100)array-like, shape = [n_SV]Indices of support vectors. [support_vectors_](#id1102)array-like, shape = [nSV, n_features]Support vectors. [dual_coef_](#id1104)array, shape = [1, n_SV]Coefficients of the support vectors in the decision function. [coef_](#id1106)array, shape = [1, n_features]Weights assigned to the features (coefficients in the primal problem). This is only available in the case of a linear kernel. coef_ is readonly property derived from dual_coef_ and support_vectors_ [intercept_](#id1108)array, shape = [1,]Constant in the decision function. [decision_scores_](#id1110)numpy array of shape (n_samples,)The outlier scores of the training data. The higher, the more abnormal. Outliers tend to have higher scores. This value is available once the detector is fitted. [threshold_](#id1112)floatThe threshold is based on `contamination`. It is the `n_samples * contamination` most abnormal samples in `decision_scores_`. The threshold is calculated for generating binary outlier labels. [labels_](#id1114)int, either 0 or 1The binary labels of the training data. 0 stands for inliers and 1 for outliers/anomalies. It is generated by applying `threshold_` on `decision_scores_`. decision_function(*X*)[[source]](_modules/pyod/models/ocsvm.html#OCSVM.decision_function)[#](#pyod.models.ocsvm.OCSVM.decision_function) Predict raw anomaly score of X using the fitted detector. The anomaly score of an input sample is computed based on different detector algorithms. For consistency, outliers are assigned with larger anomaly scores. ###### Parameters[#](#id537) Xnumpy array of shape (n_samples, n_features)The training input samples. Sparse matrices are accepted only if they are supported by the base estimator. ###### Returns[#](#id538) anomaly_scoresnumpy array of shape (n_samples,)The anomaly score of the input samples. fit(*X*, *y=None*, *sample_weight=None*, ***params*)[[source]](_modules/pyod/models/ocsvm.html#OCSVM.fit)[#](#pyod.models.ocsvm.OCSVM.fit) Fit detector. y is ignored in unsupervised methods. ###### Parameters[#](#id539) Xnumpy array of shape (n_samples, n_features)The input samples. yIgnoredNot used, present for API consistency by convention. sample_weightarray-like, shape (n_samples,)Per-sample weights. 
Rescale C per sample. Higher weights force the classifier to put more emphasis on these points.

###### Returns[#](#id540)
selfobjectFitted estimator.

fit_predict(*X*, *y=None*)[#](#pyod.models.ocsvm.OCSVM.fit_predict)
DEPRECATED. Fit detector first and then predict whether a particular sample is an outlier or not. y is ignored in unsupervised models.
Xnumpy array of shape (n_samples, n_features)The input samples.
yIgnoredNot used, present for API consistency by convention.
outlier_labelsnumpy array of shape (n_samples,)For each observation, tells whether it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers.
Deprecated since version 0.6.9: fit_predict will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency.

fit_predict_score(*X*, *y*, *scoring='roc_auc_score'*)[#](#pyod.models.ocsvm.OCSVM.fit_predict_score)
DEPRECATED. Fit the detector, predict on samples, and evaluate the model by predefined metrics, e.g., ROC.
Xnumpy array of shape (n_samples, n_features)The input samples.
yIgnoredNot used, present for API consistency by convention.
scoringstr, optional (default=’roc_auc_score’)Evaluation metric:
* ‘roc_auc_score’: ROC score
* ‘prc_n_score’: Precision @ rank n score
score : float
Deprecated since version 0.6.9: fit_predict_score will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency. Scoring could be done by calling an evaluation method, e.g., AUC ROC.

get_params(*deep=True*)[#](#pyod.models.ocsvm.OCSVM.get_params)
Get parameters for this estimator. See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information.

###### Parameters[#](#id541)
deepbool, optional (default=True)If True, will return the parameters for this estimator and contained subobjects that are estimators.

###### Returns[#](#id542)
paramsmapping of string to anyParameter names mapped to their values.

predict(*X*, *return_confidence=False*)[#](#pyod.models.ocsvm.OCSVM.predict)
Predict if a particular sample is an outlier or not.

###### Parameters[#](#id543)
Xnumpy array of shape (n_samples, n_features)The input samples.
return_confidenceboolean, optional(default=False)If True, also return the confidence of prediction.

###### Returns[#](#id544)
outlier_labelsnumpy array of shape (n_samples,)For each observation, tells whether it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers.
confidencenumpy array of shape (n_samples,).Only if return_confidence is set to True.

predict_confidence(*X*)[#](#pyod.models.ocsvm.OCSVM.predict_confidence)
Predict the model’s confidence in making the same prediction under slightly different training sets. See [[BPVD20](#id839)].

###### Parameters[#](#id546)
Xnumpy array of shape (n_samples, n_features)The input samples.

###### Returns[#](#id547)
confidencenumpy array of shape (n_samples,)For each observation, tells how consistently the model would make the same prediction if the training set was perturbed. Return a probability, ranging in [0,1].

predict_proba(*X*, *method='linear'*, *return_confidence=False*)[#](#pyod.models.ocsvm.OCSVM.predict_proba)
Predict the probability of a sample being outlier. Two approaches are possible:
1. simply use Min-max conversion to linearly transform the outlier scores into the range of [0,1]. The model must be fitted first.
2.
use unifying scores, see [[BKKSZ11](#id800)]. ###### Parameters[#](#id549) Xnumpy array of shape (n_samples, n_features)The input samples. methodstr, optional (default=’linear’)probability conversion method. It must be one of ‘linear’ or ‘unify’. return_confidenceboolean, optional(default=False)If True, also return the confidence of prediction. ###### Returns[#](#id550) outlier_probabilitynumpy array of shape (n_samples, n_classes)For each observation, tells whether or not it should be considered as an outlier according to the fitted model. Return the outlier probability, ranging in [0,1]. Note it depends on the number of classes, which is by default 2 classes ([proba of normal, proba of outliers]). set_params(***params*)[#](#pyod.models.ocsvm.OCSVM.set_params) Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object. See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information. ###### Returns[#](#id551) self : object #### pyod.models.pca module[#](#module-pyod.models.pca) Principal Component Analysis (PCA) Outlier Detector *class* pyod.models.pca.PCA(*n_components=None*, *n_selected_components=None*, *contamination=0.1*, *copy=True*, *whiten=False*, *svd_solver='auto'*, *tol=0.0*, *iterated_power='auto'*, *random_state=None*, *weighted=True*, *standardization=True*)[[source]](_modules/pyod/models/pca.html#PCA)[#](#pyod.models.pca.PCA) Bases: [`BaseDetector`](index.html#pyod.models.base.BaseDetector) Principal component analysis (PCA) can be used in detecting outliers. PCA is a linear dimensionality reduction using Singular Value Decomposition of the data to project it to a lower dimensional space. In this procedure, covariance matrix of the data can be decomposed to orthogonal vectors, called eigenvectors, associated with eigenvalues. The eigenvectors with high eigenvalues capture most of the variance in the data. Therefore, a low dimensional hyperplane constructed by k eigenvectors can capture most of the variance in the data. However, outliers are different from normal data points, which is more obvious on the hyperplane constructed by the eigenvectors with small eigenvalues. Therefore, outlier scores can be obtained as the sum of the projected distance of a sample on all eigenvectors. See [[BAgg15](#id808), [BSCSC03](#id807)] for details. Score(X) = Sum of weighted euclidean distance between each sample to the hyperplane constructed by the selected eigenvectors ##### Parameters[#](#id553) n_componentsint, float, None or stringNumber of components to keep. if n_components is not set all components are kept: ``` n_components == min(n_samples, n_features) ``` if n_components == ‘mle’ and svd_solver == ‘full’, Minka’s MLE is used to guess the dimension if `0 < n_components < 1` and svd_solver == ‘full’, select the number of components such that the amount of variance that needs to be explained is greater than the percentage specified by n_components n_components cannot be equal to n_features for svd_solver == ‘arpack’. n_selected_componentsint, optional (default=None)Number of selected principal components for calculating the outlier scores. It is not necessarily equal to the total number of the principal components. If not set, use all principal components. 
contaminationfloat in (0., 0.5), optional (default=0.1)The amount of contamination of the data set, i.e. the proportion of outliers in the data set. Used when fitting to define the threshold on the decision function.
copybool (default True)If False, data passed to fit are overwritten and running fit(X).transform(X) will not yield the expected results, use fit_transform(X) instead.
whitenbool, optional (default False)When True (False by default) the components_ vectors are multiplied by the square root of n_samples and then divided by the singular values to ensure uncorrelated outputs with unit component-wise variances. Whitening will remove some information from the transformed signal (the relative variance scales of the components) but can sometime improve the predictive accuracy of the downstream estimators by making their data respect some hard-wired assumptions.
svd_solverstring {‘auto’, ‘full’, ‘arpack’, ‘randomized’}
auto : the solver is selected by a default policy based on X.shape and n_components: if the input data is larger than 500x500 and the number of components to extract is lower than 80% of the smallest dimension of the data, then the more efficient ‘randomized’ method is enabled. Otherwise the exact full SVD is computed and optionally truncated afterwards.
full : run exact full SVD calling the standard LAPACK solver via scipy.linalg.svd and select the components by postprocessing
arpack : run SVD truncated to n_components calling ARPACK solver via scipy.sparse.linalg.svds. It requires strictly 0 < n_components < X.shape[1]
randomized : run randomized SVD by the method of Halko et al.
tolfloat >= 0, optional (default .0)Tolerance for singular values computed by svd_solver == ‘arpack’.
iterated_powerint >= 0, or ‘auto’, (default ‘auto’)Number of iterations for the power method computed by svd_solver == ‘randomized’.
random_stateint, RandomState instance or None, optional (default None)If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random. Used when `svd_solver` == ‘arpack’ or ‘randomized’.
weightedbool, optional (default=True)If True, the eigenvalues are used in score computation. The eigenvectors with small eigenvalues comes with more importance in outlier score calculation.
standardizationbool, optional (default=True)If True, perform standardization first to convert data to zero mean and unit variance. See <http://scikit-learn.org/stable/auto_examples/preprocessing/plot_scaling_importance.html>

##### Attributes[#](#id554)
[components_](#id1116)array, shape (n_components, n_features)Principal axes in feature space, representing the directions of maximum variance in the data. The components are sorted by `explained_variance_`.
[explained_variance_](#id1118)array, shape (n_components,)The amount of variance explained by each of the selected components. Equal to n_components largest eigenvalues of the covariance matrix of X.
[explained_variance_ratio_](#id1120)array, shape (n_components,)Percentage of variance explained by each of the selected components. If `n_components` is not set then all components are stored and the sum of explained variances is equal to 1.0.
[singular_values_](#id1122)array, shape (n_components,)The singular values corresponding to each of the selected components. The singular values are equal to the 2-norms of the `n_components` variables in the lower-dimensional space.
[mean_](#id1124)array, shape (n_features,)Per-feature empirical mean, estimated from the training set. Equal to X.mean(axis=0).
[n_components_](#id1126)intThe estimated number of components. When n_components is set to ‘mle’ or a number between 0 and 1 (with svd_solver == ‘full’) this number is estimated from input data. Otherwise it equals the parameter n_components, or n_features if n_components is None.
[noise_variance_](#id1128)floatThe estimated noise covariance following the Probabilistic PCA model from Tipping and Bishop 1999. See “Pattern Recognition and Machine Learning” by <NAME>, 12.2.1 p. 574 or <http://www.miketipping.com/papers/met-mppca.pdf>. It is required to compute the estimated data covariance and score samples. Equal to the average of (min(n_features, n_samples) - n_components) smallest eigenvalues of the covariance matrix of X.
[decision_scores_](#id1130)numpy array of shape (n_samples,)The outlier scores of the training data. The higher, the more abnormal. Outliers tend to have higher scores. This value is available once the detector is fitted.
[threshold_](#id1132)floatThe threshold is based on `contamination`. It is the `n_samples * contamination` most abnormal samples in `decision_scores_`. The threshold is calculated for generating binary outlier labels.
[labels_](#id1134)int, either 0 or 1The binary labels of the training data. 0 stands for inliers and 1 for outliers/anomalies. It is generated by applying `threshold_` on `decision_scores_`.
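A minimal usage sketch on synthetic data with a few features from `pyod.utils.data.generate_data`; the parameter choices are illustrative, not recommendations:
```
>>> from pyod.utils.data import generate_data
>>> from pyod.models.pca import PCA
>>> X_train, y_train, X_test, y_test = generate_data(
...     n_train=200, n_test=100, n_features=5,
...     contamination=0.1, random_state=42)
>>> clf = PCA(n_components=3, standardization=True)
>>> clf = clf.fit(X_train)
>>> test_scores = clf.decision_function(X_test)  # higher = more abnormal
>>> test_labels = clf.predict(X_test)            # 0 = inlier, 1 = outlier
```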
decision_function(*X*)[[source]](_modules/pyod/models/pca.html#PCA.decision_function)[#](#pyod.models.pca.PCA.decision_function)
Predict raw anomaly score of X using the fitted detector. The anomaly score of an input sample is computed based on different detector algorithms. For consistency, outliers are assigned with larger anomaly scores.

###### Parameters[#](#id555)
Xnumpy array of shape (n_samples, n_features)The training input samples. Sparse matrices are accepted only if they are supported by the base estimator.

###### Returns[#](#id556)
anomaly_scoresnumpy array of shape (n_samples,)The anomaly score of the input samples.

*property* explained_variance_[#](#pyod.models.pca.PCA.explained_variance_)
The amount of variance explained by each of the selected components. Equal to n_components largest eigenvalues of the covariance matrix of X. Decorator for scikit-learn PCA attributes.

fit(*X*, *y=None*)[[source]](_modules/pyod/models/pca.html#PCA.fit)[#](#pyod.models.pca.PCA.fit)
Fit detector. y is ignored in unsupervised methods.

###### Parameters[#](#id557)
Xnumpy array of shape (n_samples, n_features)The input samples.
yIgnoredNot used, present for API consistency by convention.

###### Returns[#](#id558)
selfobjectFitted estimator.

fit_predict(*X*, *y=None*)[#](#pyod.models.pca.PCA.fit_predict)
DEPRECATED. Fit detector first and then predict whether a particular sample is an outlier or not. y is ignored in unsupervised models.
Xnumpy array of shape (n_samples, n_features)The input samples.
yIgnoredNot used, present for API consistency by convention.
outlier_labelsnumpy array of shape (n_samples,)For each observation, tells whether it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers.
Deprecated since version 0.6.9: fit_predict will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency.

fit_predict_score(*X*, *y*, *scoring='roc_auc_score'*)[#](#pyod.models.pca.PCA.fit_predict_score)
DEPRECATED. Fit the detector, predict on samples, and evaluate the model by predefined metrics, e.g., ROC.
Xnumpy array of shape (n_samples, n_features)The input samples.
yIgnoredNot used, present for API consistency by convention.
scoringstr, optional (default=’roc_auc_score’)Evaluation metric:
* ‘roc_auc_score’: ROC score
* ‘prc_n_score’: Precision @ rank n score
score : float
Deprecated since version 0.6.9: fit_predict_score will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency. Scoring could be done by calling an evaluation method, e.g., AUC ROC.

get_params(*deep=True*)[#](#pyod.models.pca.PCA.get_params)
Get parameters for this estimator. See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information.

###### Parameters[#](#id559)
deepbool, optional (default=True)If True, will return the parameters for this estimator and contained subobjects that are estimators.

###### Returns[#](#id560)
paramsmapping of string to anyParameter names mapped to their values.

*property* noise_variance_[#](#pyod.models.pca.PCA.noise_variance_)
The estimated noise covariance following the Probabilistic PCA model from Tipping and Bishop 1999. See “Pattern Recognition and Machine Learning” by <NAME>, 12.2.1 p. 574 or <http://www.miketipping.com/papers/met-mppca.pdf>. It is required to compute the estimated data covariance and score samples. Equal to the average of (min(n_features, n_samples) - n_components) smallest eigenvalues of the covariance matrix of X. Decorator for scikit-learn PCA attributes.

predict(*X*, *return_confidence=False*)[#](#pyod.models.pca.PCA.predict)
Predict if a particular sample is an outlier or not.

###### Parameters[#](#id561)
Xnumpy array of shape (n_samples, n_features)The input samples.
return_confidenceboolean, optional(default=False)If True, also return the confidence of prediction.

###### Returns[#](#id562)
outlier_labelsnumpy array of shape (n_samples,)For each observation, tells whether it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers.
confidencenumpy array of shape (n_samples,).Only if return_confidence is set to True.

predict_confidence(*X*)[#](#pyod.models.pca.PCA.predict_confidence)
Predict the model’s confidence in making the same prediction under slightly different training sets. See [[BPVD20](#id839)].

###### Parameters[#](#id564)
Xnumpy array of shape (n_samples, n_features)The input samples.

###### Returns[#](#id565)
confidencenumpy array of shape (n_samples,)For each observation, tells how consistently the model would make the same prediction if the training set was perturbed. Return a probability, ranging in [0,1].

predict_proba(*X*, *method='linear'*, *return_confidence=False*)[#](#pyod.models.pca.PCA.predict_proba)
Predict the probability of a sample being outlier. Two approaches are possible:
1. simply use Min-max conversion to linearly transform the outlier scores into the range of [0,1]. The model must be fitted first.
2. use unifying scores, see [[BKKSZ11](#id800)].

###### Parameters[#](#id567)
Xnumpy array of shape (n_samples, n_features)The input samples.
methodstr, optional (default=’linear’)probability conversion method. It must be one of ‘linear’ or ‘unify’.
return_confidenceboolean, optional(default=False)If True, also return the confidence of prediction.
#### pyod.models.qmcd module[#](#module-pyod.models.qmcd) Quasi-Monte Carlo Discrepancy outlier detection (QMCD) *class* pyod.models.qmcd.QMCD(*contamination=0.1*)[[source]](_modules/pyod/models/qmcd.html#QMCD)[#](#pyod.models.qmcd.QMCD) Bases: [`BaseDetector`](index.html#pyod.models.base.BaseDetector) The Wrap-around Quasi-Monte Carlo discrepancy is a uniformity criterion which is used to assess the space filling of a number of samples in a hypercube. It quantifies the distance between the continuous uniform distribution on a hypercube and the discrete uniform distribution on distinct sample points. Therefore, lower discrepancy values for a sample point indicate that it provides better coverage of the parameter space with regard to the rest of the samples. This method is kernel based: the higher a sample’s discrepancy score relative to the rest of the samples, the higher the likelihood of it being an outlier. Read more in [[BFM01](#id851)]. ##### Parameters[#](#id571) contaminationfloat in (0., 0.5), optional (default=0.1)The amount of contamination of the data set, i.e. the proportion of outliers in the data set. Used when fitting to define the threshold on the decision function. ##### Attributes[#](#id572) [decision_scores_](#id1136)numpy array of shape (n_samples,)The outlier scores of the training data. The higher, the more abnormal. Outliers tend to have higher scores. This value is available once the detector is fitted. [threshold_](#id1138)floatThe modified z-score to use as a threshold. Observations with a modified z-score (based on the median absolute deviation) greater than this value will be classified as outliers. [labels_](#id1140)int, either 0 or 1The binary labels of the training data. 0 stands for inliers and 1 for outliers/anomalies. It is generated by applying `threshold_` on `decision_scores_`. decision_function(*X*)[[source]](_modules/pyod/models/qmcd.html#QMCD.decision_function)[#](#pyod.models.qmcd.QMCD.decision_function) Predict raw anomaly score of X using the fitted detector. The anomaly score of an input sample is computed based on different detector algorithms. For consistency, outliers are assigned with larger anomaly scores. ###### Parameters[#](#id573) Xnumpy array of shape (n_samples, n_features)The independent and dependent/target samples, with the target samples being the last column of the numpy array such that, e.g., X = np.append(x, y.reshape(-1,1), axis=1). Sparse matrices are accepted only if they are supported by the base estimator. ###### Returns[#](#id574) anomaly_scoresnumpy array of shape (n_samples,)The anomaly score of the input samples. fit(*X*, *y=None*)[[source]](_modules/pyod/models/qmcd.html#QMCD.fit)[#](#pyod.models.qmcd.QMCD.fit) Fit detector. y is ignored in unsupervised methods. ###### Parameters[#](#id575) Xnumpy array of shape (n_samples, n_features)The input samples. yIgnoredNot used, present for API consistency by convention. ###### Returns selfobjectFitted estimator.
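A minimal sketch of the fit workflow just described (illustrative; it assumes the synthetic-data helper `pyod.utils.data.generate_data`):

```
# Illustrative sketch: fit QMCD and read the fitted attributes.
from pyod.models.qmcd import QMCD
from pyod.utils.data import generate_data

X_train, y_train, X_test, y_test = generate_data(
    n_train=100, n_test=50, contamination=0.1, random_state=42)

clf = QMCD(contamination=0.1)
clf.fit(X_train)               # y is ignored in unsupervised models

scores = clf.decision_scores_  # discrepancy-based scores (higher = more abnormal)
labels = clf.labels_           # binary labels derived from threshold_
```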
fit_predict(*X*, *y=None*)[#](#pyod.models.qmcd.QMCD.fit_predict) DEPRECATED Fit detector first and then predict whether a particular sample is an outlier or not. y is ignored in unsupervised models. Xnumpy array of shape (n_samples, n_features)The input samples. yIgnoredNot used, present for API consistency by convention. outlier_labelsnumpy array of shape (n_samples,)For each observation, tells whether it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers. Deprecated since version 0.6.9: fit_predict will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency. fit_predict_score(*X*, *y*, *scoring='roc_auc_score'*)[#](#pyod.models.qmcd.QMCD.fit_predict_score) DEPRECATED Fit the detector, predict on samples, and evaluate the model by predefined metrics, e.g., ROC. Xnumpy array of shape (n_samples, n_features)The input samples. yIgnoredNot used, present for API consistency by convention. scoringstr, optional (default=’roc_auc_score’)Evaluation metric: * ‘roc_auc_score’: ROC score * ‘prc_n_score’: Precision @ rank n score score : float Deprecated since version 0.6.9: fit_predict_score will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency. Scoring can be done by calling an evaluation method, e.g., AUC ROC. get_params(*deep=True*)[#](#pyod.models.qmcd.QMCD.get_params) Get parameters for this estimator. See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information. ###### Parameters[#](#id576) deepbool, optional (default=True)If True, will return the parameters for this estimator and contained subobjects that are estimators. ###### Returns[#](#id577) paramsmapping of string to anyParameter names mapped to their values. predict(*X*, *return_confidence=False*)[#](#pyod.models.qmcd.QMCD.predict) Predict if a particular sample is an outlier or not. ###### Parameters[#](#id578) Xnumpy array of shape (n_samples, n_features)The input samples. return_confidenceboolean, optional (default=False)If True, also return the confidence of prediction. ###### Returns[#](#id579) outlier_labelsnumpy array of shape (n_samples,)For each observation, tells whether it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers. confidencenumpy array of shape (n_samples,).Only if return_confidence is set to True. predict_confidence(*X*)[#](#pyod.models.qmcd.QMCD.predict_confidence) Predict the model’s confidence in making the same prediction under slightly different training sets. See [[BPVD20](#id839)]. ###### Parameters[#](#id581) Xnumpy array of shape (n_samples, n_features)The input samples. ###### Returns[#](#id582) confidencenumpy array of shape (n_samples,) For each observation, tells how consistently the model would make the same prediction if the training set was perturbed. Return a probability, ranging in [0,1]. predict_proba(*X*, *method='linear'*, *return_confidence=False*)[#](#pyod.models.qmcd.QMCD.predict_proba) Predict the probability of a sample being outlier. Two approaches are possible: 1. simply use Min-max conversion to linearly transform the outlier scores into the range of [0,1]. The model must be fitted first. 2. use unifying scores, see [[BKKSZ11](#id800)]. ###### Parameters[#](#id584) Xnumpy array of shape (n_samples, n_features)The input samples.
methodstr, optional (default=’linear’)probability conversion method. It must be one of ‘linear’ or ‘unify’. return_confidenceboolean, optional (default=False)If True, also return the confidence of prediction. ###### Returns[#](#id585) outlier_probabilitynumpy array of shape (n_samples, n_classes)For each observation, tells whether or not it should be considered as an outlier according to the fitted model. Return the outlier probability, ranging in [0,1]. Note it depends on the number of classes, which is by default 2 classes ([proba of normal, proba of outliers]). set_params(***params*)[#](#pyod.models.qmcd.QMCD.set_params) Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object. See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information. ###### Returns[#](#id586) self : object #### pyod.models.rgraph module[#](#module-pyod.models.rgraph) R-graph *class* pyod.models.rgraph.RGraph(*transition_steps=10*, *n_nonzero=10*, *gamma=50.0*, *gamma_nz=True*, *algorithm='lasso_lars'*, *tau=1.0*, *maxiter_lasso=1000*, *preprocessing=True*, *contamination=0.1*, *blocksize_test_data=10*, *support_init='L2'*, *maxiter=40*, *support_size=100*, *active_support=True*, *fit_intercept_LR=False*, *verbose=True*)[[source]](_modules/pyod/models/rgraph.html#RGraph)[#](#pyod.models.rgraph.RGraph) Bases: [`BaseDetector`](index.html#pyod.models.base.BaseDetector) Outlier Detection via R-graph. Paper: <https://openaccess.thecvf.com/content_cvpr_2017/papers/You_Provable_Self-Representation_Based_CVPR_2017_paper.pdf> See [[BYRV17](#id848)] for details. ##### Parameters[#](#id588) transition_stepsint, optional (default=20)Number of transition steps that are taken in the graph, after which the outlier scores are determined. gamma : float gamma_nzboolean, default Truegamma and gamma_nz together determine the parameter alpha. When `gamma_nz = False`, alpha = gamma. When `gamma_nz = True`, then alpha = gamma * alpha0, where alpha0 is the largest number such that the solution to the optimization problem with alpha = alpha0 is the zero vector (see Proposition 1 in [1]). Therefore, when `gamma_nz = True`, gamma should be a value greater than 1.0. A good choice is typically in the range [5, 500]. taufloat, default 1.0Parameter for the elastic net penalty term. When tau = 1.0, the method reduces to sparse subspace clustering with basis pursuit (SSC-BP) [2]. When tau = 0.0, the method reduces to least squares regression (LSR). algorithmstring, default `lasso_lars`Algorithm for computing the representation. Either lasso_lars or lasso_cd. Note: `lasso_lars` and `lasso_cd` only support tau = 1. For cases tau << 1 linear regression is used. fit_intercept_LR: bool, optional (default=False)For `gamma` > 10000 linear regression is used instead of `lasso_lars` or `lasso_cd`. This parameter determines whether the intercept for the model is calculated. maxiter_lassoint, default 1000The maximum number of iterations for `lasso_lars` and `lasso_cd`. n_nonzeroint, default 50This is an upper bound on the number of nonzero entries of each representation vector. If there are more than n_nonzero nonzero entries, only the top n_nonzero entries with the largest absolute values are kept.
active_support: boolean, default TrueSet to True to use the active support algorithm in [1] for solving the optimization problem. This should significantly reduce the running time when n_samples is large. active_support_params: dictionary of string to any, optionalParameters (keyword arguments) and values for the active support algorithm. It may be used to set the parameters `support_init`, `support_size` and `maxiter`, see `active_support_elastic_net` for details. Example: active_support_params={‘support_size’:50, ‘maxiter’:100} Ignored when `active_support=False` preprocessingbool, optional (default=True)If True, apply standardization on the data. verboseint, optional (default=1)Verbosity mode. * 0 = silent * 1 = progress bar * 2 = one line per epoch. For verbose >= 1, model summary may be printed. random_stateint, RandomState instance or None, optional (default=None)If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random. contaminationfloat in (0., 0.5), optional (default=0.1)The amount of contamination of the data set, i.e. the proportion of outliers in the data set. When fitting this is used to define the threshold on the decision function. blocksize_test_data: int, optional (default=10)The test set is split into blocks of size `blocksize_test_data` to at least partially separate the test and training sets. ##### Attributes[#](#id589) [transition_matrix_](#id1142)numpy array of shape (n_samples,)Transition matrix from the last fitted data; this might include training + test data. [decision_scores_](#id1144)numpy array of shape (n_samples,)The outlier scores of the training data. The higher, the more abnormal. Outliers tend to have higher scores. This value is available once the detector is fitted. [threshold_](#id1146)floatThe threshold is based on `contamination`. It is the `n_samples * contamination` most abnormal samples in `decision_scores_`. The threshold is calculated for generating binary outlier labels. [labels_](#id1148)int, either 0 or 1The binary labels of the training data. 0 stands for inliers and 1 for outliers/anomalies. It is generated by applying `threshold_` on `decision_scores_`. active_support_elastic_net(*X*, *y*, *alpha*, *tau=1.0*, *algorithm='lasso_lars'*, *support_init='L2'*, *support_size=100*, *maxiter=40*, *maxiter_lasso=1000*)[[source]](_modules/pyod/models/rgraph.html#RGraph.active_support_elastic_net)[#](#pyod.models.rgraph.RGraph.active_support_elastic_net) Source: <https://github.com/ChongYou/subspace-clustering/blob/master/cluster/selfrepresentation.py>. An active-support-based algorithm for solving the elastic net optimization problem min_{c} tau ||c||_1 + (1-tau)/2 ||c||_2^2 + alpha / 2 ||y - c X ||_2^2. ###### Parameters[#](#id590) X : array-like, shape (n_samples, n_features) y : array-like, shape (1, n_features) alpha : float tau : float, default 1.0 algorithmstring, default `spams`Algorithm for solving the subproblems. Either lasso_lars or lasso_cd or spams (installation of the spams package is required). Note: `lasso_lars` and `lasso_cd` only support tau = 1. support_init: string, default `knn`This determines how the active support is initialized. It can be either `knn` or `L2`. support_size: int, default 100This determines the size of the working set. A small support_size decreases the runtime per iteration while increasing the number of iterations.
maxiter: int, default 40Termination condition for the active support update. ###### Returns[#](#id591) cshape n_samplesThe optimal solution to the optimization problem. decision_function(*X*)[[source]](_modules/pyod/models/rgraph.html#RGraph.decision_function)[#](#pyod.models.rgraph.RGraph.decision_function) Predict raw anomaly score of X using the fitted detector. The anomaly score of an input sample is computed based on different detector algorithms. For consistency, outliers are assigned with larger anomaly scores. ###### Parameters[#](#id592) Xnumpy array of shape (n_samples, n_features)The training input samples. Sparse matrices are accepted only if they are supported by the base estimator. ###### Returns[#](#id593) anomaly_scoresnumpy array of shape (n_samples,)The anomaly score of the input samples. elastic_net_subspace_clustering(*X*, *gamma=50.0*, *gamma_nz=True*, *tau=1.0*, *algorithm='lasso_lars'*, *fit_intercept_LR=False*, *active_support=True*, *active_support_params=None*, *n_nonzero=50*, *maxiter_lasso=1000*)[[source]](_modules/pyod/models/rgraph.html#RGraph.elastic_net_subspace_clustering)[#](#pyod.models.rgraph.RGraph.elastic_net_subspace_clustering) Source: <https://github.com/ChongYou/subspace-clustering/blob/master/cluster/selfrepresentation.py>. Elastic net subspace clustering (EnSC) [1]. Compute the self-representation matrix C by solving the following optimization problem: min_{c_j} tau ||c_j||_1 + (1-tau)/2 ||c_j||_2^2 + alpha / 2 ||x_j - c_j X ||_2^2 s.t. c_jj = 0, where c_j and x_j are the j-th rows of C and X, respectively. The parameter `algorithm` specifies the algorithm for solving the optimization problem. `lasso_lars` and `lasso_cd` are algorithms implemented in sklearn; `spams` refers to the same algorithm as `lasso_lars` but is implemented in the spams package available at <http://spams-devel.gforge.inria.fr/> (installation required). In principle, all three algorithms give the same result. For large scale data (e.g. with > 5000 data points), use any of these algorithms in conjunction with `active_support=True`. It adopts an efficient active support strategy that solves the optimization problem by breaking it into a sequence of small scale optimization problems as described in [1]. If tau = 1.0, the method reduces to sparse subspace clustering with basis pursuit (SSC-BP) [2]. If tau = 0.0, the method reduces to least squares regression (LSR) [3]. Note: `lasso_lars` and `lasso_cd` only support tau = 1. ###### Parameters Xarray-like, shape (n_samples, n_features)Input data to be clustered. gamma : float gamma_nzboolean, default Truegamma and gamma_nz together determine the parameter alpha. When `gamma_nz = False`, alpha = gamma. When `gamma_nz = True`, then alpha = gamma * alpha0, where alpha0 is the largest number such that the solution to the optimization problem with alpha = alpha0 is the zero vector (see Proposition 1 in [1]). Therefore, when `gamma_nz = True`, gamma should be a value greater than 1.0. A good choice is typically in the range [5, 500]. taufloat, default 1.0Parameter for the elastic net penalty term. When tau = 1.0, the method reduces to sparse subspace clustering with basis pursuit (SSC-BP) [2]. When tau = 0.0, the method reduces to least squares regression (LSR) [3]. algorithmstring, default `lasso_lars`Algorithm for computing the representation. Either lasso_lars or lasso_cd or spams (installation of the spams package is required). Note: `lasso_lars` and `lasso_cd` only support tau = 1.
n_nonzeroint, default 50This is an upper bound on the number of nonzero entries of each representation vector. If there are more than n_nonzero nonzero entries, only the top n_nonzero entries with the largest absolute values are kept. active_support: boolean, default TrueSet to True to use the active support algorithm in [1] for solving the optimization problem. This should significantly reduce the running time when n_samples is large. active_support_params: dictionary of string to any, optionalParameters (keyword arguments) and values for the active support algorithm. It may be used to set the parameters `support_init`, `support_size` and `maxiter`, see `active_support_elastic_net` for details. Example: active_support_params={‘support_size’:50, ‘maxiter’:100} Ignored when `active_support=False` ###### Returns[#](#id594) [representation_matrix_](#id1150)csr matrix, shape: n_samples by n_samplesThe self-representation matrix. ###### References[#](#id595) [1] <NAME>, <NAME>, <NAME>, <NAME>, Oracle Based Active Set Algorithm for Scalable Elastic Net Subspace Clustering, CVPR 2016 [2] <NAME>, <NAME>, Sparse Subspace Clustering: Algorithm, Theory, and Applications, TPAMI 2013 [3] <NAME>, et al., Robust and efficient subspace segmentation via least squares regression, ECCV 2012 fit(*X*, *y=None*)[[source]](_modules/pyod/models/rgraph.html#RGraph.fit)[#](#pyod.models.rgraph.RGraph.fit) Fit detector. y is ignored in unsupervised methods. ###### Parameters Xnumpy array of shape (n_samples, n_features)The input samples. yIgnoredNot used, present for API consistency by convention. ###### Returns[#](#id596) selfobjectFitted estimator. fit_predict(*X*, *y=None*)[#](#pyod.models.rgraph.RGraph.fit_predict) DEPRECATED Fit detector first and then predict whether a particular sample is an outlier or not. y is ignored in unsupervised models. Xnumpy array of shape (n_samples, n_features)The input samples. yIgnoredNot used, present for API consistency by convention. outlier_labelsnumpy array of shape (n_samples,)For each observation, tells whether it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers. Deprecated since version 0.6.9: fit_predict will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency. fit_predict_score(*X*, *y*, *scoring='roc_auc_score'*)[#](#pyod.models.rgraph.RGraph.fit_predict_score) DEPRECATED Fit the detector, predict on samples, and evaluate the model by predefined metrics, e.g., ROC. Xnumpy array of shape (n_samples, n_features)The input samples. yIgnoredNot used, present for API consistency by convention. scoringstr, optional (default=’roc_auc_score’)Evaluation metric: * ‘roc_auc_score’: ROC score * ‘prc_n_score’: Precision @ rank n score score : float Deprecated since version 0.6.9: fit_predict_score will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency. Scoring can be done by calling an evaluation method, e.g., AUC ROC. get_params(*deep=True*)[#](#pyod.models.rgraph.RGraph.get_params) Get parameters for this estimator. See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information. ###### Parameters[#](#id597) deepbool, optional (default=True)If True, will return the parameters for this estimator and contained subobjects that are estimators. ###### Returns[#](#id598) paramsmapping of string to anyParameter names mapped to their values.
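A minimal sketch tying the RGraph parameters above together (illustrative values only; note that `lasso_lars` requires tau = 1, and `active_support=True` is recommended for large n_samples):

```
# Illustrative sketch: R-graph with a few explicitly set parameters.
from pyod.models.rgraph import RGraph
from pyod.utils.data import generate_data

X_train, y_train, X_test, y_test = generate_data(
    n_train=200, n_test=50, contamination=0.1, random_state=42)

clf = RGraph(transition_steps=10, n_nonzero=10, gamma=50.0,
             algorithm='lasso_lars', tau=1.0)  # lasso_lars supports tau = 1 only
clf.fit(X_train)
scores = clf.decision_function(X_test)         # higher = more abnormal
```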
predict(*X*, *return_confidence=False*)[#](#pyod.models.rgraph.RGraph.predict) Predict if a particular sample is an outlier or not. ###### Parameters[#](#id599) Xnumpy array of shape (n_samples, n_features)The input samples. return_confidenceboolean, optional (default=False)If True, also return the confidence of prediction. ###### Returns[#](#id600) outlier_labelsnumpy array of shape (n_samples,)For each observation, tells whether it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers. confidencenumpy array of shape (n_samples,).Only if return_confidence is set to True. predict_confidence(*X*)[#](#pyod.models.rgraph.RGraph.predict_confidence) Predict the model’s confidence in making the same prediction under slightly different training sets. See [[BPVD20](#id839)]. ###### Parameters[#](#id602) Xnumpy array of shape (n_samples, n_features)The input samples. ###### Returns[#](#id603) confidencenumpy array of shape (n_samples,) For each observation, tells how consistently the model would make the same prediction if the training set was perturbed. Return a probability, ranging in [0,1]. predict_proba(*X*, *method='linear'*, *return_confidence=False*)[#](#pyod.models.rgraph.RGraph.predict_proba) Predict the probability of a sample being outlier. Two approaches are possible: 1. simply use Min-max conversion to linearly transform the outlier scores into the range of [0,1]. The model must be fitted first. 2. use unifying scores, see [[BKKSZ11](#id800)]. ###### Parameters[#](#id605) Xnumpy array of shape (n_samples, n_features)The input samples. methodstr, optional (default=’linear’)probability conversion method. It must be one of ‘linear’ or ‘unify’. return_confidenceboolean, optional (default=False)If True, also return the confidence of prediction. ###### Returns[#](#id606) outlier_probabilitynumpy array of shape (n_samples, n_classes)For each observation, tells whether or not it should be considered as an outlier according to the fitted model. Return the outlier probability, ranging in [0,1]. Note it depends on the number of classes, which is by default 2 classes ([proba of normal, proba of outliers]). set_params(***params*)[#](#pyod.models.rgraph.RGraph.set_params) Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object. See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information. ###### Returns[#](#id607) self : object #### pyod.models.rod module[#](#module-pyod.models.rod) Rotation-based Outlier Detector (ROD) *class* pyod.models.rod.ROD(*contamination=0.1*, *parallel_execution=False*)[[source]](_modules/pyod/models/rod.html#ROD)[#](#pyod.models.rod.ROD) Bases: [`BaseDetector`](index.html#pyod.models.base.BaseDetector) Rotation-based Outlier Detection (ROD) is a robust and parameter-free algorithm that requires no statistical distribution assumptions and works intuitively in three-dimensional space, where the 3D vectors representing the data points are rotated about the geometric median two times counterclockwise using the Rodrigues rotation formula.
The results of the rotation are parallelepipeds, whose volumes are mathematically analyzed as cost functions and used to calculate the Median Absolute Deviations to obtain the outlying score. For dimensions higher than 3, the overall score is calculated by averaging the scores of the 3D subspaces that result from decomposing the original data space. See [[BABC20](#id835)] for details. ##### Parameters[#](#id609) contaminationfloat in (0., 0.5), optional (default=0.1)The amount of contamination of the data set, i.e. the proportion of outliers in the data set. Used when fitting to define the threshold on the decision function. parallel_execution: bool, optional (default=False).If set to True, the algorithm will run in parallel, for a better execution time. It is recommended to set this parameter to True ONLY for high-dimensional data (> 10 features), and if proper hardware is available. ##### Attributes[#](#id610) [decision_scores_](#id1152)numpy array of shape (n_samples,)The outlier scores of the training data. The higher, the more abnormal. Outliers tend to have higher scores. This value is available once the detector is fitted. [threshold_](#id1154)floatThe threshold is based on `contamination`. It is the `n_samples * contamination` most abnormal samples in `decision_scores_`. The threshold is calculated for generating binary outlier labels. [labels_](#id1156)int, either 0 or 1The binary labels of the training data. 0 stands for inliers and 1 for outliers/anomalies. It is generated by applying `threshold_` on `decision_scores_`. decision_function(*X*)[[source]](_modules/pyod/models/rod.html#ROD.decision_function)[#](#pyod.models.rod.ROD.decision_function) Predict raw anomaly score of X using the fitted detector. The anomaly score of an input sample is computed based on different detector algorithms. For consistency, outliers are assigned with larger anomaly scores. ###### Parameters[#](#id611) Xnumpy array of shape (n_samples, n_features)The training input samples. Sparse matrices are accepted only if they are supported by the base estimator. ###### Returns[#](#id612) anomaly_scoresnumpy array of shape (n_samples,)The anomaly score of the input samples. fit(*X*, *y=None*)[[source]](_modules/pyod/models/rod.html#ROD.fit)[#](#pyod.models.rod.ROD.fit) Fit detector. y is ignored in unsupervised methods. ###### Parameters[#](#id613) Xnumpy array of shape (n_samples, n_features)The input samples. yIgnoredNot used, present for API consistency by convention. ###### Returns[#](#id614) selfobjectFitted estimator. fit_predict(*X*, *y=None*)[#](#pyod.models.rod.ROD.fit_predict) DEPRECATED Fit detector first and then predict whether a particular sample is an outlier or not. y is ignored in unsupervised models. Xnumpy array of shape (n_samples, n_features)The input samples. yIgnoredNot used, present for API consistency by convention. outlier_labelsnumpy array of shape (n_samples,)For each observation, tells whether it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers. Deprecated since version 0.6.9: fit_predict will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency. fit_predict_score(*X*, *y*, *scoring='roc_auc_score'*)[#](#pyod.models.rod.ROD.fit_predict_score) DEPRECATED Fit the detector, predict on samples, and evaluate the model by predefined metrics, e.g., ROC. Xnumpy array of shape (n_samples, n_features)The input samples.
yIgnoredNot used, present for API consistency by convention. scoringstr, optional (default=’roc_auc_score’)Evaluation metric: * ‘roc_auc_score’: ROC score * ‘prc_n_score’: Precision @ rank n score score : float Deprecated since version 0.6.9: fit_predict_score will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency. Scoring can be done by calling an evaluation method, e.g., AUC ROC. get_params(*deep=True*)[#](#pyod.models.rod.ROD.get_params) Get parameters for this estimator. See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information. ###### Parameters[#](#id615) deepbool, optional (default=True)If True, will return the parameters for this estimator and contained subobjects that are estimators. ###### Returns[#](#id616) paramsmapping of string to anyParameter names mapped to their values. predict(*X*, *return_confidence=False*)[#](#pyod.models.rod.ROD.predict) Predict if a particular sample is an outlier or not. ###### Parameters[#](#id617) Xnumpy array of shape (n_samples, n_features)The input samples. return_confidenceboolean, optional (default=False)If True, also return the confidence of prediction. ###### Returns[#](#id618) outlier_labelsnumpy array of shape (n_samples,)For each observation, tells whether it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers. confidencenumpy array of shape (n_samples,).Only if return_confidence is set to True. predict_confidence(*X*)[#](#pyod.models.rod.ROD.predict_confidence) Predict the model’s confidence in making the same prediction under slightly different training sets. See [[BPVD20](#id839)]. ###### Parameters[#](#id620) Xnumpy array of shape (n_samples, n_features)The input samples. ###### Returns[#](#id621) confidencenumpy array of shape (n_samples,) For each observation, tells how consistently the model would make the same prediction if the training set was perturbed. Return a probability, ranging in [0,1]. predict_proba(*X*, *method='linear'*, *return_confidence=False*)[#](#pyod.models.rod.ROD.predict_proba) Predict the probability of a sample being outlier. Two approaches are possible: 1. simply use Min-max conversion to linearly transform the outlier scores into the range of [0,1]. The model must be fitted first. 2. use unifying scores, see [[BKKSZ11](#id800)]. ###### Parameters[#](#id623) Xnumpy array of shape (n_samples, n_features)The input samples. methodstr, optional (default=’linear’)probability conversion method. It must be one of ‘linear’ or ‘unify’. return_confidenceboolean, optional (default=False)If True, also return the confidence of prediction. ###### Returns[#](#id624) outlier_probabilitynumpy array of shape (n_samples, n_classes)For each observation, tells whether or not it should be considered as an outlier according to the fitted model. Return the outlier probability, ranging in [0,1]. Note it depends on the number of classes, which is by default 2 classes ([proba of normal, proba of outliers]). set_params(***params*)[#](#pyod.models.rod.ROD.set_params) Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object. See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information. ###### Returns[#](#id625) self : object
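A minimal end-to-end sketch for the ROD class above (illustrative; ROD works natively in 3D, so three features are generated here, assuming `generate_data` accepts an `n_features` argument as in `pyod.utils.data`):

```
# Illustrative sketch: ROD on 3D data, sequential execution.
from pyod.models.rod import ROD
from pyod.utils.data import generate_data

X_train, y_train, X_test, y_test = generate_data(
    n_train=200, n_test=50, n_features=3, contamination=0.1, random_state=42)

clf = ROD(contamination=0.1, parallel_execution=False)
clf.fit(X_train)
labels = clf.predict(X_test)  # 0 = inlier, 1 = outlier
```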
pyod.models.rod.angle(*v1*, *v2*)[[source]](_modules/pyod/models/rod.html#angle)[#](#pyod.models.rod.angle) Find the angle between two 3D vectors. ##### Parameters[#](#id626) v1 : list, first vector v2 : list, second vector ##### Returns[#](#id627) angle : float, the angle pyod.models.rod.euclidean(*v1*, *v2*, *c=False*)[[source]](_modules/pyod/models/rod.html#euclidean)[#](#pyod.models.rod.euclidean) Find the euclidean distance between two vectors or between a vector and a collection of vectors. ##### Parameters[#](#id628) v1 : list, first 3D vector or collection of vectors v2 : list, second 3D vector c : bool (default=False), if True, v1 is a list of vectors ##### Returns[#](#id629) list of lists of euclidean distances if c==True; otherwise float, the euclidean distance pyod.models.rod.geometric_median(*x*, *eps=1e-05*)[[source]](_modules/pyod/models/rod.html#geometric_median)[#](#pyod.models.rod.geometric_median) Find the multivariate geometric L1-median by applying the Vardi and Zhang algorithm. ##### Parameters[#](#id630) x : array-like, the data points eps: float (default=1e-5), a threshold to indicate when to stop ##### Returns[#](#id631) gm : array, geometric L1-median pyod.models.rod.mad(*costs*, *median=None*)[[source]](_modules/pyod/models/rod.html#mad)[#](#pyod.models.rod.mad) Apply the robust median absolute deviation (MAD) to measure the inconsistency/variability of the rotation costs. ##### Parameters[#](#id632) costs : list of rotation costs median: float (default=None), MAD median ##### Returns[#](#id633) zfloatthe modified z scores pyod.models.rod.process_sub(*subspace*, *gm*, *median*, *scaler1*, *scaler2*)[[source]](_modules/pyod/models/rod.html#process_sub)[#](#pyod.models.rod.process_sub) Apply ROD on a 3D subspace, then process it with a sigmoid so that scores are comparable across subspaces. ##### Parameters[#](#id634) subspace : array-like, 3D subspace of the data gm: list, the geometric median median: float, MAD median scaler1: obj, MinMaxScaler of Angles group 1 scaler2: obj, MinMaxScaler of Angles group 2 ##### Returns[#](#id635) ROD decision scores with sigmoid applied, gm, scaler1, scaler2 pyod.models.rod.rod_3D(*x*, *gm=None*, *median=None*, *scaler1=None*, *scaler2=None*)[[source]](_modules/pyod/models/rod.html#rod_3D)[#](#pyod.models.rod.rod_3D) Find ROD scores for 3D data. Note that gm, scaler1 and scaler2 will be returned “as they are” and without being changed if the model has been fit already. ##### Parameters[#](#id636) x : array-like, 3D data points gm: list (default=None), the geometric median median: float (default=None), MAD median scaler1: obj (default=None), MinMaxScaler of Angles group 1 scaler2: obj (default=None), MinMaxScaler of Angles group 2 ##### Returns[#](#id637) decision_scores, gm, scaler1, scaler2 pyod.models.rod.rod_nD(*X*, *parallel*, *gm=None*, *median=None*, *data_scaler=None*, *angles_scalers1=None*, *angles_scalers2=None*)[[source]](_modules/pyod/models/rod.html#rod_nD)[#](#pyod.models.rod.rod_nD) Find overall ROD scores when the data has more than 3 dimensions: * scale the dataset using RobustScaler * decompose the full space into combinations of 3D subspaces * apply ROD on each combination * squish the scores per subspace so they are comparable * calculate the average of the ROD scores of all subspaces per observation.
Note that if gm, data_scaler, angles_scalers1 and angles_scalers2 are None, this means it is a fit() process: they will be calculated and returned to the class to be saved for future predictions. Otherwise, if they are not None, it is a prediction process. ##### Parameters[#](#id638) X : array-like, data points parallel: bool, True runs the algorithm in parallel gm: list (default=None), the geometric median median: list (default=None), MAD medians data_scaler: obj (default=None), RobustScaler of data angles_scalers1: list (default=None), MinMaxScalers of Angles group 1 angles_scalers2: list (default=None), MinMaxScalers of Angles group 2 ##### Returns[#](#id639) ROD decision scores, gm, median, data_scaler, angles_scalers1, angles_scalers2 pyod.models.rod.scale_angles(*gammas*, *scaler1=None*, *scaler2=None*)[[source]](_modules/pyod/models/rod.html#scale_angles)[#](#pyod.models.rod.scale_angles) Scale all angles: angles <= 90 degrees are scaled within [0, 54.7] and angles > 90 degrees are scaled within [90, 126]. ##### Parameters[#](#id640) gammas : list, angles scaler1: obj (default=None), MinMaxScaler of Angles group 1 scaler2: obj (default=None), MinMaxScaler of Angles group 2 ##### Returns[#](#id641) scaled angles, scaler1, scaler2 pyod.models.rod.sigmoid(*x*)[[source]](_modules/pyod/models/rod.html#sigmoid)[#](#pyod.models.rod.sigmoid) Implementation of the sigmoid function. ##### Parameters[#](#id642) x : array-like, decision scores ##### Returns[#](#id643) array-like, x after applying sigmoid #### pyod.models.sampling module[#](#module-pyod.models.sampling) Outlier detection based on Sampling (SP) *class* pyod.models.sampling.Sampling(*contamination=0.1*, *subset_size=20*, *metric='minkowski'*, *metric_params=None*, *random_state=None*)[[source]](_modules/pyod/models/sampling.html#Sampling)[#](#pyod.models.sampling.Sampling) Bases: [`BaseDetector`](index.html#pyod.models.base.BaseDetector) Sampling class for outlier detection. <NAME>., <NAME>.: Rapid Distance-Based Outlier Detection via Sampling, Advances in Neural Information Processing Systems (NIPS 2013), 467-475, 2013. See [[BSB13](#id843)] for details. ##### Parameters[#](#id645) contaminationfloat in (0., 0.5), optional (default=0.1)The amount of contamination of the data set, i.e. the proportion of outliers in the data set. Used when fitting to define the threshold on the decision function. subset_sizefloat in (0., 1.0) or int (0, n_samples), optional (default=20)The size of the subset of the data set. Sampling a subset from the data set is performed only once. metricstring or callable, default ‘minkowski’Metric to use for distance computation. Any metric from scikit-learn or scipy.spatial.distance can be used. If metric is a callable function, it is called on each pair of instances (rows) and the resulting value recorded. The callable should take two arrays as input and return one value indicating the distance between them. This works for SciPy’s metrics, but is less efficient than passing the metric name as a string. Distance matrices are not supported. Valid values for metric are: * from scikit-learn: [‘cityblock’, ‘cosine’, ‘euclidean’, ‘l1’, ‘l2’, ‘manhattan’] * from scipy.spatial.distance: [‘braycurtis’, ‘canberra’, ‘chebyshev’, ‘correlation’, ‘dice’, ‘hamming’, ‘jaccard’, ‘kulsinski’, ‘mahalanobis’, ‘matching’, ‘minkowski’, ‘rogerstanimoto’, ‘russellrao’, ‘seuclidean’, ‘sokalmichener’, ‘sokalsneath’, ‘sqeuclidean’, ‘yule’] See the documentation for scipy.spatial.distance for details on these metrics.
metric_paramsdict, optional (default=None)Additional keyword arguments for the metric function. random_stateint, RandomState instance or None, optional (default=None)If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random. ##### Attributes[#](#id646) [decision_scores_](#id1158)numpy array of shape (n_samples,)The outlier scores of the training data. The higher, the more abnormal. Outliers tend to have higher scores. This value is available once the detector is fitted. [threshold_](#id1160)floatThe threshold is based on `contamination`. It is the `n_samples * contamination` most abnormal samples in `decision_scores_`. The threshold is calculated for generating binary outlier labels. [labels_](#id1162)int, either 0 or 1The binary labels of the training data. 0 stands for inliers and 1 for outliers/anomalies. It is generated by applying `threshold_` on `decision_scores_`. decision_function(*X*)[[source]](_modules/pyod/models/sampling.html#Sampling.decision_function)[#](#pyod.models.sampling.Sampling.decision_function) Predict raw anomaly score of X using the fitted detector. The anomaly score of an input sample is computed based on different detector algorithms. For consistency, outliers are assigned with larger anomaly scores. ###### Parameters[#](#id647) Xnumpy array of shape (n_samples, n_features)The test input samples. ###### Returns[#](#id648) anomaly_scoresnumpy array of shape (n_samples,)The anomaly score of the input samples. fit(*X*, *y=None*)[[source]](_modules/pyod/models/sampling.html#Sampling.fit)[#](#pyod.models.sampling.Sampling.fit) Fit detector. y is ignored in unsupervised methods. ###### Parameters[#](#id649) Xnumpy array of shape (n_samples, n_features)The input samples. yIgnoredNot used, present for API consistency by convention. ###### Returns[#](#id650) selfobjectFitted estimator. fit_predict(*X*, *y=None*)[#](#pyod.models.sampling.Sampling.fit_predict) DEPRECATED Fit detector first and then predict whether a particular sample is an outlier or not. y is ignored in unsupervised models. Xnumpy array of shape (n_samples, n_features)The input samples. yIgnoredNot used, present for API consistency by convention. outlier_labelsnumpy array of shape (n_samples,)For each observation, tells whether it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers. Deprecated since version 0.6.9: fit_predict will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency. fit_predict_score(*X*, *y*, *scoring='roc_auc_score'*)[#](#pyod.models.sampling.Sampling.fit_predict_score) DEPRECATED Fit the detector, predict on samples, and evaluate the model by predefined metrics, e.g., ROC. Xnumpy array of shape (n_samples, n_features)The input samples. yIgnoredNot used, present for API consistency by convention. scoringstr, optional (default=’roc_auc_score’)Evaluation metric: * ‘roc_auc_score’: ROC score * ‘prc_n_score’: Precision @ rank n score score : float Deprecated since version 0.6.9: fit_predict_score will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency. Scoring can be done by calling an evaluation method, e.g., AUC ROC.
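A minimal sketch of the distance-based workflow above (illustrative values; `subset_size` and `metric` as documented in the Parameters section):

```
# Illustrative sketch: Sampling with an explicit subset size and metric.
from pyod.models.sampling import Sampling
from pyod.utils.data import generate_data

X_train, y_train, X_test, y_test = generate_data(
    n_train=200, n_test=50, contamination=0.1, random_state=42)

clf = Sampling(subset_size=20, metric='minkowski', random_state=42)
clf.fit(X_train)
scores = clf.decision_function(X_test)  # distances to the sampled subset
```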
get_params(*deep=True*)[#](#pyod.models.sampling.Sampling.get_params) Get parameters for this estimator. See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information. ###### Parameters[#](#id651) deepbool, optional (default=True)If True, will return the parameters for this estimator and contained subobjects that are estimators. ###### Returns[#](#id652) paramsmapping of string to anyParameter names mapped to their values. predict(*X*, *return_confidence=False*)[#](#pyod.models.sampling.Sampling.predict) Predict if a particular sample is an outlier or not. ###### Parameters[#](#id653) Xnumpy array of shape (n_samples, n_features)The input samples. return_confidenceboolean, optional (default=False)If True, also return the confidence of prediction. ###### Returns[#](#id654) outlier_labelsnumpy array of shape (n_samples,)For each observation, tells whether it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers. confidencenumpy array of shape (n_samples,).Only if return_confidence is set to True. predict_confidence(*X*)[#](#pyod.models.sampling.Sampling.predict_confidence) Predict the model’s confidence in making the same prediction under slightly different training sets. See [[BPVD20](#id839)]. ###### Parameters[#](#id656) Xnumpy array of shape (n_samples, n_features)The input samples. ###### Returns[#](#id657) confidencenumpy array of shape (n_samples,) For each observation, tells how consistently the model would make the same prediction if the training set was perturbed. Return a probability, ranging in [0,1]. predict_proba(*X*, *method='linear'*, *return_confidence=False*)[#](#pyod.models.sampling.Sampling.predict_proba) Predict the probability of a sample being outlier. Two approaches are possible: 1. simply use Min-max conversion to linearly transform the outlier scores into the range of [0,1]. The model must be fitted first. 2. use unifying scores, see [[BKKSZ11](#id800)]. ###### Parameters[#](#id659) Xnumpy array of shape (n_samples, n_features)The input samples. methodstr, optional (default=’linear’)probability conversion method. It must be one of ‘linear’ or ‘unify’. return_confidenceboolean, optional (default=False)If True, also return the confidence of prediction. ###### Returns[#](#id660) outlier_probabilitynumpy array of shape (n_samples, n_classes)For each observation, tells whether or not it should be considered as an outlier according to the fitted model. Return the outlier probability, ranging in [0,1]. Note it depends on the number of classes, which is by default 2 classes ([proba of normal, proba of outliers]). set_params(***params*)[#](#pyod.models.sampling.Sampling.set_params) Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object. See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information.
###### Returns[#](#id661) self : object #### pyod.models.sod module[#](#module-pyod.models.sod) Subspace Outlier Detection (SOD) *class* pyod.models.sod.SOD(*contamination=0.1*, *n_neighbors=20*, *ref_set=10*, *alpha=0.8*)[[source]](_modules/pyod/models/sod.html#SOD)[#](#pyod.models.sod.SOD) Bases: [`BaseDetector`](index.html#pyod.models.base.BaseDetector) The Subspace outlier detection (SOD) schema aims to detect outliers in varying subspaces of a high dimensional feature space. For each data object, SOD explores the axis-parallel subspace spanned by the data object’s neighbors and determines how much the object deviates from the neighbors in this subspace. See [[BKKrogerSZ09](#id825)] for details. ##### Parameters[#](#id663) n_neighborsint, optional (default=20)Number of neighbors to use by default for k neighbors queries. ref_set: int, optional (default=10)Specifies the number of shared nearest neighbors to create the reference set. Note that ref_set must be smaller than n_neighbors. alpha: float in (0., 1.), optional (default=0.8)Specifies the lower limit for selecting a subspace. 0.8 is set as default as suggested in the original paper. contaminationfloat in (0., 0.5), optional (default=0.1)The amount of contamination of the data set, i.e. the proportion of outliers in the data set. Used when fitting to define the threshold on the decision function. ##### Attributes[#](#id664) [decision_scores_](#id1164)numpy array of shape (n_samples,)The outlier scores of the training data. The higher, the more abnormal. Outliers tend to have higher scores. This value is available once the detector is fitted. [threshold_](#id1166)floatThe threshold is based on `contamination`. It is the `n_samples * contamination` most abnormal samples in `decision_scores_`. The threshold is calculated for generating binary outlier labels. [labels_](#id1168)int, either 0 or 1The binary labels of the training data. 0 stands for inliers and 1 for outliers/anomalies. It is generated by applying `threshold_` on `decision_scores_`. decision_function(*X*)[[source]](_modules/pyod/models/sod.html#SOD.decision_function)[#](#pyod.models.sod.SOD.decision_function) Predict raw anomaly score of X using the fitted detector. The anomaly score of an input sample is computed based on different detector algorithms. For consistency, outliers are assigned with larger anomaly scores. ###### Parameters[#](#id665) Xnumpy array of shape (n_samples, n_features)The training input samples. Sparse matrices are accepted only if they are supported by the base estimator. ###### Returns[#](#id666) anomaly_scoresnumpy array of shape (n_samples,)The anomaly score of the input samples. fit(*X*, *y=None*)[[source]](_modules/pyod/models/sod.html#SOD.fit)[#](#pyod.models.sod.SOD.fit) Fit detector. y is ignored in unsupervised methods. ###### Parameters[#](#id667) Xnumpy array of shape (n_samples, n_features)The input samples. yIgnoredNot used, present for API consistency by convention. ###### Returns[#](#id668) selfobjectFitted estimator. fit_predict(*X*, *y=None*)[#](#pyod.models.sod.SOD.fit_predict) DEPRECATED Fit detector first and then predict whether a particular sample is an outlier or not. y is ignored in unsupervised models. Xnumpy array of shape (n_samples, n_features)The input samples. yIgnoredNot used, present for API consistency by convention. outlier_labelsnumpy array of shape (n_samples,)For each observation, tells whether it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers.
Deprecated since version 0.6.9: fit_predict will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency. fit_predict_score(*X*, *y*, *scoring='roc_auc_score'*)[#](#pyod.models.sod.SOD.fit_predict_score) DEPRECATED Fit the detector, predict on samples, and evaluate the model by predefined metrics, e.g., ROC. Xnumpy array of shape (n_samples, n_features)The input samples. yIgnoredNot used, present for API consistency by convention. scoringstr, optional (default=’roc_auc_score’)Evaluation metric: * ‘roc_auc_score’: ROC score * ‘prc_n_score’: Precision @ rank n score score : float Deprecated since version 0.6.9: fit_predict_score will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency. Scoring can be done by calling an evaluation method, e.g., AUC ROC. get_params(*deep=True*)[#](#pyod.models.sod.SOD.get_params) Get parameters for this estimator. See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information. ###### Parameters[#](#id669) deepbool, optional (default=True)If True, will return the parameters for this estimator and contained subobjects that are estimators. ###### Returns[#](#id670) paramsmapping of string to anyParameter names mapped to their values. predict(*X*, *return_confidence=False*)[#](#pyod.models.sod.SOD.predict) Predict if a particular sample is an outlier or not. ###### Parameters[#](#id671) Xnumpy array of shape (n_samples, n_features)The input samples. return_confidenceboolean, optional (default=False)If True, also return the confidence of prediction. ###### Returns[#](#id672) outlier_labelsnumpy array of shape (n_samples,)For each observation, tells whether it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers. confidencenumpy array of shape (n_samples,).Only if return_confidence is set to True. predict_confidence(*X*)[#](#pyod.models.sod.SOD.predict_confidence) Predict the model’s confidence in making the same prediction under slightly different training sets. See [[BPVD20](#id839)]. ###### Parameters[#](#id674) Xnumpy array of shape (n_samples, n_features)The input samples. ###### Returns[#](#id675) confidencenumpy array of shape (n_samples,) For each observation, tells how consistently the model would make the same prediction if the training set was perturbed. Return a probability, ranging in [0,1]. predict_proba(*X*, *method='linear'*, *return_confidence=False*)[#](#pyod.models.sod.SOD.predict_proba) Predict the probability of a sample being outlier. Two approaches are possible: 1. simply use Min-max conversion to linearly transform the outlier scores into the range of [0,1]. The model must be fitted first. 2. use unifying scores, see [[BKKSZ11](#id800)]. ###### Parameters[#](#id677) Xnumpy array of shape (n_samples, n_features)The input samples. methodstr, optional (default=’linear’)probability conversion method. It must be one of ‘linear’ or ‘unify’. return_confidenceboolean, optional (default=False)If True, also return the confidence of prediction. ###### Returns[#](#id678) outlier_probabilitynumpy array of shape (n_samples, n_classes)For each observation, tells whether or not it should be considered as an outlier according to the fitted model. Return the outlier probability, ranging in [0,1]. Note it depends on the number of classes, which is by default 2 classes ([proba of normal, proba of outliers]). set_params(***params*)[#](#pyod.models.sod.SOD.set_params) Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object. See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information. ###### Returns[#](#id679) self : object
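A minimal sketch for the SOD class documented above (illustrative; recall that ref_set must be smaller than n_neighbors, as noted in the Parameters section):

```
# Illustrative sketch: SOD with the documented default-style parameters.
from pyod.models.sod import SOD
from pyod.utils.data import generate_data

X_train, y_train, X_test, y_test = generate_data(
    n_train=200, n_test=50, contamination=0.1, random_state=42)

clf = SOD(n_neighbors=20, ref_set=10, alpha=0.8)  # ref_set < n_neighbors
clf.fit(X_train)
labels = clf.predict(X_test)  # 0 = inlier, 1 = outlier
```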
#### pyod.models.so_gaal module[#](#module-pyod.models.so_gaal) Single-Objective Generative Adversarial Active Learning. Part of the code is adapted from <https://github.com/leibinghe/GAAL-based-outlier-detection>. *class* pyod.models.so_gaal.SO_GAAL(*stop_epochs=20*, *lr_d=0.01*, *lr_g=0.0001*, *momentum=0.9*, *contamination=0.1*)[[source]](_modules/pyod/models/so_gaal.html#SO_GAAL)[#](#pyod.models.so_gaal.SO_GAAL) Bases: [`BaseDetector`](index.html#pyod.models.base.BaseDetector) Single-Objective Generative Adversarial Active Learning. SO-GAAL directly generates informative potential outliers to assist the classifier in describing a boundary that can separate outliers from normal data effectively. Moreover, to prevent the generator from falling into the mode collapse problem, the network structure of SO-GAAL is expanded from a single generator (SO-GAAL) to multiple generators with different objectives (MO-GAAL) to generate a reasonable reference distribution for the whole dataset. Read more in [[BLLZ+19](#id818)]. ##### Parameters[#](#id681) contaminationfloat in (0., 0.5), optional (default=0.1)The amount of contamination of the data set, i.e. the proportion of outliers in the data set. Used when fitting to define the threshold on the decision function. stop_epochsint, optional (default=20)The number of epochs of training. The total number of epochs equals three times stop_epochs. lr_dfloat, optional (default=0.01)The learning rate of the discriminator. lr_gfloat, optional (default=0.0001)The learning rate of the generator. momentumfloat, optional (default=0.9)The momentum parameter for SGD. ##### Attributes[#](#id682) [decision_scores_](#id1170)numpy array of shape (n_samples,)The outlier scores of the training data. The higher, the more abnormal. Outliers tend to have higher scores. This value is available once the detector is fitted. [threshold_](#id1172)floatThe threshold is based on `contamination`. It is the `n_samples * contamination` most abnormal samples in `decision_scores_`. The threshold is calculated for generating binary outlier labels. [labels_](#id1174)int, either 0 or 1The binary labels of the training data. 0 stands for inliers and 1 for outliers/anomalies. It is generated by applying `threshold_` on `decision_scores_`. decision_function(*X*)[[source]](_modules/pyod/models/so_gaal.html#SO_GAAL.decision_function)[#](#pyod.models.so_gaal.SO_GAAL.decision_function) Predict raw anomaly score of X using the fitted detector. The anomaly score of an input sample is computed based on different detector algorithms. For consistency, outliers are assigned with larger anomaly scores. ###### Parameters[#](#id683) Xnumpy array of shape (n_samples, n_features)The training input samples. Sparse matrices are accepted only if they are supported by the base estimator. ###### Returns[#](#id684) anomaly_scoresnumpy array of shape (n_samples,)The anomaly score of the input samples.
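A minimal sketch for SO_GAAL (illustrative; training a GAN-based detector requires the deep-learning backend PyOD is installed with, and, per the Parameters section, training runs for 3 * stop_epochs epochs):

```
# Illustrative sketch: SO_GAAL with the documented defaults spelled out.
from pyod.models.so_gaal import SO_GAAL
from pyod.utils.data import generate_data

X_train, y_train, X_test, y_test = generate_data(
    n_train=500, n_test=100, contamination=0.1, random_state=42)

clf = SO_GAAL(stop_epochs=20, lr_d=0.01, lr_g=0.0001, momentum=0.9)
clf.fit(X_train)                        # trains for 3 * stop_epochs epochs
scores = clf.decision_function(X_test)
```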
fit(*X*, *y=None*)[[source]](_modules/pyod/models/so_gaal.html#SO_GAAL.fit)[#](#pyod.models.so_gaal.SO_GAAL.fit) Fit detector. y is ignored in unsupervised methods. ###### Parameters[#](#id685) Xnumpy array of shape (n_samples, n_features)The input samples. yIgnoredNot used, present for API consistency by convention. ###### Returns[#](#id686) selfobjectFitted estimator. fit_predict(*X*, *y=None*)[#](#pyod.models.so_gaal.SO_GAAL.fit_predict) DEPRECATED Fit detector first and then predict whether a particular sample is an outlier or not. y is ignored in unsupervised models. Xnumpy array of shape (n_samples, n_features)The input samples. yIgnoredNot used, present for API consistency by convention. outlier_labelsnumpy array of shape (n_samples,)For each observation, tells whether it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers. Deprecated since version 0.6.9: fit_predict will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency. fit_predict_score(*X*, *y*, *scoring='roc_auc_score'*)[#](#pyod.models.so_gaal.SO_GAAL.fit_predict_score) DEPRECATED Fit the detector, predict on samples, and evaluate the model by predefined metrics, e.g., ROC. Xnumpy array of shape (n_samples, n_features)The input samples. yIgnoredNot used, present for API consistency by convention. scoringstr, optional (default=’roc_auc_score’)Evaluation metric: * ‘roc_auc_score’: ROC score * ‘prc_n_score’: Precision @ rank n score score : float Deprecated since version 0.6.9: fit_predict_score will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency. Scoring can be done by calling an evaluation method, e.g., AUC ROC. get_params(*deep=True*)[#](#pyod.models.so_gaal.SO_GAAL.get_params) Get parameters for this estimator. See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information. ###### Parameters[#](#id687) deepbool, optional (default=True)If True, will return the parameters for this estimator and contained subobjects that are estimators. ###### Returns[#](#id688) paramsmapping of string to anyParameter names mapped to their values. predict(*X*, *return_confidence=False*)[#](#pyod.models.so_gaal.SO_GAAL.predict) Predict if a particular sample is an outlier or not. ###### Parameters[#](#id689) Xnumpy array of shape (n_samples, n_features)The input samples. return_confidenceboolean, optional (default=False)If True, also return the confidence of prediction. ###### Returns[#](#id690) outlier_labelsnumpy array of shape (n_samples,)For each observation, tells whether it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers. confidencenumpy array of shape (n_samples,).Only if return_confidence is set to True. predict_confidence(*X*)[#](#pyod.models.so_gaal.SO_GAAL.predict_confidence) Predict the model’s confidence in making the same prediction under slightly different training sets. See [[BPVD20](#id839)]. ###### Parameters[#](#id692) Xnumpy array of shape (n_samples, n_features)The input samples. ###### Returns[#](#id693) confidencenumpy array of shape (n_samples,) For each observation, tells how consistently the model would make the same prediction if the training set was perturbed. Return a probability, ranging in [0,1].
predict_proba(*X*, *method='linear'*, *return_confidence=False*)[#](#pyod.models.so_gaal.SO_GAAL.predict_proba)
Predict the probability of a sample being outlier. Two approaches are possible:

1. simply use Min-max conversion to linearly transform the outlier scores into the range of [0,1]. The model must be fitted first.
2. use unifying scores, see [[BKKSZ11](#id800)].

###### Parameters[#](#id695)

X : numpy array of shape (n_samples, n_features)
The input samples.

method : str, optional (default='linear')
Probability conversion method. It must be one of 'linear' or 'unify'.

return_confidence : boolean, optional (default=False)
If True, also return the confidence of prediction.

###### Returns[#](#id696)

outlier_probability : numpy array of shape (n_samples, n_classes)
For each observation, tells whether or not it should be considered as an outlier according to the fitted model. Return the outlier probability, ranging in [0,1]. Note it depends on the number of classes, which is by default 2 classes ([proba of normal, proba of outliers]).

set_params(***params*)[#](#pyod.models.so_gaal.SO_GAAL.set_params)
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form `<component>__<parameter>` so that it's possible to update each component of a nested object. See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information.

###### Returns[#](#id697)

self : object

#### pyod.models.sos module[#](#module-pyod.models.sos)

Stochastic Outlier Selection (SOS). Part of the code is adapted from <https://github.com/jeroenjanssens/scikit-sos>.

*class* pyod.models.sos.SOS(*contamination=0.1*, *perplexity=4.5*, *metric='euclidean'*, *eps=1e-05*)[[source]](_modules/pyod/models/sos.html#SOS)[#](#pyod.models.sos.SOS)

Bases: [`BaseDetector`](index.html#pyod.models.base.BaseDetector)

Stochastic Outlier Selection. SOS employs the concept of affinity to quantify the relationship from one data point to another data point. Affinity is proportional to the similarity between two data points. So, a data point has little affinity with a dissimilar data point. A data point is selected as an outlier when all the other data points have insufficient affinity with it. Read more in [[BJHuszarPvdH12](#id815)].

##### Parameters[#](#id699)

contamination : float in (0., 0.5), optional (default=0.1)
The amount of contamination of the data set, i.e. the proportion of outliers in the data set. Used when fitting to define the threshold on the decision function.

perplexity : float, optional (default=4.5)
A smooth measure of the effective number of neighbours. The perplexity parameter is similar to the parameter k in the kNN algorithm (the number of nearest neighbors). The range of perplexity can be any real number between 1 and n-1, where n is the number of samples.

metric : str, optional (default='euclidean')
Metric used for the distance computation. Any metric from scipy.spatial.distance can be used.
Valid values for metric are:

* 'euclidean'
* from scipy.spatial.distance: ['braycurtis', 'canberra', 'chebyshev', 'correlation', 'dice', 'hamming', 'jaccard', 'kulsinski', 'mahalanobis', 'matching', 'minkowski', 'rogerstanimoto', 'russellrao', 'seuclidean', 'sokalmichener', 'sokalsneath', 'sqeuclidean', 'yule']

See the documentation for scipy.spatial.distance for details on these metrics: <http://docs.scipy.org/doc/scipy/reference/spatial.distance.html>

eps : float, optional (default=1e-5)
Tolerance threshold for floating point errors.

##### Attributes[#](#id700)

[decision_scores_](#id1176) : numpy array of shape (n_samples,)
The outlier scores of the training data. The higher, the more abnormal. Outliers tend to have higher scores. This value is available once the detector is fitted.

[threshold_](#id1178) : float
The threshold is based on `contamination`. It is the `n_samples * contamination` most abnormal samples in `decision_scores_`. The threshold is calculated for generating binary outlier labels.

[labels_](#id1180) : int, either 0 or 1
The binary labels of the training data. 0 stands for inliers and 1 for outliers/anomalies. It is generated by applying `threshold_` on `decision_scores_`.

##### Examples[#](#id701)

```
>>> from pyod.models.sos import SOS
>>> from pyod.utils.data import generate_data
>>> n_train = 50
>>> n_test = 50
>>> contamination = 0.1
>>> X_train, y_train, X_test, y_test = generate_data(
...     n_train=n_train, n_test=n_test,
...     contamination=contamination, random_state=42)
>>>
>>> clf = SOS()
>>> clf.fit(X_train)
SOS(contamination=0.1, eps=1e-05, metric='euclidean', perplexity=4.5)
```

decision_function(*X*)[[source]](_modules/pyod/models/sos.html#SOS.decision_function)[#](#pyod.models.sos.SOS.decision_function)
Predict raw anomaly score of X using the fitted detector. The anomaly score of an input sample is computed based on different detector algorithms. For consistency, outliers are assigned with larger anomaly scores.

###### Parameters[#](#id702)

X : numpy array of shape (n_samples, n_features)
The training input samples. Sparse matrices are accepted only if they are supported by the base estimator.

###### Returns[#](#id703)

anomaly_scores : numpy array of shape (n_samples,)
The anomaly score of the input samples.

fit(*X*, *y=None*)[[source]](_modules/pyod/models/sos.html#SOS.fit)[#](#pyod.models.sos.SOS.fit)
Fit detector. y is ignored in unsupervised methods.

###### Parameters[#](#id704)

X : numpy array of shape (n_samples, n_features)
The input samples.

y : Ignored
Not used, present for API consistency by convention.

###### Returns[#](#id705)

self : object
Fitted estimator.

fit_predict(*X*, *y=None*)[#](#pyod.models.sos.SOS.fit_predict)
DEPRECATED. Fit detector first and then predict whether a particular sample is an outlier or not. y is ignored in unsupervised models.

X : numpy array of shape (n_samples, n_features)
The input samples.

y : Ignored
Not used, present for API consistency by convention.

outlier_labels : numpy array of shape (n_samples,)
For each observation, tells whether it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers.

Deprecated since version 0.6.9: fit_predict will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency.

fit_predict_score(*X*, *y*, *scoring='roc_auc_score'*)[#](#pyod.models.sos.SOS.fit_predict_score)
DEPRECATED. Fit the detector, predict on samples, and evaluate the model by predefined metrics, e.g., ROC.
X : numpy array of shape (n_samples, n_features)
The input samples.

y : Ignored
Not used, present for API consistency by convention.

scoring : str, optional (default='roc_auc_score')
Evaluation metric:

* 'roc_auc_score': ROC score
* 'prc_n_score': Precision @ rank n score

score : float

Deprecated since version 0.6.9: fit_predict_score will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency. Scoring can be done by calling an evaluation method, e.g., AUC ROC.

get_params(*deep=True*)[#](#pyod.models.sos.SOS.get_params)
Get parameters for this estimator. See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information.

###### Parameters[#](#id706)

deep : bool, optional (default=True)
If True, will return the parameters for this estimator and contained subobjects that are estimators.

###### Returns[#](#id707)

params : mapping of string to any
Parameter names mapped to their values.

predict(*X*, *return_confidence=False*)[#](#pyod.models.sos.SOS.predict)
Predict if a particular sample is an outlier or not.

###### Parameters[#](#id708)

X : numpy array of shape (n_samples, n_features)
The input samples.

return_confidence : boolean, optional (default=False)
If True, also return the confidence of prediction.

###### Returns[#](#id709)

outlier_labels : numpy array of shape (n_samples,)
For each observation, tells whether it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers.

confidence : numpy array of shape (n_samples,)
Only if return_confidence is set to True.

predict_confidence(*X*)[#](#pyod.models.sos.SOS.predict_confidence)
Predict the model's confidence in making the same prediction under slightly different training sets. See [[BPVD20](#id839)].

###### Parameters[#](#id711)

X : numpy array of shape (n_samples, n_features)
The input samples.

###### Returns[#](#id712)

confidence : numpy array of shape (n_samples,)
For each observation, tells how consistently the model would make the same prediction if the training set was perturbed. Returns a probability, ranging in [0,1].

predict_proba(*X*, *method='linear'*, *return_confidence=False*)[#](#pyod.models.sos.SOS.predict_proba)
Predict the probability of a sample being outlier. Two approaches are possible:

1. simply use Min-max conversion to linearly transform the outlier scores into the range of [0,1]. The model must be fitted first.
2. use unifying scores, see [[BKKSZ11](#id800)].

###### Parameters[#](#id714)

X : numpy array of shape (n_samples, n_features)
The input samples.

method : str, optional (default='linear')
Probability conversion method. It must be one of 'linear' or 'unify'.

return_confidence : boolean, optional (default=False)
If True, also return the confidence of prediction.

###### Returns[#](#id715)

outlier_probability : numpy array of shape (n_samples, n_classes)
For each observation, tells whether or not it should be considered as an outlier according to the fitted model. Return the outlier probability, ranging in [0,1]. Note it depends on the number of classes, which is by default 2 classes ([proba of normal, proba of outliers]).

set_params(***params*)[#](#pyod.models.sos.SOS.set_params)
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form `<component>__<parameter>` so that it's possible to update each component of a nested object.
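For instance, a minimal sketch (the `perplexity` value is illustrative; for a nested estimator, a key such as `detector__perplexity` would address the inner component):

```
>>> from pyod.models.sos import SOS
>>> clf = SOS()
>>> _ = clf.set_params(perplexity=10.0)   # returns self; parameter updated in place
>>> clf.get_params()['perplexity']
10.0
```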
See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information.

###### Returns[#](#id716)

self : object

#### pyod.models.suod module[#](#module-pyod.models.suod)

SUOD

*class* pyod.models.suod.SUOD(*base_estimators=None*, *contamination=0.1*, *combination='average'*, *n_jobs=None*, *rp_clf_list=None*, *rp_ng_clf_list=None*, *rp_flag_global=True*, *target_dim_frac=0.5*, *jl_method='basic'*, *bps_flag=True*, *approx_clf_list=None*, *approx_ng_clf_list=None*, *approx_flag_global=True*, *approx_clf=None*, *cost_forecast_loc_fit=None*, *cost_forecast_loc_pred=None*, *verbose=False*)[[source]](_modules/pyod/models/suod.html#SUOD)[#](#pyod.models.suod.SUOD)

Bases: [`BaseDetector`](index.html#pyod.models.base.BaseDetector)

SUOD (Scalable Unsupervised Outlier Detection) is an acceleration framework for large-scale unsupervised outlier detector training and prediction. See [[BZHC+21](#id836)] for details.

##### Parameters[#](#id718)

base_estimators : list, length must be greater than 1
A list of base estimators. Certain methods must be present, e.g., fit and predict.

combination : str, optional (default='average')
Decide how to aggregate the results from multiple models:

* "average" : average the results from all base detectors
* "maximization" : output the max value across all base detectors

contamination : float in (0., 0.5), optional (default=0.1)
The amount of contamination of the data set, i.e. the proportion of outliers in the data set. Used when fitting to define the threshold on the decision function.

n_jobs : optional (default=1)
The number of jobs to run in parallel for both fit and predict. If -1, then the number of jobs is set to the number of jobs that can actually run in parallel.

rp_clf_list : list, optional (default=None)
The list of outlier detection models to use random projection. The detector name should be consistent with PyOD.

rp_ng_clf_list : list, optional (default=None)
The list of outlier detection models NOT to use random projection. The detector name should be consistent with PyOD.

rp_flag_global : bool, optional (default=True)
If set to False, random projection is turned off for all base models.

target_dim_frac : float in (0., 1), optional (default=0.5)
The target compression ratio.

jl_method : string, optional (default='basic')
The JL projection method:

* "basic": each component of the transformation matrix is taken at random in N(0,1).
* "discrete": each component of the transformation matrix is taken at random in {-1,1}.
* "circulant": the first row of the transformation matrix is taken at random in N(0,1), and each row is obtained from the previous one by a one-left shift.
* "toeplitz": the first row and column of the transformation matrix are taken at random in N(0,1), and each diagonal has a constant value taken from these first vectors.

bps_flag : bool, optional (default=True)
If set to False, balanced parallel scheduling is turned off.

approx_clf_list : list, optional (default=None)
The list of outlier detection models to use pseudo-supervised approximation. The detector name should be consistent with PyOD.

approx_ng_clf_list : list, optional (default=None)
The list of outlier detection models NOT to use pseudo-supervised approximation. The detector name should be consistent with PyOD.

approx_flag_global : bool, optional (default=True)
If set to False, pseudo-supervised approximation is turned off.

approx_clf : object, optional (default: sklearn RandomForestRegressor)
The supervised model used to approximate unsupervised models.
cost_forecast_loc_fit : str, optional
The location of the pretrained cost prediction forecast for training.

cost_forecast_loc_pred : str, optional
The location of the pretrained cost prediction forecast for prediction.

verbose : int, optional (default=0)
Controls the verbosity of the building process.

##### Attributes[#](#id719)

[decision_scores_](#id1182) : numpy array of shape (n_samples,)
The outlier scores of the training data. The higher, the more abnormal. Outliers tend to have higher scores. This value is available once the detector is fitted.

[threshold_](#id1184) : float
The threshold is based on `contamination`. It is the `n_samples * contamination` most abnormal samples in `decision_scores_`. The threshold is calculated for generating binary outlier labels.

[labels_](#id1186) : int, either 0 or 1
The binary labels of the training data. 0 stands for inliers and 1 for outliers/anomalies. It is generated by applying `threshold_` on `decision_scores_`.

decision_function(*X*)[[source]](_modules/pyod/models/suod.html#SUOD.decision_function)[#](#pyod.models.suod.SUOD.decision_function)
Predict raw anomaly score of X using the fitted detectors. The anomaly score of an input sample is computed based on different detector algorithms. For consistency, outliers are assigned with larger anomaly scores.

###### Parameters[#](#id720)

X : numpy array of shape (n_samples, n_features)
The training input samples. Sparse matrices are accepted only if they are supported by the base estimator.

###### Returns[#](#id721)

anomaly_scores : numpy array of shape (n_samples,)
The anomaly score of the input samples.

fit(*X*, *y=None*)[[source]](_modules/pyod/models/suod.html#SUOD.fit)[#](#pyod.models.suod.SUOD.fit)
Fit detector. y is ignored in unsupervised methods.

###### Parameters[#](#id722)

X : numpy array of shape (n_samples, n_features)
The input samples.

y : Ignored
Not used, present for API consistency by convention.

###### Returns[#](#id723)

self : object
Fitted estimator.

fit_predict(*X*, *y=None*)[#](#pyod.models.suod.SUOD.fit_predict)
DEPRECATED. Fit detector first and then predict whether a particular sample is an outlier or not. y is ignored in unsupervised models.

X : numpy array of shape (n_samples, n_features)
The input samples.

y : Ignored
Not used, present for API consistency by convention.

outlier_labels : numpy array of shape (n_samples,)
For each observation, tells whether it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers.

Deprecated since version 0.6.9: fit_predict will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency.

fit_predict_score(*X*, *y*, *scoring='roc_auc_score'*)[#](#pyod.models.suod.SUOD.fit_predict_score)
DEPRECATED. Fit the detector, predict on samples, and evaluate the model by predefined metrics, e.g., ROC.

X : numpy array of shape (n_samples, n_features)
The input samples.

y : Ignored
Not used, present for API consistency by convention.

scoring : str, optional (default='roc_auc_score')
Evaluation metric:

* 'roc_auc_score': ROC score
* 'prc_n_score': Precision @ rank n score

score : float

Deprecated since version 0.6.9: fit_predict_score will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency. Scoring can be done by calling an evaluation method, e.g., AUC ROC.

get_params(*deep=True*)[#](#pyod.models.suod.SUOD.get_params)
Get parameters for this estimator.
See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information.

###### Parameters[#](#id724)

deep : bool, optional (default=True)
If True, will return the parameters for this estimator and contained subobjects that are estimators.

###### Returns[#](#id725)

params : mapping of string to any
Parameter names mapped to their values.

predict(*X*, *return_confidence=False*)[#](#pyod.models.suod.SUOD.predict)
Predict if a particular sample is an outlier or not.

###### Parameters[#](#id726)

X : numpy array of shape (n_samples, n_features)
The input samples.

return_confidence : boolean, optional (default=False)
If True, also return the confidence of prediction.

###### Returns[#](#id727)

outlier_labels : numpy array of shape (n_samples,)
For each observation, tells whether it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers.

confidence : numpy array of shape (n_samples,)
Only if return_confidence is set to True.

predict_confidence(*X*)[#](#pyod.models.suod.SUOD.predict_confidence)
Predict the model's confidence in making the same prediction under slightly different training sets. See [[BPVD20](#id839)].

###### Parameters[#](#id729)

X : numpy array of shape (n_samples, n_features)
The input samples.

###### Returns[#](#id730)

confidence : numpy array of shape (n_samples,)
For each observation, tells how consistently the model would make the same prediction if the training set was perturbed. Returns a probability, ranging in [0,1].

predict_proba(*X*, *method='linear'*, *return_confidence=False*)[#](#pyod.models.suod.SUOD.predict_proba)
Predict the probability of a sample being outlier. Two approaches are possible:

1. simply use Min-max conversion to linearly transform the outlier scores into the range of [0,1]. The model must be fitted first.
2. use unifying scores, see [[BKKSZ11](#id800)].

###### Parameters[#](#id732)

X : numpy array of shape (n_samples, n_features)
The input samples.

method : str, optional (default='linear')
Probability conversion method. It must be one of 'linear' or 'unify'.

return_confidence : boolean, optional (default=False)
If True, also return the confidence of prediction.

###### Returns[#](#id733)

outlier_probability : numpy array of shape (n_samples, n_classes)
For each observation, tells whether or not it should be considered as an outlier according to the fitted model. Return the outlier probability, ranging in [0,1]. Note it depends on the number of classes, which is by default 2 classes ([proba of normal, proba of outliers]).

set_params(***params*)[#](#pyod.models.suod.SUOD.set_params)
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form `<component>__<parameter>` so that it's possible to update each component of a nested object. See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information.

###### Returns[#](#id734)

self : object

#### pyod.models.thresholds module[#](#pyod-models-thresholds-module)

pyod.models.thresholds.AUCP(***kwargs*)[[source]](_modules/pyod/models/thresholds.html#AUCP)[#](#pyod.models.thresholds.AUCP)
AUCP class for Area Under Curve Percentage thresholder.
Use the area under the curve to evaluate a non-parametric means to threshold scores generated by the decision_scores, where outliers are set to any value beyond where the AUC of the KDE is less than the (mean + abs(mean-median)) percent of the total KDE AUC.

pyod.models.thresholds.BOOT(***kwargs*)[[source]](_modules/pyod/models/thresholds.html#BOOT)[#](#pyod.models.thresholds.BOOT)
BOOT class for Bootstrapping thresholder. Use a bootstrapping-based method to find a non-parametric means to threshold scores generated by the decision_scores, where outliers are set to any value beyond the mean of the confidence intervals.

##### Parameters[#](#id735)

random_state : int, optional (default=1234)
Random seed for bootstrapping a confidence interval. Can also be set to None.

pyod.models.thresholds.CHAU(***kwargs*)[[source]](_modules/pyod/models/thresholds.html#CHAU)[#](#pyod.models.thresholds.CHAU)
CHAU class for Chauvenet's criterion thresholder. Use Chauvenet's criterion to evaluate a non-parametric means to threshold scores generated by the decision_scores, where outliers are set to any value below Chauvenet's criterion.

##### Parameters[#](#id736)

method : {'mean', 'median', 'gmean'}, optional (default='mean')
Calculate the area normal to distance using a scaler:

* 'mean': Construct a scaler with the mean of the scores
* 'median': Construct a scaler with the median of the scores
* 'gmean': Construct a scaler with the geometric mean of the scores

pyod.models.thresholds.CLF(***kwargs*)[[source]](_modules/pyod/models/thresholds.html#CLF)[#](#pyod.models.thresholds.CLF)
CLF class for Trained Classifier thresholder. Use the trained linear classifier to evaluate a non-parametric means to threshold scores generated by the decision_scores, where outliers are set to any value beyond 0.

##### Parameters[#](#id737)

method : {'simple', 'complex'}, optional (default='complex')
Type of linear model:

* 'simple': Uses only the scores
* 'complex': Uses the scores, log of the scores, and the scores' PDF

pyod.models.thresholds.CLUST(***kwargs*)[[source]](_modules/pyod/models/thresholds.html#CLUST)[#](#pyod.models.thresholds.CLUST)
CLUST class for clustering type thresholders. Use clustering methods to evaluate a non-parametric means to threshold scores generated by the decision_scores, where outliers are set to any value not labelled as part of the main cluster.

##### Parameters[#](#id738)

method : {'agg', 'birch', 'bang', 'bgm', 'bsas', 'dbscan', 'ema', 'kmeans', 'mbsas', 'mshift', 'optics', 'somsc', 'spec', 'xmeans'}, optional (default='spec')
Clustering method:

* 'agg': Agglomerative
* 'birch': Balanced Iterative Reducing and Clustering using Hierarchies
* 'bang': BANG
* 'bgm': Bayesian Gaussian Mixture
* 'bsas': Basic Sequential Algorithmic Scheme
* 'dbscan': Density-based spatial clustering of applications with noise
* 'ema': Expectation-Maximization clustering algorithm for Gaussian Mixture Model
* 'kmeans': K-means
* 'mbsas': Modified Basic Sequential Algorithmic Scheme
* 'mshift': Mean shift
* 'optics': Ordering Points To Identify Clustering Structure
* 'somsc': Self-organized feature map
* 'spec': Clustering to a projection of the normalized Laplacian
* 'xmeans': X-means

random_state : int, optional (default=1234)
Random seed for the BayesianGaussianMixture clustering (method='bgm'). Can also be set to None.

pyod.models.thresholds.CPD(***kwargs*)[[source]](_modules/pyod/models/thresholds.html#CPD)[#](#pyod.models.thresholds.CPD)
CPD class for Change Point Detection thresholder.
Use change point detection to find a non-parametric means to threshold scores generated by the decision_scores, where outliers are set to any value beyond the detected change point.

##### Parameters[#](#id739)

method : {'Dynp', 'KernelCPD', 'Binseg', 'BottomUp'}, optional (default='Dynp')
Method for change point detection:

* 'Dynp': Dynamic programming (optimal minimum sum of errors per partition)
* 'KernelCPD': RBF kernel function (optimal minimum sum of errors per partition)
* 'Binseg': Binary segmentation
* 'BottomUp': Bottom-up segmentation

transform : {'cdf', 'kde'}, optional (default='cdf')
Data transformation method prior to fit:

* 'cdf': Use the cumulative distribution function
* 'kde': Use the kernel density estimation

pyod.models.thresholds.DECOMP(***kwargs*)[[source]](_modules/pyod/models/thresholds.html#DECOMP)[#](#pyod.models.thresholds.DECOMP)
DECOMP class for Decomposition based thresholders. Use decomposition to evaluate a non-parametric means to threshold scores generated by the decision_scores, where outliers are set to any value beyond the maximum of the decomposed matrix that results from decomposing the cumulative distribution function of the decision scores.

##### Parameters[#](#id740)

method : {'NMF', 'PCA', 'GRP', 'SRP'}, optional (default='PCA')
Method to use for decomposition:

* 'NMF': Non-Negative Matrix Factorization
* 'PCA': Principal Component Analysis
* 'GRP': Gaussian Random Projection
* 'SRP': Sparse Random Projection

random_state : int, optional (default=1234)
Random seed for the decomposition algorithm. Can also be set to None.

pyod.models.thresholds.DSN(***kwargs*)[[source]](_modules/pyod/models/thresholds.html#DSN)[#](#pyod.models.thresholds.DSN)
DSN class for Distance Shift from Normal thresholder. Use the distance shift from normal to evaluate a non-parametric means to threshold scores generated by the decision_scores, where outliers are set to any value beyond the distance calculated by the selected metric.

##### Parameters[#](#id741)

metric : {'JS', 'WS', 'ENG', 'BHT', 'HLL', 'HI', 'LK', 'LP', 'MAH', 'TMT', 'RES', 'KS', 'INT', 'MMD'}, optional (default='MAH')
Metric to use for distance computation:

* 'JS': Jensen-Shannon distance
* 'WS': Wasserstein or Earth Mover's distance
* 'ENG': Energy distance
* 'BHT': Bhattacharyya distance
* 'HLL': Hellinger distance
* 'HI': Histogram intersection distance
* 'LK': Lukaszyk-Karmowski metric for normal distributions
* 'LP': Levy-Prokhorov metric
* 'MAH': Mahalanobis distance
* 'TMT': Tanimoto distance
* 'RES': Studentized residual distance
* 'KS': Kolmogorov-Smirnov distance
* 'INT': Weighted spline interpolated distance
* 'MMD': Maximum Mean Discrepancy distance

random_state : int, optional (default=1234)
Random seed for the normal distribution. Can also be set to None.

pyod.models.thresholds.EB(***kwargs*)[[source]](_modules/pyod/models/thresholds.html#EB)[#](#pyod.models.thresholds.EB)
EB class for Elliptical Boundary thresholder. Use pseudo-random elliptical boundaries to evaluate a non-parametric means to threshold scores generated by the decision_scores, where outliers are set to any value beyond a pseudo-random elliptical boundary set between inliers and outliers.

pyod.models.thresholds.FGD(***kwargs*)[[source]](_modules/pyod/models/thresholds.html#FGD)[#](#pyod.models.thresholds.FGD)
FGD class for Fixed Gradient Descent thresholder.
Use the fixed gradient descent to evaluate a non-parametric means to threshold scores generated by the decision_scores, where outliers are set to any value beyond where the first derivative of the KDE with respect to the decision scores passes the mean of the first and second inflection points.

pyod.models.thresholds.FILTER(***kwargs*)[[source]](_modules/pyod/models/thresholds.html#FILTER)[#](#pyod.models.thresholds.FILTER)
FILTER class for Filtering based thresholders. Use filtering-based methods to evaluate a non-parametric means to threshold scores generated by the decision_scores, where outliers are set to any value beyond the maximum filter value.

##### Parameters[#](#id743)

method : {'gaussian', 'savgol', 'hilbert', 'wiener', 'medfilt', 'decimate', 'detrend', 'resample'}, optional (default='savgol')
Method to filter the scores:

* 'gaussian': use a gaussian based filter
* 'savgol': use the savgol based filter
* 'hilbert': use the hilbert based filter
* 'wiener': use the wiener based filter
* 'medfilt': use a median based filter
* 'decimate': use a decimate based filter
* 'detrend': use a detrend based filter
* 'resample': use a resampling based filter

sigma : int, optional (default='auto')
Variable specific to each filter type; the default sets sigma to len(scores)*np.std(scores):

* 'gaussian': standard deviation for Gaussian kernel
* 'savgol': savgol filter window size
* 'hilbert': number of Fourier components
* 'medfilt': kernel size
* 'decimate': downsampling factor
* 'detrend': number of break points
* 'resample': resampling window size

pyod.models.thresholds.FWFM(***kwargs*)[[source]](_modules/pyod/models/thresholds.html#FWFM)[#](#pyod.models.thresholds.FWFM)
FWFM class for Full Width at Full Minimum thresholder. Use the full width at full minimum (aka base width) to evaluate a non-parametric means to threshold scores generated by the decision_scores, where outliers are set to any value beyond the base width.

pyod.models.thresholds.GESD(***kwargs*)[[source]](_modules/pyod/models/thresholds.html#GESD)[#](#pyod.models.thresholds.GESD)
GESD class for Generalized Extreme Studentized Deviate thresholder. Use the generalized extreme studentized deviate to evaluate a non-parametric means to threshold scores generated by the decision_scores, where outliers are set to any value less than the smallest detected outlier.

##### Parameters[#](#id744)

max_outliers : int, optional (default='auto')
Maximum number of outliers that the dataset may have. The default sets max_outliers to be half the size of the dataset.

alpha : float, optional (default=0.05)
Significance level.

pyod.models.thresholds.HIST(***kwargs*)[[source]](_modules/pyod/models/thresholds.html#HIST)[#](#pyod.models.thresholds.HIST)
HIST class for Histogram based thresholders. Use histogram methods as described in scikit-image.filters to evaluate a non-parametric means to threshold scores generated by the decision_scores, where outliers are set by histogram-generated thresholds depending on the selected method.
##### Parameters[#](#id745)

nbins : int, optional (default='auto')
Number of bins to use in the histogram; the default is set to int(len(scores)**0.7).

method : {'otsu', 'yen', 'isodata', 'li', 'minimum', 'triangle'}, optional (default='triangle')
Histogram filtering based method:

* 'otsu': Otsu's method for filtering
* 'yen': Yen's method for filtering
* 'isodata': Ridler-Calvard or inter-means method for filtering
* 'li': Li's iterative Minimum Cross Entropy method for filtering
* 'minimum': Minimum between two maxima via smoothing method for filtering
* 'triangle': Triangle algorithm method for filtering

pyod.models.thresholds.IQR(***kwargs*)[[source]](_modules/pyod/models/thresholds.html#IQR)[#](#pyod.models.thresholds.IQR)
IQR class for Inter-Quartile Region thresholder. Use the inter-quartile region to evaluate a non-parametric means to threshold scores generated by the decision_scores, where outliers are set to any value beyond the third quartile plus 1.5 times the inter-quartile region.

pyod.models.thresholds.KARCH(***kwargs*)[[source]](_modules/pyod/models/thresholds.html#KARCH)[#](#pyod.models.thresholds.KARCH)
KARCH class for Riemannian Center of Mass thresholder. Use the Karcher mean (Riemannian Center of Mass) to evaluate a non-parametric means to threshold scores generated by the decision_scores, where outliers are set to any value beyond the Karcher mean plus one standard deviation of the decision_scores.

##### Parameters[#](#id746)

ndim : int, optional (default=2)
Number of dimensions to construct the Euclidean manifold.

method : {'simple', 'complex'}, optional (default='complex')
Method for computing the Karcher mean:

* 'simple': Compute the Karcher mean using the 1D array of scores
* 'complex': Compute the Karcher mean between a 2D array dot product of the scores and the sorted scores arrays

pyod.models.thresholds.MAD(***kwargs*)[[source]](_modules/pyod/models/thresholds.html#MAD)[#](#pyod.models.thresholds.MAD)
MAD class for Median Absolute Deviation thresholder. Use the median absolute deviation to evaluate a non-parametric means to threshold scores generated by the decision_scores, where outliers are set to any value beyond the mean plus the median absolute deviation over the standard deviation.

pyod.models.thresholds.MCST(***kwargs*)[[source]](_modules/pyod/models/thresholds.html#MCST)[#](#pyod.models.thresholds.MCST)
MCST class for Monte Carlo Shapiro Tests thresholder. Use uniform random sampling and statistical testing to evaluate a non-parametric means to threshold scores generated by the decision_scores, where outliers are set to any value beyond the minimum value left after iterative Shapiro-Wilk tests have occurred. Note: accuracy decreases with array size; for good results the array should contain fewer than 1000 points. However, this threshold method may fail at any array size.

##### Parameters[#](#id747)

random_state : int, optional (default=1234)
Random seed for the uniform distribution. Can also be set to None.

pyod.models.thresholds.META(***kwargs*)[[source]](_modules/pyod/models/thresholds.html#META)[#](#pyod.models.thresholds.META)
META class for Meta-modelling thresholder. Use a trained meta-model to evaluate a non-parametric means to threshold scores generated by the decision_scores, where outliers are set based on the trained meta-model classifier.
##### Parameters[#](#id748)

method : {'LIN', 'GNB', 'GNBC', 'GNBM'}, optional (default='GNBM')
Select the meta-model:

* 'LIN': RidgeCV trained linear classifier meta-model on true labels
* 'GNB': Gaussian Naive Bayes trained classifier meta-model on true labels
* 'GNBC': Gaussian Naive Bayes trained classifier meta-model on best contamination
* 'GNBM': Gaussian Naive Bayes multivariate trained classifier meta-model

pyod.models.thresholds.MOLL(***kwargs*)[[source]](_modules/pyod/models/thresholds.html#MOLL)[#](#pyod.models.thresholds.MOLL)
MOLL class for Friedrichs' mollifier thresholder. Use the Friedrichs' mollifier to evaluate a non-parametric means to threshold scores generated by the decision_scores, where outliers are set to any value beyond one minus the maximum of the smoothed dataset via convolution.

pyod.models.thresholds.MTT(***kwargs*)[[source]](_modules/pyod/models/thresholds.html#MTT)[#](#pyod.models.thresholds.MTT)
MTT class for Modified Thompson Tau test thresholder. Use the modified Thompson Tau test to evaluate a non-parametric means to threshold scores generated by the decision_scores, where outliers are set to any value beyond the smallest outlier detected by the test.

##### Parameters[#](#id749)

strictness : [1, 2, 3, 4, 5], optional (default=4)
Level of strictness corresponding to the t-Student distribution map to sample.

pyod.models.thresholds.OCSVM(***kwargs*)[[source]](_modules/pyod/models/thresholds.html#OCSVM)[#](#pyod.models.thresholds.OCSVM)
OCSVM class for One-Class Support Vector Machine thresholder. Use a one-class SVM to evaluate a non-parametric means to threshold scores generated by the decision_scores, where outliers are determined by the one-class SVM using a polynomial kernel with the polynomial degree either set or determined by regression internally.

##### Parameters[#](#id750)

model : {'poly', 'sgd'}, optional (default='sgd')
OCSVM model to apply:

* 'poly': Use a polynomial kernel with a regular OCSVM
* 'sgd': Use the Additive Chi2 kernel approximation with a SGDOneClassSVM

degree : int, optional (default='auto')
Polynomial degree to use for the one-class SVM. Default 'auto' finds the optimal degree with linear regression.

gamma : float, optional (default='auto')
Kernel coefficient for polynomial fit for the one-class SVM. Default 'auto' uses 1 / n_features.

criterion : {'aic', 'bic'}, optional (default='bic')
Regression performance metric. AIC is the Akaike Information Criterion, and BIC is the Bayesian Information Criterion. This only applies when degree is set to 'auto'.

nu : float, optional (default='auto')
An upper bound on the fraction of training errors and a lower bound on the fraction of support vectors. Default 'auto' sets nu as the ratio between any point that is less than or equal to the median plus the absolute difference between the mean and geometric mean, over the number of points in the entire dataset.

tol : float, optional (default=1e-3)
The stopping criterion for the one-class SVM.

random_state : int, optional (default=1234)
Random seed for the SVM's data sampling. Can also be set to None.

pyod.models.thresholds.QMCD(***kwargs*)[[source]](_modules/pyod/models/thresholds.html#QMCD)[#](#pyod.models.thresholds.QMCD)
QMCD class for Quasi-Monte Carlo Discrepancy thresholder.
Use the quasi-Monte Carlo discrepancy to evaluate a non-parametric means to threshold scores generated by the decision_scores, where outliers are set to any value beyond a percentile or quantile of one minus the discrepancy (note: a discrepancy quantifies the distance between the continuous uniform distribution on a hypercube and the discrete uniform distribution on distinct sample points).

##### Parameters[#](#id751)

method : {'CD', 'WD', 'MD', 'L2-star'}, optional (default='WD')
Type of discrepancy:

* 'CD': Centered Discrepancy
* 'WD': Wrap-around Discrepancy
* 'MD': Mix between CD/WD
* 'L2-star': L2-star discrepancy

lim : {'Q', 'P'}, optional (default='P')
Filtering method to threshold scores using 1 - discrepancy:

* 'Q': Use quantile limiting
* 'P': Use percentile limiting

pyod.models.thresholds.REGR(***kwargs*)[[source]](_modules/pyod/models/thresholds.html#REGR)[#](#pyod.models.thresholds.REGR)
REGR class for Regression based thresholder. Use regression to evaluate a non-parametric means to threshold scores generated by the decision_scores, where outliers are set to any value beyond the y-intercept value of the linear fit.

##### Parameters[#](#id752)

method : {'siegel', 'theil'}, optional (default='siegel')
Regression based method to calculate the y-intercept:

* 'siegel': implements a method for robust linear regression using repeated medians
* 'theil': implements a method for robust linear regression using paired values

random_state : int, optional (default=1234)
Random seed for the normal distribution. Can also be set to None.

pyod.models.thresholds.VAE(***kwargs*)[[source]](_modules/pyod/models/thresholds.html#VAE)[#](#pyod.models.thresholds.VAE)
VAE class for Variational AutoEncoder thresholder. Use a VAE to evaluate a non-parametric means to threshold scores generated by the decision_scores, where outliers are set to any value beyond the maximum minus the minimum of the reconstructed distribution probabilities after encoding.

##### Parameters[#](#id753)

verbose : bool, optional (default=False)
Display training progress.

device : str, optional (default='cpu')
Device for pytorch.

latent_dims : int, optional (default='auto')
Number of latent dimensions the encoder will map the scores to. Default 'auto' applies automatic dimensionality selection using a profile likelihood.

random_state : int, optional (default=1234)
Random seed for the normal distribution. Can also be set to None.

epochs : int, optional (default=100)
Number of epochs to train the VAE.

batch_size : int, optional (default=64)
Batch size for the dataloader during training.

loss : str, optional (default='kl')
Loss function during training:

* 'kl': use the combined negative log likelihood and Kullback-Leibler divergence
* 'mmd': use the combined negative log likelihood and maximum mean discrepancy

##### Attributes[#](#id754)

[thresh_](#id1188) : threshold value that separates inliers from outliers

pyod.models.thresholds.WIND(***kwargs*)[[source]](_modules/pyod/models/thresholds.html#WIND)[#](#pyod.models.thresholds.WIND)
WIND class for topological Winding number thresholder. Use the topological winding number (with respect to the origin) to evaluate a non-parametric means to threshold scores generated by the decision_scores, where outliers are set to any value beyond the mean intersection point calculated from the winding number.

##### Parameters[#](#id755)

random_state : int, optional (default=1234)
Random seed for the normal distribution. Can also be set to None.
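All of the thresholders in this module share the same usage pattern: instead of a fixed contamination rate, a thresholder object chooses the cutoff on a fitted detector's `decision_scores_`. A minimal sketch, assuming a PyOD version in which the `contamination` argument of a detector accepts a thresholder object; the KNN/FILTER pairing and the data-generation call (as in the SOS example above) are illustrative:

```
>>> from pyod.models.knn import KNN
>>> from pyod.models.thresholds import FILTER
>>> from pyod.utils.data import generate_data
>>> X_train, y_train, X_test, y_test = generate_data(
...     n_train=200, n_test=100, contamination=0.1, random_state=42)
>>> clf = KNN(contamination=FILTER())  # cutoff picked by the thresholder
>>> clf = clf.fit(X_train)
>>> labels = clf.labels_               # binary labels produced by the thresholder
```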
pyod.models.thresholds.YJ(***kwargs*)[[source]](_modules/pyod/models/thresholds.html#YJ)[#](#pyod.models.thresholds.YJ)
YJ class for Yeo-Johnson transformation thresholder. Use the Yeo-Johnson transformation to evaluate a non-parametric means to threshold scores generated by the decision_scores, where outliers are set to any value beyond the max value in the YJ-transformed data.

pyod.models.thresholds.ZSCORE(***kwargs*)[[source]](_modules/pyod/models/thresholds.html#ZSCORE)[#](#pyod.models.thresholds.ZSCORE)
ZSCORE class for ZSCORE thresholder. Use the z-score to evaluate a non-parametric means to threshold scores generated by the decision_scores, where outliers are set to any value beyond a z-score of one.

#### pyod.models.vae module[#](#module-pyod.models.vae)

Variational Auto Encoder (VAE) and beta-VAE for Unsupervised Outlier Detection.

Reference:

[[BKW13](#id830)] Kingma, <NAME> 'Auto-Encoding Variational Bayes' <https://arxiv.org/abs/1312.6114>

[[BBHP+18](#id832)] Burgess et al. 'Understanding disentangling in beta-VAE' <https://arxiv.org/pdf/1804.03599.pdf>

*class* pyod.models.vae.VAE(*encoder_neurons=None*, *decoder_neurons=None*, *latent_dim=2*, *hidden_activation='relu'*, *output_activation='sigmoid'*, *loss=<function mean_squared_error>*, *optimizer='adam'*, *epochs=100*, *batch_size=32*, *dropout_rate=0.2*, *l2_regularizer=0.1*, *validation_size=0.1*, *preprocessing=True*, *verbose=1*, *random_state=None*, *contamination=0.1*, *gamma=1.0*, *capacity=0.0*)[[source]](_modules/pyod/models/vae.html#VAE)[#](#pyod.models.vae.VAE)

Bases: [`BaseDetector`](index.html#pyod.models.base.BaseDetector)

Variational auto encoder. The encoder maps X onto a latent space Z; the decoder samples Z from N(0,1):

VAE_loss = Reconstruction_loss + KL_loss

Reference: see [[BKW13](#id830)] Kingma, <NAME> 'Auto-Encoding Variational Bayes' <https://arxiv.org/abs/1312.6114> for details.

beta-VAE: in the loss, the emphasis is on the KL_loss and the capacity of a bottleneck:

VAE_loss = Reconstruction_loss + gamma * KL_loss

Reference: see [[BBHP+18](#id832)] Burgess et al. 'Understanding disentangling in beta-VAE' <https://arxiv.org/pdf/1804.03599.pdf> for details.

##### Parameters[#](#id760)

encoder_neurons : list, optional (default=[128, 64, 32])
The number of neurons per hidden layer in the encoder.

decoder_neurons : list, optional (default=[32, 64, 128])
The number of neurons per hidden layer in the decoder.

hidden_activation : str, optional (default='relu')
Activation function to use for hidden layers. All hidden layers are forced to use the same type of activation. See <https://keras.io/activations/>

output_activation : str, optional (default='sigmoid')
Activation function to use for the output layer. See <https://keras.io/activations/>

loss : str or obj, optional (default=keras.losses.mean_squared_error)
String (name of objective function) or objective function. See <https://keras.io/losses/>

gamma : float, optional (default=1.0)
Coefficient of the beta-VAE regime. The default is a regular VAE.

capacity : float, optional (default=0.0)
Maximum capacity of a loss bottleneck.

optimizer : str, optional (default='adam')
String (name of optimizer) or optimizer instance. See <https://keras.io/optimizers/>

epochs : int, optional (default=100)
Number of epochs to train the model.

batch_size : int, optional (default=32)
Number of samples per gradient update.

dropout_rate : float in (0., 1), optional (default=0.2)
The dropout to be used across all layers.

l2_regularizer : float in (0., 1), optional (default=0.1)
The regularization strength of the activity_regularizer applied on each layer. By default, an l2 regularizer is used.
See <https://keras.io/regularizers/>

validation_size : float in (0., 1), optional (default=0.1)
The percentage of data to be used for validation.

preprocessing : bool, optional (default=True)
If True, apply standardization on the data.

verbose : int, optional (default=1)
Verbosity mode:

* 0 = silent
* 1 = progress bar
* 2 = one line per epoch.

For verbose >= 1, the model summary may be printed.

random_state : int, RandomState instance or None, optional (default=None)
If int, random_state is the seed used by the random number generator; if RandomState instance, random_state is the random number generator; if None, the random number generator is the RandomState instance used by np.random.

contamination : float in (0., 0.5), optional (default=0.1)
The amount of contamination of the data set, i.e. the proportion of outliers in the data set. Used when fitting to define the threshold on the decision function.

##### Attributes[#](#id761)

[encoding_dim_](#id1190) : int
The number of neurons in the encoding layer.

[compression_rate_](#id1192) : float
The ratio between the original feature and the number of neurons in the encoding layer.

[model_](#id1194) : Keras Object
The underlying AutoEncoder in Keras.

[history_](#id1196) : Keras Object
The AutoEncoder training history.

[decision_scores_](#id1198) : numpy array of shape (n_samples,)
The outlier scores of the training data. The higher, the more abnormal. Outliers tend to have higher scores. This value is available once the detector is fitted.

[threshold_](#id1200) : float
The threshold is based on `contamination`. It is the `n_samples * contamination` most abnormal samples in `decision_scores_`. The threshold is calculated for generating binary outlier labels.

[labels_](#id1202) : int, either 0 or 1
The binary labels of the training data. 0 stands for inliers and 1 for outliers/anomalies. It is generated by applying `threshold_` on `decision_scores_`.

decision_function(*X*)[[source]](_modules/pyod/models/vae.html#VAE.decision_function)[#](#pyod.models.vae.VAE.decision_function)
Predict raw anomaly score of X using the fitted detector. The anomaly score of an input sample is computed based on different detector algorithms. For consistency, outliers are assigned with larger anomaly scores.

###### Parameters[#](#id762)

X : numpy array of shape (n_samples, n_features)
The training input samples. Sparse matrices are accepted only if they are supported by the base estimator.

###### Returns[#](#id763)

anomaly_scores : numpy array of shape (n_samples,)
The anomaly score of the input samples.

fit(*X*, *y=None*)[[source]](_modules/pyod/models/vae.html#VAE.fit)[#](#pyod.models.vae.VAE.fit)
Fit detector. y is optional for unsupervised methods.

###### Parameters[#](#id764)

X : numpy array of shape (n_samples, n_features)
The input samples.

y : numpy array of shape (n_samples,), optional (default=None)
The ground truth of the input samples (labels).

fit_predict(*X*, *y=None*)[#](#pyod.models.vae.VAE.fit_predict)
DEPRECATED. Fit detector first and then predict whether a particular sample is an outlier or not. y is ignored in unsupervised models.

X : numpy array of shape (n_samples, n_features)
The input samples.

y : Ignored
Not used, present for API consistency by convention.

outlier_labels : numpy array of shape (n_samples,)
For each observation, tells whether it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers.
Deprecated since version 0.6.9: fit_predict will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency.

fit_predict_score(*X*, *y*, *scoring='roc_auc_score'*)[#](#pyod.models.vae.VAE.fit_predict_score)
DEPRECATED. Fit the detector, predict on samples, and evaluate the model by predefined metrics, e.g., ROC.

X : numpy array of shape (n_samples, n_features)
The input samples.

y : Ignored
Not used, present for API consistency by convention.

scoring : str, optional (default='roc_auc_score')
Evaluation metric:

* 'roc_auc_score': ROC score
* 'prc_n_score': Precision @ rank n score

score : float

Deprecated since version 0.6.9: fit_predict_score will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency. Scoring can be done by calling an evaluation method, e.g., AUC ROC.

get_params(*deep=True*)[#](#pyod.models.vae.VAE.get_params)
Get parameters for this estimator. See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information.

###### Parameters[#](#id765)

deep : bool, optional (default=True)
If True, will return the parameters for this estimator and contained subobjects that are estimators.

###### Returns[#](#id766)

params : mapping of string to any
Parameter names mapped to their values.

predict(*X*, *return_confidence=False*)[#](#pyod.models.vae.VAE.predict)
Predict if a particular sample is an outlier or not.

###### Parameters[#](#id767)

X : numpy array of shape (n_samples, n_features)
The input samples.

return_confidence : boolean, optional (default=False)
If True, also return the confidence of prediction.

###### Returns[#](#id768)

outlier_labels : numpy array of shape (n_samples,)
For each observation, tells whether it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers.

confidence : numpy array of shape (n_samples,)
Only if return_confidence is set to True.

predict_confidence(*X*)[#](#pyod.models.vae.VAE.predict_confidence)
Predict the model's confidence in making the same prediction under slightly different training sets. See [[BPVD20](#id839)].

###### Parameters[#](#id770)

X : numpy array of shape (n_samples, n_features)
The input samples.

###### Returns[#](#id771)

confidence : numpy array of shape (n_samples,)
For each observation, tells how consistently the model would make the same prediction if the training set was perturbed. Returns a probability, ranging in [0,1].

predict_proba(*X*, *method='linear'*, *return_confidence=False*)[#](#pyod.models.vae.VAE.predict_proba)
Predict the probability of a sample being outlier. Two approaches are possible:

1. simply use Min-max conversion to linearly transform the outlier scores into the range of [0,1]. The model must be fitted first.
2. use unifying scores, see [[BKKSZ11](#id800)].

###### Parameters[#](#id773)

X : numpy array of shape (n_samples, n_features)
The input samples.

method : str, optional (default='linear')
Probability conversion method. It must be one of 'linear' or 'unify'.

return_confidence : boolean, optional (default=False)
If True, also return the confidence of prediction.

###### Returns[#](#id774)

outlier_probability : numpy array of shape (n_samples, n_classes)
For each observation, tells whether or not it should be considered as an outlier according to the fitted model. Return the outlier probability, ranging in [0,1].
Note it depends on the number of classes, which is by default 2 classes ([proba of normal, proba of outliers]).

sampling(*args*)[[source]](_modules/pyod/models/vae.html#VAE.sampling)[#](#pyod.models.vae.VAE.sampling)
Reparametrisation by sampling from a Gaussian, N(0,I). To sample from epsilon = Norm(0,I) instead of from the likelihood Q(z|X), with latent variables z:

z = z_mean + sqrt(var) * epsilon

###### Parameters[#](#id775)

args : tensor
Mean and log of variance of Q(z|X).

###### Returns[#](#id776)

z : tensor
Sampled latent variable.

set_params(***params*)[#](#pyod.models.vae.VAE.set_params)
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form `<component>__<parameter>` so that it's possible to update each component of a nested object. See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information.

###### Returns[#](#id777)

self : object

vae_loss(*inputs*, *outputs*, *z_mean*, *z_log*)[[source]](_modules/pyod/models/vae.html#VAE.vae_loss)[#](#pyod.models.vae.VAE.vae_loss)
Loss = Reconstruction loss + Kullback-Leibler loss for probability function divergence (ELBO). gamma > 1 and capacity != 0 for beta-VAE.

#### pyod.models.xgbod module[#](#module-pyod.models.xgbod)

XGBOD: Improving Supervised Outlier Detection with Unsupervised Representation Learning. A semi-supervised outlier detection framework.

*class* pyod.models.xgbod.XGBOD(*estimator_list=None*, *standardization_flag_list=None*, *max_depth=3*, *learning_rate=0.1*, *n_estimators=100*, *silent=True*, *objective='binary:logistic'*, *booster='gbtree'*, *n_jobs=1*, *nthread=None*, *gamma=0*, *min_child_weight=1*, *max_delta_step=0*, *subsample=1*, *colsample_bytree=1*, *colsample_bylevel=1*, *reg_alpha=0*, *reg_lambda=1*, *scale_pos_weight=1*, *base_score=0.5*, *random_state=0*, ***kwargs*)[[source]](_modules/pyod/models/xgbod.html#XGBOD)[#](#pyod.models.xgbod.XGBOD)

Bases: [`BaseDetector`](index.html#pyod.models.base.BaseDetector)

XGBOD class for outlier detection. It first uses the passed-in unsupervised outlier detectors to extract a richer representation of the data and then concatenates the newly generated features to the original features to construct the augmented feature space. An XGBoost classifier is then applied on this augmented feature space. Read more in [[BZH18](#id810)].

##### Parameters[#](#id779)

estimator_list : list, optional (default=None)
The list of pyod detectors passed in for unsupervised learning.

standardization_flag_list : list, optional (default=None)
The list of boolean flags indicating whether to perform standardization for each detector.

max_depth : int
Maximum tree depth for base learners.

learning_rate : float
Boosting learning rate (xgb's "eta").

n_estimators : int
Number of boosted trees to fit.

silent : bool
Whether to print messages while running boosting.

objective : string or callable
Specify the learning task and the corresponding learning objective or a custom objective function to be used (see note below).

booster : string
Specify which booster to use: gbtree, gblinear or dart.

n_jobs : int
Number of parallel threads used to run xgboost. (replaces `nthread`)

gamma : float
Minimum loss reduction required to make a further partition on a leaf node of the tree.

min_child_weight : int
Minimum sum of instance weight (hessian) needed in a child.

max_delta_step : int
Maximum delta step we allow each tree's weight estimation to be.

subsample : float
Subsample ratio of the training instance.
colsample_bytree : float
Subsample ratio of columns when constructing each tree.

colsample_bylevel : float
Subsample ratio of columns for each split, in each level.

reg_alpha : float (xgb's alpha)
L1 regularization term on weights.

reg_lambda : float (xgb's lambda)
L2 regularization term on weights.

scale_pos_weight : float
Balancing of positive and negative weights.

base_score : float
The initial prediction score of all instances, global bias.

random_state : int
Random number seed. (replaces seed)

# missing : float, optional
# Value in the data which needs to be present as a missing value. If
# None, defaults to np.nan.

importance_type : string, default "gain"
The feature importance type for the `feature_importances_` property: either "gain", "weight", "cover", "total_gain" or "total_cover".

**kwargs : dict, optional
Keyword arguments for the XGBoost Booster object. Full documentation of parameters can be found here: <https://github.com/dmlc/xgboost/blob/master/doc/parameter.rst>. Attempting to set a parameter via the constructor args and **kwargs dict simultaneously will result in a TypeError. Note: **kwargs is unsupported by scikit-learn. We do not guarantee that parameters passed via this argument will interact properly with scikit-learn.

##### Attributes[#](#id780)

[n_detector_](#id1204) : int
The number of unsupervised detectors used.

[clf_](#id1206) : object
The XGBoost classifier.

[decision_scores_](#id1208) : numpy array of shape (n_samples,)
The outlier scores of the training data. The higher, the more abnormal. Outliers tend to have higher scores. This value is available once the detector is fitted.

[labels_](#id1210) : int, either 0 or 1
The binary labels of the training data. 0 stands for inliers and 1 for outliers/anomalies. It is generated by applying `threshold_` on `decision_scores_`.

decision_function(*X*)[[source]](_modules/pyod/models/xgbod.html#XGBOD.decision_function)[#](#pyod.models.xgbod.XGBOD.decision_function)
Predict raw anomaly scores of X using the fitted detector. The anomaly score of an input sample is computed based on the fitted detector. For consistency, outliers are assigned with higher anomaly scores.

###### Parameters[#](#id781)

X : numpy array of shape (n_samples, n_features)
The input samples. Sparse matrices are accepted only if they are supported by the base estimator.

###### Returns[#](#id782)

anomaly_scores : numpy array of shape (n_samples,)
The anomaly score of the input samples.

fit(*X*, *y*)[[source]](_modules/pyod/models/xgbod.html#XGBOD.fit)[#](#pyod.models.xgbod.XGBOD.fit)
Fit the model using X and y as training data.

###### Parameters[#](#id783)

X : numpy array of shape (n_samples, n_features)
Training data.

y : numpy array of shape (n_samples,)
The ground truth (binary label):

* 0 : inliers
* 1 : outliers

###### Returns[#](#id784)

self : object

fit_predict(*X*, *y*)[[source]](_modules/pyod/models/xgbod.html#XGBOD.fit_predict)[#](#pyod.models.xgbod.XGBOD.fit_predict)
DEPRECATED. Fit detector first and then predict whether a particular sample is an outlier or not. y is ignored in unsupervised models.

X : numpy array of shape (n_samples, n_features)
The input samples.

y : Ignored
Not used, present for API consistency by convention.

outlier_labels : numpy array of shape (n_samples,)
For each observation, tells whether it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers.

Deprecated since version 0.6.9: fit_predict will be removed in pyod 0.8.0; it will be replaced by calling the fit function first and then accessing the labels_ attribute for consistency.
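Because XGBOD is semi-supervised, `fit` requires ground-truth labels, unlike the unsupervised detectors above. A minimal sketch (data generation as in the earlier examples; the `n_estimators` value is illustrative):

```
>>> from pyod.models.xgbod import XGBOD
>>> from pyod.utils.data import generate_data
>>> X_train, y_train, X_test, y_test = generate_data(
...     n_train=200, n_test=100, contamination=0.1, random_state=42)
>>> clf = XGBOD(n_estimators=50)           # default unsupervised representations
>>> clf = clf.fit(X_train, y_train)        # labels are required here
>>> test_scores = clf.decision_function(X_test)
```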
fit_predict_score(*X*, *y*, *scoring='roc_auc_score'*)[[source]](_modules/pyod/models/xgbod.html#XGBOD.fit_predict_score)

Fit the detector, predict on samples, and evaluate the model by predefined metrics, e.g., ROC.

###### Parameters

X : numpy array of shape (n_samples, n_features)
    The input samples.

y : Ignored
    Not used, present for API consistency by convention.

scoring : str, optional (default='roc_auc_score')
    Evaluation metric:

    * 'roc_auc_score': ROC score
    * 'prc_n_score': Precision @ rank n score

###### Returns

score : float

get_params(*deep=True*)

Get parameters for this estimator.

See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information.

###### Parameters

deep : bool, optional (default=True)
    If True, will return the parameters for this estimator and contained subobjects that are estimators.

###### Returns

params : mapping of string to any
    Parameter names mapped to their values.

predict(*X*)[[source]](_modules/pyod/models/xgbod.html#XGBOD.predict)

Predict if a particular sample is an outlier or not. Calls the xgboost predict function.

###### Parameters

X : numpy array of shape (n_samples, n_features)
    The input samples.

###### Returns

outlier_labels : numpy array of shape (n_samples,)
    For each observation, tells whether or not it should be considered as an outlier according to the fitted model. 0 stands for inliers and 1 for outliers.

predict_confidence(*X*)

Predict the model's confidence in making the same prediction under slightly different training sets. See [[BPVD20](#id839)].

###### Parameters

X : numpy array of shape (n_samples, n_features)
    The input samples.

###### Returns

confidence : numpy array of shape (n_samples,)
    For each observation, tells how consistently the model would make the same prediction if the training set were perturbed. Returns a probability, ranging in [0, 1].

predict_proba(*X*)[[source]](_modules/pyod/models/xgbod.html#XGBOD.predict_proba)

Predict the probability of a sample being an outlier. Calls the xgboost predict_proba function.

###### Parameters

X : numpy array of shape (n_samples, n_features)
    The input samples.

###### Returns

outlier_probs : numpy array of shape (n_samples,)
    For each observation, the probability that it should be considered an outlier according to the fitted model, ranging in [0, 1].

set_params(***params*)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form `<component>__<parameter>` so that it's possible to update each component of a nested object.

See <http://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html> and sklearn/base.py for more information.

###### Returns

self : object

#### Module contents

References

[BAgg15] Charu <NAME>. Outlier analysis. In *Data mining*, 75–79. Springer, 2015.

[BAS15] Charu <NAME> and <NAME>. Theoretical foundations and algorithms for outlier ensembles. *ACM SIGKDD Explorations Newsletter*, 17(1):24–47, 2015.

[BABC20] <NAME>, <NAME>, and <NAME>.
A novel outlier detection method for multivariate data. *IEEE Transactions on Knowledge and Data Engineering*, 2020.

[BAP02] <NAME> and <NAME>. Fast outlier detection in high dimensional spaces. In *European Conference on Principles of Data Mining and Knowledge Discovery*, 15–27. Springer, 2002.

[BAAR96] <NAME>, <NAME>, and <NAME>. A linear method for deviation detection in large databases. In *KDD*, volume 1141, 972–981. 1996.

[BBTA+18] Tharindu <NAME>, Kai <NAME>, <NAME>, Fei <NAME>, Ye Zhu, and <NAME>. Isolation-based anomaly detection using nearest-neighbor ensembles. *Computational Intelligence*, 34(4):968–998, 2018.

[BBirgeR06] <NAME> and <NAME>. How many bins should be put in a regular histogram. *ESAIM: Probability and Statistics*, 10:24–45, 2006.

[BBKNS00] Markus <NAME>, <NAME>, <NAME>, and <NAME>. Lof: identifying density-based local outliers. In *ACM sigmod record*, volume 29, 93–104. ACM, 2000.

[BBHP+18] <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, and <NAME>. Understanding disentangling in beta-VAE. *arXiv preprint arXiv:1804.03599*, 2018.

[BCoo77] R <NAME>. Detection of influential observation in linear regression. *Technometrics*, 19(1):15–18, 1977.

[BFM01] <NAME> and <NAME>. Wrap-around L2-discrepancy of random sampling, latin hypercube and uniform designs. *Journal of Complexity*, 17(4):608–624, 2001.

[BGD12] <NAME> and <NAME>. Histogram-based outlier score (hbos): a fast unsupervised anomaly detection algorithm. *KI-2012: Poster and Demo Track*, pages 59–63, 2012.

[BGHNN22] <NAME>, <NAME>, <NAME>, and <NAME>. Lunar: unifying local outlier detection methods via graph neural networks. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 36, 6737–6745. 2022.

[BHR04] <NAME> and David <NAME>. Outlier detection in the multiple cluster setting using the minimum covariance determinant estimator. *Computational Statistics & Data Analysis*, 44(4):625–638, 2004.

[BHXD03] <NAME>, <NAME>, and <NAME>. Discovering cluster-based local outliers. *Pattern Recognition Letters*, 24(9-10):1641–1650, 2003.

[BHof07] <NAME>. Kernel PCA for novelty detection. *Pattern Recognition*, 40(3):863–874, 2007.

[BIH93] <NAME> and <NAME>. *How to detect and handle outliers*. Volume 16. Asq Press, 1993.

[BJHuszarPvdH12] <NAME>, <NAME>, EO Postma, and <NAME>. Stochastic outlier selection. Technical Report TiCC TR 2012-001, Tilburg University, Tilburg Center for Cognition and Communication, Tilburg, The Netherlands, 2012.

[BKW13] Diederik <NAME> and <NAME>. Auto-encoding variational bayes. *arXiv preprint arXiv:1312.6114*, 2013.

[BKKSZ11] <NAME>, <NAME>, <NAME>, and <NAME>. Interpreting and unifying outlier scores. In *Proceedings of the 2011 SIAM International Conference on Data Mining*, 13–24. SIAM, 2011.

[BKKrogerSZ09] <NAME>, <NAME>, <NAME>, and <NAME>.
Outlier detection in axis-parallel subspaces of high dimensional data. In *Pacific-Asia Conference on Knowledge Discovery and Data Mining*, 831–838. Springer, 2009.

[BKZ+08] <NAME>, <NAME>, and others. Angle-based outlier detection in high-dimensional data. In *Proceedings of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining*, 444–452. ACM, 2008.

[BLLP07] Longin <NAME>, <NAME>, and <NAME>. Outlier detection with kernel density functions. In *International Workshop on Machine Learning and Data Mining in Pattern Recognition*, 61–75. Springer, 2007.

[BLK05] <NAME> and <NAME>. Feature bagging for outlier detection. In *Proceedings of the eleventh ACM SIGKDD international conference on Knowledge discovery in data mining*, 157–166. ACM, 2005.

[BLZB+20] <NAME>, <NAME>, <NAME>, <NAME>, and <NAME>. COPOD: copula-based outlier detection. In *IEEE International Conference on Data Mining (ICDM)*. IEEE, 2020.

[BLTZ08] Fei <NAME>, <NAME>, and <NAME>. Isolation forest. In *Data Mining, 2008. ICDM'08. Eighth IEEE International Conference on*, 413–422. IEEE, 2008.

[BLTZ12] Fei <NAME>, Kai <NAME>, and <NAME>. Isolation-based anomaly detection. *ACM Transactions on Knowledge Discovery from Data (TKDD)*, 6(1):3, 2012.

[BLLZ+19] <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, and <NAME>. Generative adversarial active learning for unsupervised outlier detection. *IEEE Transactions on Knowledge and Data Engineering*, 2019.

[BPKGF03] <NAME>, <NAME>, <NAME>, and <NAME>. Loci: fast outlier detection using the local correlation integral. In *Data Engineering, 2003. Proceedings. 19th International Conference on*, 315–326. IEEE, 2003.

[BPVD20] <NAME>, <NAME>, and <NAME>. Quantifying the confidence of anomaly detectors in their example-wise predictions. In *Joint European Conference on Machine Learning and Knowledge Discovery in Databases*, 227–243. Springer, 2020.

[BPevny16] <NAME>. Loda: lightweight on-line detector of anomalies. *Machine Learning*, 102(2):275–304, 2016.

[BRRS00] <NAME>, <NAME>, and <NAME>. Efficient algorithms for mining outliers from large data sets. In *ACM Sigmod Record*, volume 29, 427–438. ACM, 2000.

[BRD99] <NAME> and <NAME>. A fast algorithm for the minimum covariance determinant estimator. *Technometrics*, 41(3):212–223, 1999.

[BRVG+18] <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, and <NAME>. Deep one-class classification. *International Conference on Machine Learning*, 2018.

[BSSeebockW+17] <NAME>, <NAME>, <NAME>, <NAME>, and <NAME>. Unsupervised anomaly detection with generative adversarial networks to guide marker discovery. In *International conference on information processing in medical imaging*, 146–157. Springer, 2017.

[BScholkopfPST+01] <NAME>, <NAME>, <NAME>, <NAME>, and Robert <NAME>. Estimating the support of a high-dimensional distribution. *Neural Computation*, 13(7):1443–1471, 2001.
[BSCSC03] <NAME>, <NAME>, <NAME>, and <NAME>. A novel anomaly detection scheme based on principal component classifier. Technical Report, MIAMI UNIV CORAL GABLES FL DEPT OF ELECTRICAL AND COMPUTER ENGINEERING, 2003.

[BSB13] <NAME> and <NAME>. Rapid distance-based outlier detection via sampling. *Advances in Neural Information Processing Systems*, 2013.

[BTCFC02] <NAME>, <NAME>, <NAME>-<NAME>, and <NAME>. Enhancing effectiveness of outlier detections for low density patterns. In *Pacific-Asia Conference on Knowledge Discovery and Data Mining*, 535–548. Springer, 2002.

[BYRV17] <NAME>, Daniel <NAME>, and <NAME>. Provable self-representation based outlier detection in a union of subspaces. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, 3395–3404. 2017.

[BZRF+18] <NAME>, <NAME>, <NAME>, <NAME>, and <NAME>. Adversarially learned anomaly detection. In *2018 IEEE International Conference on Data Mining (ICDM)*, 727–736. IEEE, 2018.

[BZH18] <NAME> and Maciej <NAME>. Xgbod: improving supervised outlier detection with unsupervised representation learning. In *International Joint Conference on Neural Networks (IJCNN)*. IEEE, 2018.

[BZHC+21] <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, and <NAME>. Suod: accelerating large-scale unsupervised heterogeneous outlier detection. *Proceedings of Machine Learning and Systems*, 2021.

[BZNHL19] <NAME>, <NAME>, <NAME>, and <NAME>. LSCP: locally selective combination in parallel outlier ensembles. In *Proceedings of the 2019 SIAM International Conference on Data Mining, SDM 2019*, 585–593. Calgary, Canada, May 2019. SIAM. URL: <https://doi.org/10.1137/1.9781611975673.66>, [doi:10.1137/1.9781611975673.66](https://doi.org/10.1137/1.9781611975673.66).

### Utility Functions

#### pyod.utils.data module

Utility functions for manipulating data

pyod.utils.data.check_consistent_shape(*X_train*, *y_train*, *X_test*, *y_test*, *y_train_pred*, *y_test_pred*)[[source]](_modules/pyod/utils/data.html#check_consistent_shape)

Internal function to check that the input data shapes are consistent.

##### Parameters

X_train : numpy array of shape (n_samples, n_features)
    The training samples.

y_train : list or array of shape (n_samples,)
    The ground truth of the training samples.

X_test : numpy array of shape (n_samples, n_features)
    The test samples.

y_test : list or array of shape (n_samples,)
    The ground truth of the test samples.

y_train_pred : numpy array of shape (n_samples,)
    The predicted binary labels of the training samples.

y_test_pred : numpy array of shape (n_samples,)
    The predicted binary labels of the test samples.

##### Returns

X_train : numpy array of shape (n_samples, n_features)
    The training samples.

y_train : list or array of shape (n_samples,)
    The ground truth of the training samples.

X_test : numpy array of shape (n_samples, n_features)
    The test samples.

y_test : list or array of shape (n_samples,)
    The ground truth of the test samples.

y_train_pred : numpy array of shape (n_samples,)
    The predicted binary labels of the training samples.

y_test_pred : numpy array of shape (n_samples,)
    The predicted binary labels of the test samples.
pyod.utils.data.evaluate_print(*clf_name*, *y*, *y_pred*)[[source]](_modules/pyod/utils/data.html#evaluate_print)

Utility function for evaluating and printing the results for examples. Default metrics include ROC and Precision @ n.

##### Parameters

clf_name : str
    The name of the detector.

y : list or numpy array of shape (n_samples,)
    The ground truth. Binary (0: inliers, 1: outliers).

y_pred : list or numpy array of shape (n_samples,)
    The raw outlier scores as returned by a fitted model.

pyod.utils.data.generate_data(*n_train=1000*, *n_test=500*, *n_features=2*, *contamination=0.1*, *train_only=False*, *offset=10*, *behaviour='new'*, *random_state=None*, *n_nan=0*, *n_inf=0*)[[source]](_modules/pyod/utils/data.html#generate_data)

Utility function to generate synthesized data. Normal data is generated by a multivariate Gaussian distribution and outliers are generated by a uniform distribution. "X_train, X_test, y_train, y_test" are returned.

##### Parameters

n_train : int (default=1000)
    The number of training points to generate.

n_test : int (default=500)
    The number of test points to generate.

n_features : int, optional (default=2)
    The number of features (dimensions).

contamination : float in (0., 0.5), optional (default=0.1)
    The amount of contamination of the data set, i.e. the proportion of outliers in the data set. Used when fitting to define the threshold on the decision function.

train_only : bool, optional (default=False)
    If true, generate train data only.

offset : int, optional (default=10)
    Adjusts the value range of the Gaussian and the uniform distribution.

behaviour : str, default='new'
    Behaviour of the returned datasets, which can be either 'old' or 'new'. Passing `behaviour='new'` returns "X_train, X_test, y_train, y_test", while passing `behaviour='old'` returns "X_train, y_train, X_test, y_test".

random_state : int, RandomState instance or None, optional (default=None)
    If int, random_state is the seed used by the random number generator; if RandomState instance, random_state is the random number generator; if None, the random number generator is the RandomState instance used by np.random.

n_nan : int
    The number of values that are missing (np.NaN). Defaults to zero.

n_inf : int
    The number of values that are infinite (np.infty). Defaults to zero.

##### Returns

X_train : numpy array of shape (n_train, n_features)
    Training data.

X_test : numpy array of shape (n_test, n_features)
    Test data.

y_train : numpy array of shape (n_train,)
    Training ground truth.

y_test : numpy array of shape (n_test,)
    Test ground truth.

pyod.utils.data.generate_data_categorical(*n_train=1000*, *n_test=500*, *n_features=2*, *n_informative=2*, *n_category_in=2*, *n_category_out=2*, *contamination=0.1*, *shuffle=True*, *random_state=None*)[[source]](_modules/pyod/utils/data.html#generate_data_categorical)

Utility function to generate synthesized categorical data.

##### Parameters

n_train : int (default=1000)
    The number of training points to generate.

n_test : int (default=500)
    The number of test points to generate.

n_features : int, optional (default=2)
    The number of features for each sample.

n_informative : int in (1, n_features), optional (default=2)
    The number of informative features in the outlier points. The higher, the easier the outlier detection should be. Note that n_informative should not exceed n_features.

n_category_in : int in (1, n_inliers), optional (default=2)
    The number of categories in the inlier points.
n_category_out : int in (1, n_outliers), optional (default=2)
    The number of categories in the outlier points.

contamination : float in (0., 0.5), optional (default=0.1)
    The amount of contamination of the data set, i.e. the proportion of outliers in the data set.

shuffle : bool, optional (default=True)
    If True, inliers will be shuffled, which makes the distribution noisier.

random_state : int, RandomState instance or None, optional (default=None)
    If int, random_state is the seed used by the random number generator; if RandomState instance, random_state is the random number generator; if None, the random number generator is the RandomState instance used by np.random.

##### Returns

X_train : numpy array of shape (n_train, n_features)
    Training data.

y_train : numpy array of shape (n_train,)
    Training ground truth.

X_test : numpy array of shape (n_test, n_features)
    Test data.

y_test : numpy array of shape (n_test,)
    Test ground truth.

pyod.utils.data.generate_data_clusters(*n_train=1000*, *n_test=500*, *n_clusters=2*, *n_features=2*, *contamination=0.1*, *size='same'*, *density='same'*, *dist=0.25*, *random_state=None*, *return_in_clusters=False*)[[source]](_modules/pyod/utils/data.html#generate_data_clusters)

Utility function to generate synthesized data in clusters. Generated data can involve the low density pattern problem and global outliers, which are considered difficult tasks for outlier detection algorithms.

##### Parameters

n_train : int (default=1000)
    The number of training points to generate.

n_test : int (default=500)
    The number of test points to generate.

n_clusters : int, optional (default=2)
    The number of centers (i.e. clusters) to generate.

n_features : int, optional (default=2)
    The number of features for each sample.

contamination : float in (0., 0.5), optional (default=0.1)
    The amount of contamination of the data set, i.e. the proportion of outliers in the data set.

size : str, optional (default='same')
    Size of each cluster: 'same' generates clusters of the same size, 'different' generates clusters of different sizes.

density : str, optional (default='same')
    Density of each cluster: 'same' generates clusters with the same density, 'different' generates clusters with different densities.

dist : float, optional (default=0.25)
    Distance between clusters, between 0. and 1.0. It is used to avoid overlapping clusters as much as possible. However, if the number of samples and the number of clusters are too high, the clusters are unlikely to be fully separated even with `dist` set to 1.0.

random_state : int, RandomState instance or None, optional (default=None)
    If int, random_state is the seed used by the random number generator; if RandomState instance, random_state is the random number generator; if None, the random number generator is the RandomState instance used by np.random.

return_in_clusters : bool, optional (default=False)
    If True, the function returns x_train, y_train, x_test, y_test each as a list of numpy arrays, where each index represents a cluster. If False, it returns x_train, y_train, x_test, y_test each as a numpy array after joining the sequence of cluster arrays.

##### Returns

X_train : numpy array of shape (n_train, n_features)
    Training data.

y_train : numpy array of shape (n_train,)
    Training ground truth.

X_test : numpy array of shape (n_test, n_features)
    Test data.

y_test : numpy array of shape (n_test,)
    Test ground truth.
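To tie these generators to the evaluation helper above, here is a short sketch that creates labelled synthetic data, fits a detector, and prints the default metrics. The choice of KNN as the detector and the numeric values are assumptions for the example; any pyod detector can be substituted.

```
from pyod.models.knn import KNN
from pyod.utils.data import generate_data, evaluate_print

# The documented defaults: 1000 train points, 500 test points, 10% outliers.
X_train, X_test, y_train, y_test = generate_data(
    n_train=1000, n_test=500, n_features=2, contamination=0.1, random_state=42)

clf = KNN()
clf.fit(X_train)  # unsupervised fit: the labels are only used for evaluation

# evaluate_print expects raw outlier scores, not binary labels.
evaluate_print('KNN', y_test, clf.decision_function(X_test))
```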
pyod.utils.data.get_outliers_inliers(*X*, *y*)[[source]](_modules/pyod/utils/data.html#get_outliers_inliers)

Internal method to separate inliers from outliers.

##### Parameters

X : numpy array of shape (n_samples, n_features)
    The input samples.

y : list or array of shape (n_samples,)
    The ground truth of the input samples.

##### Returns

X_outliers : numpy array of shape (n_samples, n_features)
    Outliers.

X_inliers : numpy array of shape (n_samples, n_features)
    Inliers.

#### pyod.utils.example module

Utility functions for running examples

pyod.utils.example.data_visualize(*X_train*, *y_train*, *show_figure=True*, *save_figure=False*)[[source]](_modules/pyod/utils/example.html#data_visualize)

Utility function for visualizing the synthetic samples generated by the generate_data_clusters function.

##### Parameters

X_train : numpy array of shape (n_samples, n_features)
    The training samples.

y_train : list or array of shape (n_samples,)
    The ground truth of the training samples.

show_figure : bool, optional (default=True)
    If set to True, show the figure.

save_figure : bool, optional (default=False)
    If set to True, save the figure locally.

pyod.utils.example.visualize(*clf_name*, *X_train*, *y_train*, *X_test*, *y_test*, *y_train_pred*, *y_test_pred*, *show_figure=True*, *save_figure=False*)[[source]](_modules/pyod/utils/example.html#visualize)

Utility function for visualizing the results in examples. Internal use only.

##### Parameters

clf_name : str
    The name of the detector.

X_train : numpy array of shape (n_samples, n_features)
    The training samples.

y_train : list or array of shape (n_samples,)
    The ground truth of the training samples.

X_test : numpy array of shape (n_samples, n_features)
    The test samples.

y_test : list or array of shape (n_samples,)
    The ground truth of the test samples.

y_train_pred : numpy array of shape (n_samples,)
    The predicted binary labels of the training samples.

y_test_pred : numpy array of shape (n_samples,)
    The predicted binary labels of the test samples.

show_figure : bool, optional (default=True)
    If set to True, show the figure.

save_figure : bool, optional (default=False)
    If set to True, save the figure locally.

#### pyod.utils.stat_models module

A collection of statistical models

pyod.utils.stat_models.column_ecdf(*matrix: ndarray*) → ndarray[[source]](_modules/pyod/utils/stat_models.html#column_ecdf)

Utility function to compute the column-wise empirical cumulative distribution of a 2D feature matrix, where the rows are samples and the columns are features per sample. The accumulation is done in the positive direction of the sample axis.

E.g., given the point masses p(1) = 0.2, p(0) = 0.3, p(2) = 0.1, p(6) = 0.4 and E(x) = p(X <= x), the ECDF E would be E(-1) = 0, E(0) = 0.3, E(1) = 0.5, E(2) = 0.6, E(3) = 0.6, E(4) = 0.6, E(5) = 0.6, E(6) = 1.

Similar to, and tested against: <https://www.statsmodels.org/stable/generated/statsmodels.distributions.empirical_distribution.ECDF.html>

##### Returns

pyod.utils.stat_models.ecdf_terminate_equals_inplace(*matrix: ndarray*, *probabilities: ndarray*)[[source]](_modules/pyod/utils/stat_models.html#ecdf_terminate_equals_inplace)

This is a helper function for computing the ecdf of an array.
It has been outsourced from the original function in order to be able to use the njit compiler of numba for increased speed, as it unfortunately needs a loop over all rows and columns of the matrix. It acts in place on the probabilities matrix.

##### Parameters

matrix : ndarray
    A feature matrix where the rows are samples and each column is a feature (expected to be sorted).

probabilities : ndarray
    A probability matrix that will be used for building the ecdf. It has values between 0 and 1 and is also sorted.

##### Returns

pyod.utils.stat_models.pairwise_distances_no_broadcast(*X*, *Y*)[[source]](_modules/pyod/utils/stat_models.html#pairwise_distances_no_broadcast)

Utility function to calculate the row-wise Euclidean distance of two matrices. Unlike a pairwise calculation, this function does not broadcast. For instance, if X and Y are both (4, 3) matrices, the function returns a distance vector of shape (4,) instead of a (4, 4) matrix.

##### Parameters

X : array of shape (n_samples, n_features)
    First input samples.

Y : array of shape (n_samples, n_features)
    Second input samples.

##### Returns

distance : array of shape (n_samples,)
    Row-wise Euclidean distance of X and Y.

pyod.utils.stat_models.pearsonr_mat(*mat*, *w=None*)[[source]](_modules/pyod/utils/stat_models.html#pearsonr_mat)

Utility function to calculate the Pearson correlation matrix (row-wise).

##### Parameters

mat : numpy array of shape (n_samples, n_features)
    Input matrix.

w : numpy array of shape (n_features,)
    Weights.

##### Returns

pear_mat : numpy array of shape (n_samples, n_samples)
    Row-wise Pearson score matrix.

pyod.utils.stat_models.wpearsonr(*x*, *y*, *w=None*)[[source]](_modules/pyod/utils/stat_models.html#wpearsonr)

Utility function to calculate the weighted Pearson correlation of two samples.

See <https://stats.stackexchange.com/questions/221246/such-thing-as-a-weighted-correlation> for more information.

##### Parameters

x : array, shape (n,)
    Input x.

y : array, shape (n,)
    Input y.

w : array, shape (n,)
    Weights w.

##### Returns

scores : float in range of [-1, 1]
    Weighted Pearson correlation between x and y.

#### pyod.utils.utility module

A set of utility functions to support outlier detection.

pyod.utils.utility.argmaxn(*value_list*, *n*, *order='desc'*)[[source]](_modules/pyod/utils/utility.html#argmaxn)

Return the indices of the top n elements in the list if order is set to 'desc', otherwise return the indices of the n smallest ones.

##### Parameters

value_list : list, array, numpy array of shape (n_samples,)
    A list containing all values.

n : int
    The number of elements to select.

order : str, optional (default='desc')
    The order to sort {'desc', 'asc'}:

    * 'desc': descending
    * 'asc': ascending

##### Returns

index_list : numpy array of shape (n,)
    The indices of the top n elements.

pyod.utils.utility.check_detector(*detector*)[[source]](_modules/pyod/utils/utility.html#check_detector)

Checks if fit and decision_function methods exist for the given detector.

##### Parameters

detector : pyod.models
    Detector instance for which the check is performed.
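The row-wise semantics of `pairwise_distances_no_broadcast` above are easy to misread, so here is a small sketch contrasting it with a full pairwise computation, together with a `wpearsonr` call. The arrays are made up for illustration.

```
import numpy as np
from pyod.utils.stat_models import pairwise_distances_no_broadcast, wpearsonr

X = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
Y = np.array([[0.0, 1.0], [1.0, 0.0], [2.0, 3.0], [3.0, 2.0]])

# Row i of X is compared only with row i of Y: the result has shape (4,),
# not the (4, 4) matrix a broadcast pairwise computation would produce.
print(pairwise_distances_no_broadcast(X, Y))  # [1. 1. 1. 1.]

# Weighted Pearson correlation of two 1-d samples (uniform weights here).
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.3, 2.9, 4.2])
print(wpearsonr(x, y, w=np.ones(4)))  # close to 1 for nearly linear data
```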
pyod.utils.utility.check_parameter(*param*, *low=-2147483647*, *high=2147483647*, *param_name=''*, *include_left=False*, *include_right=False*)[[source]](_modules/pyod/utils/utility.html#check_parameter)

Check if an input is within the defined range.

##### Parameters

param : int, float
    The input parameter to check.

low : int, float
    The lower bound of the range.

high : int, float
    The higher bound of the range.

param_name : str, optional (default='')
    The name of the parameter.

include_left : bool, optional (default=False)
    Whether to include the lower bound (lower bound <=).

include_right : bool, optional (default=False)
    Whether to include the higher bound (<= higher bound).

##### Returns

within_range : bool or raise errors
    Whether the parameter is within the range of (low, high).

pyod.utils.utility.generate_bagging_indices(*random_state*, *bootstrap_features*, *n_features*, *min_features*, *max_features*)[[source]](_modules/pyod/utils/utility.html#generate_bagging_indices)

Randomly draw feature indices. Internal use only.

Modified from sklearn/ensemble/bagging.py.

##### Parameters

random_state : RandomState
    A random number generator instance to define the state of the random permutations generator.

bootstrap_features : bool
    Specifies whether to bootstrap index generation.

n_features : int
    Specifies the population size when generating indices.

min_features : int
    Lower limit for the number of features to randomly sample.

max_features : int
    Upper limit for the number of features to randomly sample.

##### Returns

feature_indices : numpy array, shape (n_samples,)
    Indices of the features to bag.

pyod.utils.utility.generate_indices(*random_state*, *bootstrap*, *n_population*, *n_samples*)[[source]](_modules/pyod/utils/utility.html#generate_indices)

Draw randomly sampled indices. Internal use only.

See sklearn/ensemble/bagging.py.

##### Parameters

random_state : RandomState
    A random number generator instance to define the state of the random permutations generator.

bootstrap : bool
    Specifies whether to bootstrap index generation.

n_population : int
    Specifies the population size when generating indices.

n_samples : int
    Specifies the number of samples to draw.

##### Returns

indices : numpy array, shape (n_samples,)
    Randomly drawn indices.

pyod.utils.utility.get_diff_elements(*li1*, *li2*)[[source]](_modules/pyod/utils/utility.html#get_diff_elements)

Get the elements in li1 but not in li2, and vice versa.

##### Parameters

li1 : list or numpy array
    Input list 1.

li2 : list or numpy array
    Input list 2.

##### Returns

difference : list
    The difference between li1 and li2.

pyod.utils.utility.get_intersection(*lst1*, *lst2*)[[source]](_modules/pyod/utils/utility.html#get_intersection)

Get the overlap between two lists.

##### Parameters

lst1 : list or numpy array
    Input list 1.

lst2 : list or numpy array
    Input list 2.

##### Returns

intersection : list
    The overlapping elements between lst1 and lst2.

pyod.utils.utility.get_label_n(*y*, *y_pred*, *n=None*)[[source]](_modules/pyod/utils/utility.html#get_label_n)

Function to turn raw outlier scores into binary labels by assigning 1 to the top n outlier scores.

##### Parameters

y : list or numpy array of shape (n_samples,)
    The ground truth. Binary (0: inliers, 1: outliers).
y_pred : list or numpy array of shape (n_samples,)
    The raw outlier scores as returned by a fitted model.

n : int, optional (default=None)
    The number of outliers. If not defined, infer using the ground truth.

##### Returns

labels : numpy array of shape (n_samples,)
    Binary labels: 0 for normal points and 1 for outliers.

##### Examples

```
>>> from pyod.utils.utility import get_label_n
>>> y = [0, 1, 1, 0, 0]
>>> y_pred = [0.1, 0.5, 0.3, 0.2, 0.7]
>>> get_label_n(y, y_pred)
array([0, 1, 0, 0, 1])
```

pyod.utils.utility.get_list_diff(*li1*, *li2*)[[source]](_modules/pyod/utils/utility.html#get_list_diff)

Get the elements in li1 but not in li2 (li1 - li2).

##### Parameters

li1 : list or numpy array
    Input list 1.

li2 : list or numpy array
    Input list 2.

##### Returns

difference : list
    The difference between li1 and li2.

pyod.utils.utility.get_optimal_n_bins(*X*, *upper_bound=None*, *epsilon=1*)[[source]](_modules/pyod/utils/utility.html#get_optimal_n_bins)

Determine the optimal number of bins for a histogram using the Birge-Rozenblac method (see [[BBirgeR06](index.html#id838)] for details).

See <https://doi.org/10.1051/ps:2006001>

##### Parameters

X : array-like of shape (n_samples, n_features)
    The samples to determine the optimal number of bins for.

upper_bound : int, default=None
    The maximum value of n_bins to be considered. If set to None, np.sqrt(X.shape[0]) will be used as the upper bound.

epsilon : float, default=1
    A stabilizing term added to the logarithm to prevent division by zero.

##### Returns

optimal_n_bins : int
    The optimal value of n_bins according to the Birge-Rozenblac method.

pyod.utils.utility.invert_order(*scores*, *method='multiplication'*)[[source]](_modules/pyod/utils/utility.html#invert_order)

Invert the order of a list of values. The smallest value becomes the largest in the inverted list. This is useful when combining multiple detectors, since their score orders could differ.

##### Parameters

scores : list, array or numpy array with shape (n_samples,)
    The list of values to be inverted.

method : str, optional (default='multiplication')
    Method used for order inversion. Valid methods are:

    * 'multiplication': multiply by -1
    * 'subtraction': max(scores) - scores

##### Returns

inverted_scores : numpy array of shape (n_samples,)
    The inverted list.

##### Examples

```
>>> scores1 = [0.1, 0.3, 0.5, 0.7, 0.2, 0.1]
>>> invert_order(scores1)
array([-0.1, -0.3, -0.5, -0.7, -0.2, -0.1])
>>> invert_order(scores1, method='subtraction')
array([0.6, 0.4, 0.2, 0. , 0.5, 0.6])
```

pyod.utils.utility.precision_n_scores(*y*, *y_pred*, *n=None*)[[source]](_modules/pyod/utils/utility.html#precision_n_scores)

Utility function to calculate precision @ rank n.

##### Parameters

y : list or numpy array of shape (n_samples,)
    The ground truth. Binary (0: inliers, 1: outliers).

y_pred : list or numpy array of shape (n_samples,)
    The raw outlier scores as returned by a fitted model.

n : int, optional (default=None)
    The number of outliers. If not defined, infer using the ground truth.

##### Returns

precision_at_rank_n : float
    Precision at rank n score.

pyod.utils.utility.score_to_label(*pred_scores*, *outliers_fraction=0.1*)[[source]](_modules/pyod/utils/utility.html#score_to_label)

Turn raw outlier scores into binary labels (0 or 1).
##### Parameters

pred_scores : list or numpy array of shape (n_samples,)
    Raw outlier scores. Outliers are assumed to have larger values.

outliers_fraction : float in (0, 1)
    Percentage of outliers.

##### Returns

outlier_labels : numpy array of shape (n_samples,)
    For each observation, tells whether it should be considered an outlier according to the fitted model: 0 stands for inliers and 1 for outliers.

pyod.utils.utility.standardizer(*X*, *X_t=None*, *keep_scalar=False*)[[source]](_modules/pyod/utils/utility.html#standardizer)

Conduct Z-normalization on data so that the input samples become zero-mean and unit-variance.

##### Parameters

X : numpy array of shape (n_samples, n_features)
    The training samples.

X_t : numpy array of shape (n_samples_new, n_features), optional (default=None)
    The data to be converted.

keep_scalar : bool, optional (default=False)
    The flag indicating whether to also return the scaler.

##### Returns

X_norm : numpy array of shape (n_samples, n_features)
    X after Z-score normalization.

X_t_norm : numpy array of shape (n_samples, n_features)
    X_t after Z-score normalization.

scalar : sklearn scaler object
    The scaler used in the conversion.

### Module contents

Known Issues & Warnings
---

This is the central place to track known issues.

### Installation

There are some known dependency issues/notes. Refer to [installation](https://pyod.readthedocs.io/en/latest/install.html) for more information.

### Neural Networks

SO_GAAL and MO_GAAL may only work under Python 3.5+.

### Differences between PyOD and scikit-learn

Although PyOD is built on top of scikit-learn and inspired by its API design, some differences should be noted:

* All models in PyOD follow the convention that outlying objects come with higher scores, while normal objects have lower scores. scikit-learn has an inverted design: lower scores stand for outlying objects.
* PyOD uses "0" to represent inliers and "1" to represent outliers. By contrast, scikit-learn returns "-1" for anomalies/outliers and "1" for inliers.
* Although Isolation Forest, One-class SVM, and Local Outlier Factor are implemented in both PyOD and scikit-learn, users are not advised to mix their use, e.g., calling one model from PyOD and another model from scikit-learn. It is recommended to use only one library, for consistency (for these three models, the PyOD implementations are indeed wrapper functions around scikit-learn).
* PyOD models may not work with scikit-learn's check_estimator function. Similarly, scikit-learn models will not work with PyOD's check_estimator function.

Outlier Detection 101
---

Outlier detection broadly refers to the task of identifying observations which may be considered anomalous given the distribution of a sample. Any observation belonging to the distribution is referred to as an inlier and any outlying point is referred to as an outlier.

In the context of machine learning, there are three common approaches for this task:

1. Unsupervised Outlier Detection
   * Training data (unlabelled) contains both normal and anomalous observations.
   * The model identifies outliers during the fitting process.
   * This approach is taken when outliers are defined as points that exist in low-density regions in the data.
   * Any new observations that do not belong to high-density regions are considered outliers.
2. Semi-supervised Novelty Detection
   * Training data consists only of observations describing normal behavior.
   * The model is fit on the training data and then used to evaluate new observations.
   * This approach is taken when outliers are defined as points differing from the distribution of the training data.
   * Any new observations differing from the training data beyond a threshold, even if they form a high-density region, are considered outliers.
3. Supervised Outlier Classification
   * The ground truth label (inlier vs. outlier) is known for every observation.
   * The model is fit on imbalanced training data and then used to classify new observations.
   * This approach is taken when ground truth is available and it is assumed that outliers will follow the same distribution as in the training set.
   * Any new observations are classified using the model.

The algorithms found in *PyOD* focus on the first two approaches, which differ in how the training data is defined and how the model's outputs are interpreted. If you are interested in learning more, please refer to our [Anomaly Detection Resources](https://github.com/yzhao062/anomaly-detection-resources) page for relevant books, papers, videos, and toolboxes.

Citations & Achievements
---

### Citing PyOD

The [PyOD paper](http://www.jmlr.org/papers/volume20/19-011/19-011.pdf) is published in [JMLR](http://www.jmlr.org/) (machine learning open-source software track). If you use PyOD in a scientific publication, we would appreciate citations to the following paper:

```
@article{zhao2019pyod,
  author  = {<NAME> and <NAME> and <NAME>},
  title   = {PyOD: A Python Toolbox for Scalable Outlier Detection},
  journal = {Journal of Machine Learning Research},
  year    = {2019},
  volume  = {20},
  number  = {96},
  pages   = {1-7},
  url     = {http://jmlr.org/papers/v20/19-011.html}
}
```

or:

```
<NAME>., <NAME>. and <NAME>., 2019. PyOD: A Python Toolbox for Scalable Outlier Detection. Journal of Machine Learning Research (JMLR), 20(96), pp.1-7.
```

### Scientific Work Using or Referencing PyOD

We appreciate that PyOD has been increasingly referenced and cited in scientific work. Since its release, PyOD has been used in hundreds of academic projects. See [an incomplete list here](https://scholar.google.com/scholar?oi=bibs&hl=en&cites=3726241381117726876).

### Featured Posts & Achievements

PyOD has been well acknowledged by the machine learning community, with a few featured posts and tutorials.
**Analytics Vidhya**: [An Awesome Tutorial to Learn Outlier Detection in Python using PyOD Library](https://www.analyticsvidhya.com/blog/2019/02/outlier-detection-python-pyod/)

**KDnuggets**: [Intuitive Visualization of Outlier Detection Methods](https://www.kdnuggets.com/2019/02/outlier-detection-methods-cheat-sheet.html)

**KDnuggets**: [An Overview of Outlier Detection Methods from PyOD](https://www.kdnuggets.com/2019/06/overview-outlier-detection-methods-pyod.html)

**Towards Data Science**: [Anomaly Detection for Dummies](https://towardsdatascience.com/anomaly-detection-for-dummies-15f148e559c1)

**Computer Vision News (March 2019)**: [Python Open Source Toolbox for Outlier Detection](https://rsipvision.com/ComputerVisionNews-2019March/18/)

**FLOYDHUB**: [Introduction to Anomaly Detection in Python](https://blog.floydhub.com/introduction-to-anomaly-detection-in-python/)

**awesome-machine-learning**: [General-Purpose Machine Learning](https://github.com/josephmisiti/awesome-machine-learning#python-general-purpose)

**Lecture on anomaly detection with PyOD by Dr. <NAME>**: [Anomaly Detection Lecture](https://www.youtube.com/watch?v=sF2DeSPrGfc)

**Workshop/Showcase using PyOD**:

* [Detecting the Unexpected: An Introduction to Anomaly Detection Methods](http://www.kiss.caltech.edu/workshops/technosignatures/presentations/Wagstaff.pdf), *KISS Technosignatures Workshop* by Dr. <NAME> @ Jet Propulsion Laboratory, California Institute of Technology. [[Workshop Video](https://www.youtube.com/watch?v=brWqY4Wads4)] [[PDF](http://www.kiss.caltech.edu/workshops/technosignatures/presentations/Wagstaff.pdf)]

**GitHub Python Trending**:

* 2019: Jul 8th-9th, Apr 5th-6th, Feb 10th-11th, Jan 23rd-24th, Jan 10th-14th
* 2018: Jun 15th, Dec 8th-9th

**Miscellaneous**:

* [PythonAwesome](https://pythonawesome.com/a-python-toolkit-for-scalable-outlier-detection/)
* [awesome-python](https://github.com/uhub/awesome-python)
* [PapersWithCode](https://paperswithcode.com/task/anomaly-detection)

Frequently Asked Questions
---

### What is Next?

This is the central place to track important things to be fixed/added:

* GPU support (note that keras with the TensorFlow backend will automatically run on a GPU; auto_encoder_example.py takes around 96.95 seconds on an RTX 2060 GPU).
* Installation efficiency improvements, such as using docker.
* Add a contact channel via [Gitter](https://gitter.im).
* Support additional languages, see [Manage Translations](https://docs.readthedocs.io/en/latest/guides/manage-translations.html).
* Fix the bug that numba-enabled functions may be excluded from code coverage.
* Decide which Python interpreter readthedocs should use. 3.X invokes Python 3.7, which has no TF support for now.

Feel free to open an issue report if needed. See [Issues](https://github.com/yzhao062/pyod/issues).

### How to Contribute

You are welcome to contribute to this exciting project:

* Please first check the Issue list for the "help wanted" tag and comment on the one you are interested in. We will assign the issue to you.
* Fork the master branch and add your improvement/modification/fix.
* Create a pull request to the **development branch** and follow the pull request template: [PR template](https://github.com/yzhao062/pyod/blob/master/PULL_REQUEST_TEMPLATE.md)
* Automatic tests will be triggered. Make sure all tests pass. Please make sure all added modules are accompanied by proper test functions.
To make sure the code has the same style and standard, please refer to abod.py, hbos.py, or feature_bagging.py for examples.

You are also welcome to share your ideas by opening an issue or dropping me an email at [<EMAIL>](mailto:zhaoy%40cmu.edu) :)

### Inclusion Criteria

Similarly to [scikit-learn](https://scikit-learn.org/stable/faq.html#what-are-the-inclusion-criteria-for-new-algorithms), we mainly consider well-established algorithms for inclusion. A rule of thumb is at least two years since publication, 50+ citations, and demonstrated usefulness. However, we encourage the author(s) of newly proposed models to share and add their implementation to PyOD to boost ML accessibility and reproducibility. This exception only applies if you can commit to the maintenance of your model for at least a two-year period.

About us
---

### Core Development Team

<NAME> (Assistant Professor @ USC, Ph.D. @ CMU):

* Initialized the project in 2017
* [Homepage](https://viterbi-web.usc.edu/~yzhao010/)
* [LinkedIn (<NAME>)](https://www.linkedin.com/in/yzhao062/)

<NAME> (Data Scientist at RBC; MSc in Computer Science from University of Toronto):

* Joined in 2018
* [LinkedIn (<NAME>)](https://www.linkedin.com/in/zain-nasrullah-097a2b85)

Winston (Zheng) Li (Founder of [arima](https://www.arimadata.com/), Part-time Instructor @ Northeastern University):

* Joined in 2018
* [LinkedIn (Winston Li)](https://www.linkedin.com/in/winstonl)

<NAME> (Senior AI/ML & Software Systems Engineer @ General Motors):

* Joined in 2019
* [LinkedIn (<NAME>)](https://www.linkedin.com/in/yahya-almardeny/)

<NAME> (DOE Joint Genome Institute):

* Joined in 2020 (our Conda maintainer)
* [GitHub (<NAME>)](https://github.com/apcamargo)

Dr <NAME> (Research Associate @ University of Liverpool):

* Joined in 2020 (implemented the VAE and extended it to beta-VAE)
* [Homepage (Dr Andrij Vasylenko)](https://www.liverpool.ac.uk/chemistry/staff/andrij-vasylenko/)

<NAME> (Ph.D. Student @ Radboud University):

* Joined in 2021
* [LinkedIn (<NAME>)](https://nl.linkedin.com/in/roel-bouman-18b5b9167)

<NAME> (Data Scientist):

* Joined in 2021 (implemented DeepSVDD)
* [LinkedIn (<NAME>)](https://pl.linkedin.com/in/rafalbodziony)

Dr <NAME> (Associate Professor @ Aichi Institute of Technology):

* Joined in 2022 (implemented multiple OD algorithms such as KDE, sampling, and more)
* [Homepage (Dr Akira Tamamori)](https://researchmap.jp/tamamori?lang=en)

<NAME> (PhD student @ Erasmus Medical Centre Metabolomics & Genetics):

* Joined in 2022 (implemented AnoGAN and more)
* [GitHub (<NAME>)](https://github.com/mbongaerts)

<NAME> (PhD Researcher @ National University of Singapore):

* Joined in 2022 (implemented LUNAR)
* [LinkedIn (<NAME>)](https://www.linkedin.com/in/adam-goodge-33908691/)

<NAME> (Machine Learning Developer; MSc Student @ University of the Free State):

* Joined in 2022 (implemented integration with PyThresh and more)
* [LinkedIn (<NAME>)](https://www.linkedin.com/in/daniel-kulik-148256223)
# django-staticfilesplus
# Installation and Configuration

```
$ pip install django-staticfilesplus
```

In settings.py replace the default STATICFILES_FINDERS definition with this:

```
STATICFILES_FINDERS = (
    'staticfilesplus.finders.FileSystemFinder',
    'staticfilesplus.finders.AppDirectoriesFinder',
)
```

And enable the default processors:

```
STATICFILESPLUS_PROCESSORS = (
    'staticfilesplus.processors.less.LESSProcessor',
    'staticfilesplus.processors.js.JavaScriptProcessor',
)
```

Assuming that django.contrib.staticfiles is in your INSTALLED_APPS (which it is by default) you're ready to go.

STATICFILESPLUS_PROCESSORS

| Default: | `()` |
| --- | --- |

A list of active processors. See the processor documentation for details on how these work.

| Default: | `os.path.join(STATIC_ROOT, 'staticfilesplus_tmp')` |
| --- | --- |

A directory in which to write temporary working files. If it doesn't exist it will be created.

# JavaScript Processor

This adds support for Sprockets-like dependency management for JavaScript files. Dependencies between files are specified by specially formatted comments (known as directives) at the top of the files. The processor compiles all these dependencies together into a single file.

Ensure that the processor is in the list of enabled processors in settings.py:

```
STATICFILESPLUS_PROCESSORS = (
    ...
    'staticfilesplus.processors.js.JavaScriptProcessor',
    ...
)
```

The directive processor scans for comment lines beginning with = in comment blocks at the top of the file.

```
//= require jquery
//= require lib/myplugin.js
```

The first word immediately following = specifies the directive name. Any words following the directive name are treated as arguments. Arguments may be placed in single or double quotes if they contain spaces, similar to commands in the Unix shell.

Note: Non-directive comment lines will be preserved in the final asset, but directive comments are stripped after processing. The processor will not look for directives in comment blocks that occur after the first line of code.

The directive processor understands comment blocks in three formats:

```
/* Multi-line comment blocks (CSS, SCSS, JavaScript)
 *= require foo
 */

// Single-line comment blocks (SCSS, JavaScript)
//= require foo

# Single-line comment blocks (CoffeeScript)
#= require foo
```

Directives are comments of the form:

```
//= <directive> <path>
```

Path arguments are parsed like shell arguments, so they can be unquoted if they contain no special characters (like spaces) or surrounded with single or double quotes. To maintain compatibility with Sprockets you can omit the .js extension from paths, but I prefer to be explicit and include the extension:

```
//= require <filename>
```

Currently, we only support two of the standard Sprockets directives:

```
/*
 *= require some-library
 *= require you-can-explicitly-specify-extension.js
 */

//= require "quoting works just like in shell"
//= require ./paths/starting/with-a-dot/are-relative.js
```

Files with the extension .djtmpl.js will first be processed by Django's templating engine. You should use this feature sparingly (it's quite a nasty hack) but it can help to avoid repeating configuration values (particularly your URL config) in both Python and JavaScript. In the example below, config.djtmpl.js pulls in a couple of values from Django's configuration and then application.js requires it and uses those values.
```
/* config.djtmpl.js */
var URLS = { my_endpoint: "{% url 'my_endpoint' %}" };
var SETTINGS = { title: "{{ settings.SOME_TITLE }}" };
```

```
/* application.js */
//= require config.djtmpl.js

$.ajax(URLS.my_endpoint);
console.log(SETTINGS.title);
```
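To make the compilation step concrete, here is a sketch of roughly what the single compiled asset for the two files above would contain. The rendered URL and title values are made-up placeholders; the point is that the template has been rendered, the `//= require` directive line has been stripped, and ordinary comments survive:

```
/* Compiled output (a sketch, not verbatim processor output) */

/* config.djtmpl.js */
// The Django template has been rendered; these values are illustrative only.
var URLS = { my_endpoint: "/my-endpoint/" };
var SETTINGS = { title: "My App" };

/* application.js */
// The //= require directive line has been stripped; this comment remains.
$.ajax(URLS.my_endpoint);
console.log(SETTINGS.title);
```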
@aws-sdk/client-sagemaker
npm
JavaScript
@aws-sdk/client-sagemaker
===

Description
---

AWS SDK for JavaScript SageMaker Client for Node.js, Browser and React Native. Provides APIs for creating and managing SageMaker resources.

Other Resources:

* [SageMaker Developer Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/whatis.html#first-time-user)
* [Amazon Augmented AI Runtime API Reference](https://docs.aws.amazon.com/augmented-ai/2019-11-07/APIReference/Welcome.html)

Installing
---

To install this package, add or install @aws-sdk/client-sagemaker using your favorite package manager:

* `npm install @aws-sdk/client-sagemaker`
* `yarn add @aws-sdk/client-sagemaker`
* `pnpm add @aws-sdk/client-sagemaker`

Getting Started
---

### Import

The AWS SDK is modularized by clients and commands. To send a request, you only need to import the `SageMakerClient` and the commands you need, for example `ListActionsCommand`:

```
// ES5 example
const { SageMakerClient, ListActionsCommand } = require("@aws-sdk/client-sagemaker");
```

```
// ES6+ example
import { SageMakerClient, ListActionsCommand } from "@aws-sdk/client-sagemaker";
```

### Usage

To send a request, you:

* Initialize the client with configuration (e.g. credentials, region).
* Initialize the command with input parameters.
* Call `send` on the client with the command object as input.
* If you are using a custom http handler, you may call `destroy()` to close open connections.

```
// a client can be shared by different commands.
const client = new SageMakerClient({ region: "REGION" });

const params = { /** input parameters */ };
const command = new ListActionsCommand(params);
```

#### Async/await

We recommend using the [await](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/await) operator to wait for the promise returned by the send operation, as follows:

```
// async/await.
try {
  const data = await client.send(command);
  // process data.
} catch (error) {
  // error handling.
} finally {
  // finally.
}
```

Async-await is clean, concise, intuitive, easy to debug, and has better error handling than Promise chains or callbacks.

#### Promises

You can also use [Promise chaining](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Using_promises#chaining) to execute the send operation.

```
client.send(command).then(
  (data) => {
    // process data.
  },
  (error) => {
    // error handling.
  }
);
```

Promises can also be called using `.catch()` and `.finally()` as follows:

```
client
  .send(command)
  .then((data) => {
    // process data.
  })
  .catch((error) => {
    // error handling.
  })
  .finally(() => {
    // finally.
  });
```

#### Callbacks

We do not recommend using callbacks because of [callback hell](http://callbackhell.com/), but they are supported by the send operation.

```
// callbacks.
client.send(command, (err, data) => {
  // process err and data.
});
```

#### v2 compatible style

The client can also send requests using the v2 compatible style. However, it results in a bigger bundle size and may be dropped in the next major version. More details in the blog post on [modular packages in AWS SDK for JavaScript](https://aws.amazon.com/blogs/developer/modular-packages-in-aws-sdk-for-javascript/).

```
import * as AWS from "@aws-sdk/client-sagemaker";
const client = new AWS.SageMaker({ region: "REGION" });

// async/await.
try {
  const data = await client.listActions(params);
  // process data.
} catch (error) {
  // error handling.
}

// Promises.
client
  .listActions(params)
  .then((data) => {
    // process data.
  })
  .catch((error) => {
    // error handling.
  });

// callbacks.
client.listActions(params, (err, data) => {
  // process err and data.
});
```

### Troubleshooting

When the service returns an exception, the error will include the exception information, as well as response metadata (e.g. request id).

```
try {
  const data = await client.send(command);
  // process data.
} catch (error) {
  const { requestId, cfId, extendedRequestId } = error.$metadata;
  console.log({ requestId, cfId, extendedRequestId });
  /**
   * The keys within exceptions are also parsed.
   * You can access them by specifying exception names:
   * if (error.name === 'SomeServiceException') {
   *   const value = error.specialKeyInException;
   * }
   */
}
```

Getting Help
---

Please use these community resources for getting help. We use the GitHub issues for tracking bugs and feature requests, but have limited bandwidth to address them.

* Visit the [Developer Guide](https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/welcome.html) or [API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/index.html).
* Check out the blog posts tagged with [`aws-sdk-js`](https://aws.amazon.com/blogs/developer/tag/aws-sdk-js/) on the AWS Developer Blog.
* Ask a question on [StackOverflow](https://stackoverflow.com/questions/tagged/aws-sdk-js) and tag it with `aws-sdk-js`.
* Join the AWS JavaScript community on [gitter](https://gitter.im/aws/aws-sdk-js-v3).
* If it turns out that you may have found a bug, please [open an issue](https://github.com/aws/aws-sdk-js-v3/issues/new/choose).

To test your universal JavaScript code in Node.js, browser and react-native environments, visit our [code samples repo](https://github.com/aws-samples/aws-sdk-js-tests).

Contributing
---

This client code is generated automatically. Any modifications will be overwritten the next time the `@aws-sdk/client-sagemaker` package is updated. To contribute to the client you can check our [generate clients scripts](https://github.com/aws/aws-sdk-js-v3/tree/main/scripts/generate-clients).

License
---

This SDK is distributed under the [Apache License, Version 2.0](http://www.apache.org/licenses/LICENSE-2.0); see LICENSE for more information.
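None of the snippets above show `destroy()` in context, so here is a minimal end-to-end sketch putting the pieces together. The region value is an assumption, and the `ActionSummaries` output field is assumed from the ListActions API shape; treat both as illustrative rather than authoritative:

```
// A minimal end-to-end sketch (not from the original README):
// initialize the client, send a command, handle errors, then clean up.
import { SageMakerClient, ListActionsCommand } from "@aws-sdk/client-sagemaker";

const client = new SageMakerClient({ region: "us-east-1" }); // region is an assumption

try {
  // An empty input lists actions without filters.
  const data = await client.send(new ListActionsCommand({}));
  console.log(data.ActionSummaries); // output field name assumed from the API shape
} catch (error) {
  console.error(error.name, error.$metadata?.requestId);
} finally {
  client.destroy(); // close open connections once the client is no longer needed
}
```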
[Client Commands (Operations List)](#client-commands-operations-list) --- AddAssociation [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/addassociationcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/addassociationcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/addassociationcommandoutput.html) AddTags [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/addtagscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/addtagscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/addtagscommandoutput.html) AssociateTrialComponent [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/associatetrialcomponentcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/associatetrialcomponentcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/associatetrialcomponentcommandoutput.html) BatchDescribeModelPackage [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/batchdescribemodelpackagecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/batchdescribemodelpackagecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/batchdescribemodelpackagecommandoutput.html) CreateAction [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/createactioncommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createactioncommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createactioncommandoutput.html) CreateAlgorithm [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/createalgorithmcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createalgorithmcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createalgorithmcommandoutput.html) CreateApp [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/createappcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createappcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createappcommandoutput.html) CreateAppImageConfig [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/createappimageconfigcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createappimageconfigcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createappimageconfigcommandoutput.html) CreateArtifact [Command API 
Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/createartifactcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createartifactcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createartifactcommandoutput.html) CreateAutoMLJob [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/createautomljobcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createautomljobcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createautomljobcommandoutput.html) CreateAutoMLJobV2 [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/createautomljobv2command.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createautomljobv2commandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createautomljobv2commandoutput.html) CreateCodeRepository [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/createcoderepositorycommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createcoderepositorycommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createcoderepositorycommandoutput.html) CreateCompilationJob [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/createcompilationjobcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createcompilationjobcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createcompilationjobcommandoutput.html) CreateContext [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/createcontextcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createcontextcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createcontextcommandoutput.html) CreateDataQualityJobDefinition [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/createdataqualityjobdefinitioncommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createdataqualityjobdefinitioncommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createdataqualityjobdefinitioncommandoutput.html) CreateDeviceFleet [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/createdevicefleetcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createdevicefleetcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createdevicefleetcommandoutput.html) CreateDomain [Command API 
Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/createdomaincommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createdomaincommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createdomaincommandoutput.html) CreateEdgeDeploymentPlan [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/createedgedeploymentplancommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createedgedeploymentplancommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createedgedeploymentplancommandoutput.html) CreateEdgeDeploymentStage [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/createedgedeploymentstagecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createedgedeploymentstagecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createedgedeploymentstagecommandoutput.html) CreateEdgePackagingJob [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/createedgepackagingjobcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createedgepackagingjobcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createedgepackagingjobcommandoutput.html) CreateEndpoint [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/createendpointcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createendpointcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createendpointcommandoutput.html) CreateEndpointConfig [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/createendpointconfigcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createendpointconfigcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createendpointconfigcommandoutput.html) CreateExperiment [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/createexperimentcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createexperimentcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createexperimentcommandoutput.html) CreateFeatureGroup [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/createfeaturegroupcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createfeaturegroupcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createfeaturegroupcommandoutput.html) CreateFlowDefinition 
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/createflowdefinitioncommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createflowdefinitioncommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createflowdefinitioncommandoutput.html) CreateHub [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/createhubcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createhubcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createhubcommandoutput.html) CreateHumanTaskUi [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/createhumantaskuicommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createhumantaskuicommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createhumantaskuicommandoutput.html) CreateHyperParameterTuningJob [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/createhyperparametertuningjobcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createhyperparametertuningjobcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createhyperparametertuningjobcommandoutput.html) CreateImage [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/createimagecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createimagecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createimagecommandoutput.html) CreateImageVersion [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/createimageversioncommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createimageversioncommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createimageversioncommandoutput.html) CreateInferenceExperiment [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/createinferenceexperimentcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createinferenceexperimentcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createinferenceexperimentcommandoutput.html) CreateInferenceRecommendationsJob [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/createinferencerecommendationsjobcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createinferencerecommendationsjobcommandinput.html) / 
[Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createinferencerecommendationsjobcommandoutput.html) CreateLabelingJob [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/createlabelingjobcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createlabelingjobcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createlabelingjobcommandoutput.html) CreateModel [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/createmodelcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createmodelcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createmodelcommandoutput.html) CreateModelBiasJobDefinition [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/createmodelbiasjobdefinitioncommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createmodelbiasjobdefinitioncommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createmodelbiasjobdefinitioncommandoutput.html) CreateModelCard [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/createmodelcardcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createmodelcardcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createmodelcardcommandoutput.html) CreateModelCardExportJob [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/createmodelcardexportjobcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createmodelcardexportjobcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createmodelcardexportjobcommandoutput.html) CreateModelExplainabilityJobDefinition [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/createmodelexplainabilityjobdefinitioncommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createmodelexplainabilityjobdefinitioncommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createmodelexplainabilityjobdefinitioncommandoutput.html) CreateModelPackage [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/createmodelpackagecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createmodelpackagecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createmodelpackagecommandoutput.html) CreateModelPackageGroup [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/createmodelpackagegroupcommand.html) / 
[Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createmodelpackagegroupcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createmodelpackagegroupcommandoutput.html) CreateModelQualityJobDefinition [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/createmodelqualityjobdefinitioncommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createmodelqualityjobdefinitioncommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createmodelqualityjobdefinitioncommandoutput.html) CreateMonitoringSchedule [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/createmonitoringschedulecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createmonitoringschedulecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createmonitoringschedulecommandoutput.html) CreateNotebookInstance [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/createnotebookinstancecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createnotebookinstancecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createnotebookinstancecommandoutput.html) CreateNotebookInstanceLifecycleConfig [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/createnotebookinstancelifecycleconfigcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createnotebookinstancelifecycleconfigcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createnotebookinstancelifecycleconfigcommandoutput.html) CreatePipeline [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/createpipelinecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createpipelinecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createpipelinecommandoutput.html) CreatePresignedDomainUrl [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/createpresigneddomainurlcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createpresigneddomainurlcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createpresigneddomainurlcommandoutput.html) CreatePresignedNotebookInstanceUrl [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/createpresignednotebookinstanceurlcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createpresignednotebookinstanceurlcommandinput.html) / 
[Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createpresignednotebookinstanceurlcommandoutput.html) CreateProcessingJob [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/createprocessingjobcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createprocessingjobcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createprocessingjobcommandoutput.html) CreateProject [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/createprojectcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createprojectcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createprojectcommandoutput.html) CreateSpace [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/createspacecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createspacecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createspacecommandoutput.html) CreateStudioLifecycleConfig [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/createstudiolifecycleconfigcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createstudiolifecycleconfigcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createstudiolifecycleconfigcommandoutput.html) CreateTrainingJob [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/createtrainingjobcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createtrainingjobcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createtrainingjobcommandoutput.html) CreateTransformJob [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/createtransformjobcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createtransformjobcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createtransformjobcommandoutput.html) CreateTrial [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/createtrialcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createtrialcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createtrialcommandoutput.html) CreateTrialComponent [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/createtrialcomponentcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createtrialcomponentcommandinput.html) / 
[Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createtrialcomponentcommandoutput.html) CreateUserProfile [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/createuserprofilecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createuserprofilecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createuserprofilecommandoutput.html) CreateWorkforce [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/createworkforcecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createworkforcecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createworkforcecommandoutput.html) CreateWorkteam [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/createworkteamcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createworkteamcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/createworkteamcommandoutput.html) DeleteAction [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/deleteactioncommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deleteactioncommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deleteactioncommandoutput.html) DeleteAlgorithm [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/deletealgorithmcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deletealgorithmcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deletealgorithmcommandoutput.html) DeleteApp [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/deleteappcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deleteappcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deleteappcommandoutput.html) DeleteAppImageConfig [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/deleteappimageconfigcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deleteappimageconfigcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deleteappimageconfigcommandoutput.html) DeleteArtifact [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/deleteartifactcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deleteartifactcommandinput.html) / 
[Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deleteartifactcommandoutput.html) DeleteAssociation [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/deleteassociationcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deleteassociationcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deleteassociationcommandoutput.html) DeleteCodeRepository [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/deletecoderepositorycommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deletecoderepositorycommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deletecoderepositorycommandoutput.html) DeleteContext [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/deletecontextcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deletecontextcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deletecontextcommandoutput.html) DeleteDataQualityJobDefinition [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/deletedataqualityjobdefinitioncommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deletedataqualityjobdefinitioncommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deletedataqualityjobdefinitioncommandoutput.html) DeleteDeviceFleet [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/deletedevicefleetcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deletedevicefleetcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deletedevicefleetcommandoutput.html) DeleteDomain [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/deletedomaincommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deletedomaincommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deletedomaincommandoutput.html) DeleteEdgeDeploymentPlan [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/deleteedgedeploymentplancommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deleteedgedeploymentplancommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deleteedgedeploymentplancommandoutput.html) DeleteEdgeDeploymentStage [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/deleteedgedeploymentstagecommand.html) / 
[Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deleteedgedeploymentstagecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deleteedgedeploymentstagecommandoutput.html) DeleteEndpoint [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/deleteendpointcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deleteendpointcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deleteendpointcommandoutput.html) DeleteEndpointConfig [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/deleteendpointconfigcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deleteendpointconfigcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deleteendpointconfigcommandoutput.html) DeleteExperiment [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/deleteexperimentcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deleteexperimentcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deleteexperimentcommandoutput.html) DeleteFeatureGroup [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/deletefeaturegroupcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deletefeaturegroupcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deletefeaturegroupcommandoutput.html) DeleteFlowDefinition [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/deleteflowdefinitioncommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deleteflowdefinitioncommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deleteflowdefinitioncommandoutput.html) DeleteHub [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/deletehubcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deletehubcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deletehubcommandoutput.html) DeleteHubContent [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/deletehubcontentcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deletehubcontentcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deletehubcontentcommandoutput.html) DeleteHumanTaskUi [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/deletehumantaskuicommand.html) / 
[Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deletehumantaskuicommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deletehumantaskuicommandoutput.html) DeleteImage [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/deleteimagecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deleteimagecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deleteimagecommandoutput.html) DeleteImageVersion [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/deleteimageversioncommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deleteimageversioncommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deleteimageversioncommandoutput.html) DeleteInferenceExperiment [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/deleteinferenceexperimentcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deleteinferenceexperimentcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deleteinferenceexperimentcommandoutput.html) DeleteModel [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/deletemodelcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deletemodelcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deletemodelcommandoutput.html) DeleteModelBiasJobDefinition [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/deletemodelbiasjobdefinitioncommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deletemodelbiasjobdefinitioncommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deletemodelbiasjobdefinitioncommandoutput.html) DeleteModelCard [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/deletemodelcardcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deletemodelcardcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deletemodelcardcommandoutput.html) DeleteModelExplainabilityJobDefinition [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/deletemodelexplainabilityjobdefinitioncommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deletemodelexplainabilityjobdefinitioncommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deletemodelexplainabilityjobdefinitioncommandoutput.html) DeleteModelPackage [Command API 
Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/deletemodelpackagecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deletemodelpackagecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deletemodelpackagecommandoutput.html) DeleteModelPackageGroup [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/deletemodelpackagegroupcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deletemodelpackagegroupcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deletemodelpackagegroupcommandoutput.html) DeleteModelPackageGroupPolicy [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/deletemodelpackagegrouppolicycommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deletemodelpackagegrouppolicycommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deletemodelpackagegrouppolicycommandoutput.html) DeleteModelQualityJobDefinition [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/deletemodelqualityjobdefinitioncommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deletemodelqualityjobdefinitioncommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deletemodelqualityjobdefinitioncommandoutput.html) DeleteMonitoringSchedule [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/deletemonitoringschedulecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deletemonitoringschedulecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deletemonitoringschedulecommandoutput.html) DeleteNotebookInstance [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/deletenotebookinstancecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deletenotebookinstancecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deletenotebookinstancecommandoutput.html) DeleteNotebookInstanceLifecycleConfig [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/deletenotebookinstancelifecycleconfigcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deletenotebookinstancelifecycleconfigcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deletenotebookinstancelifecycleconfigcommandoutput.html) DeletePipeline [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/deletepipelinecommand.html) / 
[Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deletepipelinecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deletepipelinecommandoutput.html) DeleteProject [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/deleteprojectcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deleteprojectcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deleteprojectcommandoutput.html) DeleteSpace [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/deletespacecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deletespacecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deletespacecommandoutput.html) DeleteStudioLifecycleConfig [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/deletestudiolifecycleconfigcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deletestudiolifecycleconfigcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deletestudiolifecycleconfigcommandoutput.html) DeleteTags [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/deletetagscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deletetagscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deletetagscommandoutput.html) DeleteTrial [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/deletetrialcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deletetrialcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deletetrialcommandoutput.html) DeleteTrialComponent [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/deletetrialcomponentcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deletetrialcomponentcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deletetrialcomponentcommandoutput.html) DeleteUserProfile [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/deleteuserprofilecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deleteuserprofilecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deleteuserprofilecommandoutput.html) DeleteWorkforce [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/deleteworkforcecommand.html) / 
[Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deleteworkforcecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deleteworkforcecommandoutput.html)

- DeleteWorkteam [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/deleteworkteamcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deleteworkteamcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deleteworkteamcommandoutput.html)
- DeregisterDevices [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/deregisterdevicescommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deregisterdevicescommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/deregisterdevicescommandoutput.html)
- DescribeAction [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/describeactioncommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describeactioncommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describeactioncommandoutput.html)
- DescribeAlgorithm [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/describealgorithmcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describealgorithmcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describealgorithmcommandoutput.html)
- DescribeApp [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/describeappcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describeappcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describeappcommandoutput.html)
- DescribeAppImageConfig [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/describeappimageconfigcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describeappimageconfigcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describeappimageconfigcommandoutput.html)
- DescribeArtifact [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/describeartifactcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describeartifactcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describeartifactcommandoutput.html)
- DescribeAutoMLJob [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/describeautomljobcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describeautomljobcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describeautomljobcommandoutput.html)
- DescribeAutoMLJobV2 [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/describeautomljobv2command.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describeautomljobv2commandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describeautomljobv2commandoutput.html)
- DescribeCodeRepository [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/describecoderepositorycommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describecoderepositorycommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describecoderepositorycommandoutput.html)
- DescribeCompilationJob [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/describecompilationjobcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describecompilationjobcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describecompilationjobcommandoutput.html)
- DescribeContext [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/describecontextcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describecontextcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describecontextcommandoutput.html)
- DescribeDataQualityJobDefinition [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/describedataqualityjobdefinitioncommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describedataqualityjobdefinitioncommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describedataqualityjobdefinitioncommandoutput.html)
- DescribeDevice [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/describedevicecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describedevicecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describedevicecommandoutput.html)
- DescribeDeviceFleet [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/describedevicefleetcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describedevicefleetcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describedevicefleetcommandoutput.html)
- DescribeDomain [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/describedomaincommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describedomaincommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describedomaincommandoutput.html)
- DescribeEdgeDeploymentPlan [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/describeedgedeploymentplancommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describeedgedeploymentplancommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describeedgedeploymentplancommandoutput.html)
- DescribeEdgePackagingJob [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/describeedgepackagingjobcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describeedgepackagingjobcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describeedgepackagingjobcommandoutput.html)
- DescribeEndpoint [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/describeendpointcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describeendpointcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describeendpointcommandoutput.html)
- DescribeEndpointConfig [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/describeendpointconfigcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describeendpointconfigcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describeendpointconfigcommandoutput.html)
- DescribeExperiment [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/describeexperimentcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describeexperimentcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describeexperimentcommandoutput.html)
- DescribeFeatureGroup [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/describefeaturegroupcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describefeaturegroupcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describefeaturegroupcommandoutput.html)
- DescribeFeatureMetadata [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/describefeaturemetadatacommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describefeaturemetadatacommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describefeaturemetadatacommandoutput.html)
- DescribeFlowDefinition [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/describeflowdefinitioncommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describeflowdefinitioncommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describeflowdefinitioncommandoutput.html)
- DescribeHub [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/describehubcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describehubcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describehubcommandoutput.html)
- DescribeHubContent [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/describehubcontentcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describehubcontentcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describehubcontentcommandoutput.html)
- DescribeHumanTaskUi [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/describehumantaskuicommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describehumantaskuicommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describehumantaskuicommandoutput.html)
- DescribeHyperParameterTuningJob [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/describehyperparametertuningjobcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describehyperparametertuningjobcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describehyperparametertuningjobcommandoutput.html)
- DescribeImage [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/describeimagecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describeimagecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describeimagecommandoutput.html)
- DescribeImageVersion [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/describeimageversioncommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describeimageversioncommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describeimageversioncommandoutput.html)
- DescribeInferenceExperiment [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/describeinferenceexperimentcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describeinferenceexperimentcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describeinferenceexperimentcommandoutput.html)
- DescribeInferenceRecommendationsJob [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/describeinferencerecommendationsjobcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describeinferencerecommendationsjobcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describeinferencerecommendationsjobcommandoutput.html)
- DescribeLabelingJob [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/describelabelingjobcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describelabelingjobcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describelabelingjobcommandoutput.html)
- DescribeLineageGroup [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/describelineagegroupcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describelineagegroupcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describelineagegroupcommandoutput.html)
- DescribeModel [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/describemodelcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describemodelcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describemodelcommandoutput.html)
- DescribeModelBiasJobDefinition [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/describemodelbiasjobdefinitioncommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describemodelbiasjobdefinitioncommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describemodelbiasjobdefinitioncommandoutput.html)
- DescribeModelCard [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/describemodelcardcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describemodelcardcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describemodelcardcommandoutput.html)
- DescribeModelCardExportJob [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/describemodelcardexportjobcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describemodelcardexportjobcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describemodelcardexportjobcommandoutput.html)
- DescribeModelExplainabilityJobDefinition [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/describemodelexplainabilityjobdefinitioncommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describemodelexplainabilityjobdefinitioncommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describemodelexplainabilityjobdefinitioncommandoutput.html)
- DescribeModelPackage [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/describemodelpackagecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describemodelpackagecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describemodelpackagecommandoutput.html)
- DescribeModelPackageGroup [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/describemodelpackagegroupcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describemodelpackagegroupcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describemodelpackagegroupcommandoutput.html)
- DescribeModelQualityJobDefinition [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/describemodelqualityjobdefinitioncommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describemodelqualityjobdefinitioncommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describemodelqualityjobdefinitioncommandoutput.html)
- DescribeMonitoringSchedule [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/describemonitoringschedulecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describemonitoringschedulecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describemonitoringschedulecommandoutput.html)
- DescribeNotebookInstance [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/describenotebookinstancecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describenotebookinstancecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describenotebookinstancecommandoutput.html)
- DescribeNotebookInstanceLifecycleConfig [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/describenotebookinstancelifecycleconfigcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describenotebookinstancelifecycleconfigcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describenotebookinstancelifecycleconfigcommandoutput.html)
- DescribePipeline [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/describepipelinecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describepipelinecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describepipelinecommandoutput.html)
- DescribePipelineDefinitionForExecution [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/describepipelinedefinitionforexecutioncommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describepipelinedefinitionforexecutioncommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describepipelinedefinitionforexecutioncommandoutput.html)
- DescribePipelineExecution [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/describepipelineexecutioncommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describepipelineexecutioncommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describepipelineexecutioncommandoutput.html)
- DescribeProcessingJob [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/describeprocessingjobcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describeprocessingjobcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describeprocessingjobcommandoutput.html)
- DescribeProject [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/describeprojectcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describeprojectcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describeprojectcommandoutput.html)
- DescribeSpace [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/describespacecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describespacecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describespacecommandoutput.html)
- DescribeStudioLifecycleConfig [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/describestudiolifecycleconfigcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describestudiolifecycleconfigcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describestudiolifecycleconfigcommandoutput.html)
- DescribeSubscribedWorkteam [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/describesubscribedworkteamcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describesubscribedworkteamcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describesubscribedworkteamcommandoutput.html)
- DescribeTrainingJob [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/describetrainingjobcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describetrainingjobcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describetrainingjobcommandoutput.html)
- DescribeTransformJob [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/describetransformjobcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describetransformjobcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describetransformjobcommandoutput.html)
- DescribeTrial [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/describetrialcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describetrialcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describetrialcommandoutput.html)
- DescribeTrialComponent [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/describetrialcomponentcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describetrialcomponentcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describetrialcomponentcommandoutput.html)
- DescribeUserProfile [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/describeuserprofilecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describeuserprofilecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describeuserprofilecommandoutput.html)
- DescribeWorkforce [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/describeworkforcecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describeworkforcecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describeworkforcecommandoutput.html)
- DescribeWorkteam [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/describeworkteamcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describeworkteamcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/describeworkteamcommandoutput.html)
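Every command in this reference follows the same AWS SDK for JavaScript v3 calling pattern: construct a `SageMakerClient`, build the command object from its Input shape, and pass it to `client.send()`, which resolves with the Output shape. A minimal sketch using `DescribeTrainingJobCommand`; the region and training-job name are placeholders, not values from this reference:

```ts
import { SageMakerClient, DescribeTrainingJobCommand } from "@aws-sdk/client-sagemaker";

// One client instance can be reused for any of the commands listed here.
const client = new SageMakerClient({ region: "us-east-1" });

// Input shape: DescribeTrainingJobCommandInput; resolves with DescribeTrainingJobCommandOutput.
const job = await client.send(
  new DescribeTrainingJobCommand({ TrainingJobName: "my-training-job" })
);
console.log(job.TrainingJobStatus);
```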
- DisableSagemakerServicecatalogPortfolio [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/disablesagemakerservicecatalogportfoliocommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/disablesagemakerservicecatalogportfoliocommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/disablesagemakerservicecatalogportfoliocommandoutput.html)
- DisassociateTrialComponent [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/disassociatetrialcomponentcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/disassociatetrialcomponentcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/disassociatetrialcomponentcommandoutput.html)
- EnableSagemakerServicecatalogPortfolio [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/enablesagemakerservicecatalogportfoliocommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/enablesagemakerservicecatalogportfoliocommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/enablesagemakerservicecatalogportfoliocommandoutput.html)
- GetDeviceFleetReport [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/getdevicefleetreportcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/getdevicefleetreportcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/getdevicefleetreportcommandoutput.html)
- GetLineageGroupPolicy [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/getlineagegrouppolicycommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/getlineagegrouppolicycommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/getlineagegrouppolicycommandoutput.html)
- GetModelPackageGroupPolicy [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/getmodelpackagegrouppolicycommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/getmodelpackagegrouppolicycommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/getmodelpackagegrouppolicycommandoutput.html)
- GetSagemakerServicecatalogPortfolioStatus [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/getsagemakerservicecatalogportfoliostatuscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/getsagemakerservicecatalogportfoliostatuscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/getsagemakerservicecatalogportfoliostatuscommandoutput.html)
- GetScalingConfigurationRecommendation [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/getscalingconfigurationrecommendationcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/getscalingconfigurationrecommendationcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/getscalingconfigurationrecommendationcommandoutput.html)
- GetSearchSuggestions [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/getsearchsuggestionscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/getsearchsuggestionscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/getsearchsuggestionscommandoutput.html)
- ImportHubContent [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/importhubcontentcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/importhubcontentcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/importhubcontentcommandoutput.html)
- ListActions [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listactionscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listactionscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listactionscommandoutput.html)
- ListAlgorithms [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listalgorithmscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listalgorithmscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listalgorithmscommandoutput.html)
- ListAliases [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listaliasescommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listaliasescommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listaliasescommandoutput.html)
- ListAppImageConfigs [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listappimageconfigscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listappimageconfigscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listappimageconfigscommandoutput.html)
- ListApps [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listappscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listappscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listappscommandoutput.html)
- ListArtifacts [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listartifactscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listartifactscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listartifactscommandoutput.html)
- ListAssociations [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listassociationscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listassociationscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listassociationscommandoutput.html)
- ListAutoMLJobs [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listautomljobscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listautomljobscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listautomljobscommandoutput.html)
- ListCandidatesForAutoMLJob [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listcandidatesforautomljobcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listcandidatesforautomljobcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listcandidatesforautomljobcommandoutput.html)
- ListCodeRepositories [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listcoderepositoriescommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listcoderepositoriescommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listcoderepositoriescommandoutput.html)
- ListCompilationJobs [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listcompilationjobscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listcompilationjobscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listcompilationjobscommandoutput.html)
- ListContexts [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listcontextscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listcontextscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listcontextscommandoutput.html)
- ListDataQualityJobDefinitions [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listdataqualityjobdefinitionscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listdataqualityjobdefinitionscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listdataqualityjobdefinitionscommandoutput.html)
- ListDeviceFleets [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listdevicefleetscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listdevicefleetscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listdevicefleetscommandoutput.html)
- ListDevices [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listdevicescommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listdevicescommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listdevicescommandoutput.html)
- ListDomains [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listdomainscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listdomainscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listdomainscommandoutput.html)
- ListEdgeDeploymentPlans [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listedgedeploymentplanscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listedgedeploymentplanscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listedgedeploymentplanscommandoutput.html)
- ListEdgePackagingJobs [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listedgepackagingjobscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listedgepackagingjobscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listedgepackagingjobscommandoutput.html)
- ListEndpointConfigs [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listendpointconfigscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listendpointconfigscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listendpointconfigscommandoutput.html)
- ListEndpoints [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listendpointscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listendpointscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listendpointscommandoutput.html)
- ListExperiments [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listexperimentscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listexperimentscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listexperimentscommandoutput.html)
- ListFeatureGroups [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listfeaturegroupscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listfeaturegroupscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listfeaturegroupscommandoutput.html)
- ListFlowDefinitions [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listflowdefinitionscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listflowdefinitionscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listflowdefinitionscommandoutput.html)
- ListHubContents [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listhubcontentscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listhubcontentscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listhubcontentscommandoutput.html)
- ListHubContentVersions [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listhubcontentversionscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listhubcontentversionscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listhubcontentversionscommandoutput.html)
- ListHubs [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listhubscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listhubscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listhubscommandoutput.html)
- ListHumanTaskUis [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listhumantaskuiscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listhumantaskuiscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listhumantaskuiscommandoutput.html)
- ListHyperParameterTuningJobs [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listhyperparametertuningjobscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listhyperparametertuningjobscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listhyperparametertuningjobscommandoutput.html)
- ListImages [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listimagescommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listimagescommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listimagescommandoutput.html)
- ListImageVersions [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listimageversionscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listimageversionscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listimageversionscommandoutput.html)
- ListInferenceExperiments [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listinferenceexperimentscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listinferenceexperimentscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listinferenceexperimentscommandoutput.html)
- ListInferenceRecommendationsJobs [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listinferencerecommendationsjobscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listinferencerecommendationsjobscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listinferencerecommendationsjobscommandoutput.html)
- ListInferenceRecommendationsJobSteps [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listinferencerecommendationsjobstepscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listinferencerecommendationsjobstepscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listinferencerecommendationsjobstepscommandoutput.html)
- ListLabelingJobs [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listlabelingjobscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listlabelingjobscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listlabelingjobscommandoutput.html)
- ListLabelingJobsForWorkteam [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listlabelingjobsforworkteamcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listlabelingjobsforworkteamcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listlabelingjobsforworkteamcommandoutput.html)
- ListLineageGroups [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listlineagegroupscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listlineagegroupscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listlineagegroupscommandoutput.html)
- ListModelBiasJobDefinitions [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listmodelbiasjobdefinitionscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listmodelbiasjobdefinitionscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listmodelbiasjobdefinitionscommandoutput.html)
- ListModelCardExportJobs [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listmodelcardexportjobscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listmodelcardexportjobscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listmodelcardexportjobscommandoutput.html)
- ListModelCards [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listmodelcardscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listmodelcardscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listmodelcardscommandoutput.html)
- ListModelCardVersions [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listmodelcardversionscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listmodelcardversionscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listmodelcardversionscommandoutput.html)
- ListModelExplainabilityJobDefinitions [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listmodelexplainabilityjobdefinitionscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listmodelexplainabilityjobdefinitionscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listmodelexplainabilityjobdefinitionscommandoutput.html)
- ListModelMetadata [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listmodelmetadatacommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listmodelmetadatacommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listmodelmetadatacommandoutput.html)
- ListModelPackageGroups [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listmodelpackagegroupscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listmodelpackagegroupscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listmodelpackagegroupscommandoutput.html)
- ListModelPackages [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listmodelpackagescommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listmodelpackagescommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listmodelpackagescommandoutput.html)
- ListModelQualityJobDefinitions [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listmodelqualityjobdefinitionscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listmodelqualityjobdefinitionscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listmodelqualityjobdefinitionscommandoutput.html)
- ListModels [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listmodelscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listmodelscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listmodelscommandoutput.html)
- ListMonitoringAlertHistory [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listmonitoringalerthistorycommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listmonitoringalerthistorycommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listmonitoringalerthistorycommandoutput.html)
- ListMonitoringAlerts [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listmonitoringalertscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listmonitoringalertscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listmonitoringalertscommandoutput.html)
- ListMonitoringExecutions [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listmonitoringexecutionscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listmonitoringexecutionscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listmonitoringexecutionscommandoutput.html)
- ListMonitoringSchedules [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listmonitoringschedulescommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listmonitoringschedulescommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listmonitoringschedulescommandoutput.html)
- ListNotebookInstanceLifecycleConfigs [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listnotebookinstancelifecycleconfigscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listnotebookinstancelifecycleconfigscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listnotebookinstancelifecycleconfigscommandoutput.html)
- ListNotebookInstances [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listnotebookinstancescommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listnotebookinstancescommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listnotebookinstancescommandoutput.html)
- ListPipelineExecutions [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listpipelineexecutionscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listpipelineexecutionscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listpipelineexecutionscommandoutput.html)
- ListPipelineExecutionSteps [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listpipelineexecutionstepscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listpipelineexecutionstepscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listpipelineexecutionstepscommandoutput.html)
- ListPipelineParametersForExecution [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listpipelineparametersforexecutioncommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listpipelineparametersforexecutioncommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listpipelineparametersforexecutioncommandoutput.html)
- ListPipelines [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listpipelinescommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listpipelinescommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listpipelinescommandoutput.html)
- ListProcessingJobs [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listprocessingjobscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listprocessingjobscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listprocessingjobscommandoutput.html)
- ListProjects [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listprojectscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listprojectscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listprojectscommandoutput.html)
- ListResourceCatalogs [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listresourcecatalogscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listresourcecatalogscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listresourcecatalogscommandoutput.html)
- ListSpaces [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listspacescommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listspacescommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listspacescommandoutput.html)
- ListStageDevices [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/liststagedevicescommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/liststagedevicescommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/liststagedevicescommandoutput.html)
- ListStudioLifecycleConfigs [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/liststudiolifecycleconfigscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/liststudiolifecycleconfigscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/liststudiolifecycleconfigscommandoutput.html)
- ListSubscribedWorkteams [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listsubscribedworkteamscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listsubscribedworkteamscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listsubscribedworkteamscommandoutput.html)
- ListTags [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listtagscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listtagscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listtagscommandoutput.html)
- ListTrainingJobs [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listtrainingjobscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listtrainingjobscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listtrainingjobscommandoutput.html)
- ListTrainingJobsForHyperParameterTuningJob [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listtrainingjobsforhyperparametertuningjobcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listtrainingjobsforhyperparametertuningjobcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listtrainingjobsforhyperparametertuningjobcommandoutput.html)
- ListTransformJobs [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listtransformjobscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listtransformjobscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listtransformjobscommandoutput.html)
- ListTrialComponents [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listtrialcomponentscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listtrialcomponentscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listtrialcomponentscommandoutput.html)
- ListTrials [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listtrialscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listtrialscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listtrialscommandoutput.html)
- ListUserProfiles [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listuserprofilescommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listuserprofilescommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listuserprofilescommandoutput.html)
- ListWorkforces [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listworkforcescommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listworkforcescommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listworkforcescommandoutput.html)
- ListWorkteams [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/listworkteamscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listworkteamscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/listworkteamscommandoutput.html)
- PutModelPackageGroupPolicy [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/putmodelpackagegrouppolicycommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/putmodelpackagegrouppolicycommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/putmodelpackagegrouppolicycommandoutput.html)
- QueryLineage [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/querylineagecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/querylineagecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/querylineagecommandoutput.html)
- RegisterDevices [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/registerdevicescommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/registerdevicescommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/registerdevicescommandoutput.html)
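The `List*` commands above share a pagination contract: the Input shape accepts `MaxResults` and `NextToken`, and the Output shape returns a `NextToken` while further results remain (the package also exports generated `paginate*` helpers for these operations). A sketch that drains `ListTrainingJobs` manually; the page size is an arbitrary choice:

```ts
import { SageMakerClient, ListTrainingJobsCommand } from "@aws-sdk/client-sagemaker";

const client = new SageMakerClient({});

// Collect all training-job names by following NextToken until it is absent.
async function listAllTrainingJobNames(): Promise<string[]> {
  const names: string[] = [];
  let nextToken: string | undefined;
  do {
    const page = await client.send(
      new ListTrainingJobsCommand({ MaxResults: 100, NextToken: nextToken })
    );
    for (const summary of page.TrainingJobSummaries ?? []) {
      if (summary.TrainingJobName) names.push(summary.TrainingJobName);
    }
    nextToken = page.NextToken;
  } while (nextToken);
  return names;
}
```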
[Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/renderuitemplatecommandoutput.html) RetryPipelineExecution [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/retrypipelineexecutioncommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/retrypipelineexecutioncommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/retrypipelineexecutioncommandoutput.html) Search [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/searchcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/searchcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/searchcommandoutput.html) SendPipelineExecutionStepFailure [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/sendpipelineexecutionstepfailurecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/sendpipelineexecutionstepfailurecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/sendpipelineexecutionstepfailurecommandoutput.html) SendPipelineExecutionStepSuccess [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/sendpipelineexecutionstepsuccesscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/sendpipelineexecutionstepsuccesscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/sendpipelineexecutionstepsuccesscommandoutput.html) StartEdgeDeploymentStage [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/startedgedeploymentstagecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/startedgedeploymentstagecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/startedgedeploymentstagecommandoutput.html) StartInferenceExperiment [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/startinferenceexperimentcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/startinferenceexperimentcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/startinferenceexperimentcommandoutput.html) StartMonitoringSchedule [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/startmonitoringschedulecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/startmonitoringschedulecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/startmonitoringschedulecommandoutput.html) StartNotebookInstance [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/startnotebookinstancecommand.html) / 
[Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/startnotebookinstancecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/startnotebookinstancecommandoutput.html) StartPipelineExecution [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/startpipelineexecutioncommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/startpipelineexecutioncommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/startpipelineexecutioncommandoutput.html) StopAutoMLJob [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/stopautomljobcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/stopautomljobcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/stopautomljobcommandoutput.html) StopCompilationJob [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/stopcompilationjobcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/stopcompilationjobcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/stopcompilationjobcommandoutput.html) StopEdgeDeploymentStage [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/stopedgedeploymentstagecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/stopedgedeploymentstagecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/stopedgedeploymentstagecommandoutput.html) StopEdgePackagingJob [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/stopedgepackagingjobcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/stopedgepackagingjobcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/stopedgepackagingjobcommandoutput.html) StopHyperParameterTuningJob [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/stophyperparametertuningjobcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/stophyperparametertuningjobcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/stophyperparametertuningjobcommandoutput.html) StopInferenceExperiment [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/stopinferenceexperimentcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/stopinferenceexperimentcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/stopinferenceexperimentcommandoutput.html) StopInferenceRecommendationsJob [Command API 
Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/stopinferencerecommendationsjobcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/stopinferencerecommendationsjobcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/stopinferencerecommendationsjobcommandoutput.html) StopLabelingJob [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/stoplabelingjobcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/stoplabelingjobcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/stoplabelingjobcommandoutput.html) StopMonitoringSchedule [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/stopmonitoringschedulecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/stopmonitoringschedulecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/stopmonitoringschedulecommandoutput.html) StopNotebookInstance [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/stopnotebookinstancecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/stopnotebookinstancecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/stopnotebookinstancecommandoutput.html) StopPipelineExecution [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/stoppipelineexecutioncommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/stoppipelineexecutioncommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/stoppipelineexecutioncommandoutput.html) StopProcessingJob [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/stopprocessingjobcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/stopprocessingjobcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/stopprocessingjobcommandoutput.html) StopTrainingJob [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/stoptrainingjobcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/stoptrainingjobcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/stoptrainingjobcommandoutput.html) StopTransformJob [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/stoptransformjobcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/stoptransformjobcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/stoptransformjobcommandoutput.html) UpdateAction 
[Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/updateactioncommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/updateactioncommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/updateactioncommandoutput.html) UpdateAppImageConfig [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/updateappimageconfigcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/updateappimageconfigcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/updateappimageconfigcommandoutput.html) UpdateArtifact [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/updateartifactcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/updateartifactcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/updateartifactcommandoutput.html) UpdateCodeRepository [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/updatecoderepositorycommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/updatecoderepositorycommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/updatecoderepositorycommandoutput.html) UpdateContext [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/updatecontextcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/updatecontextcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/updatecontextcommandoutput.html) UpdateDeviceFleet [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/updatedevicefleetcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/updatedevicefleetcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/updatedevicefleetcommandoutput.html) UpdateDevices [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/updatedevicescommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/updatedevicescommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/updatedevicescommandoutput.html) UpdateDomain [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/updatedomaincommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/updatedomaincommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/updatedomaincommandoutput.html) UpdateEndpoint [Command API 
Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/updateendpointcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/updateendpointcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/updateendpointcommandoutput.html) UpdateEndpointWeightsAndCapacities [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/updateendpointweightsandcapacitiescommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/updateendpointweightsandcapacitiescommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/updateendpointweightsandcapacitiescommandoutput.html) UpdateExperiment [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/updateexperimentcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/updateexperimentcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/updateexperimentcommandoutput.html) UpdateFeatureGroup [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/updatefeaturegroupcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/updatefeaturegroupcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/updatefeaturegroupcommandoutput.html) UpdateFeatureMetadata [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/updatefeaturemetadatacommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/updatefeaturemetadatacommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/updatefeaturemetadatacommandoutput.html) UpdateHub [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/updatehubcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/updatehubcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/updatehubcommandoutput.html) UpdateImage [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/updateimagecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/updateimagecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/updateimagecommandoutput.html) UpdateImageVersion [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/updateimageversioncommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/updateimageversioncommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/updateimageversioncommandoutput.html) UpdateInferenceExperiment [Command API 
Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/updateinferenceexperimentcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/updateinferenceexperimentcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/updateinferenceexperimentcommandoutput.html) UpdateModelCard [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/updatemodelcardcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/updatemodelcardcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/updatemodelcardcommandoutput.html) UpdateModelPackage [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/updatemodelpackagecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/updatemodelpackagecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/updatemodelpackagecommandoutput.html) UpdateMonitoringAlert [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/updatemonitoringalertcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/updatemonitoringalertcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/updatemonitoringalertcommandoutput.html) UpdateMonitoringSchedule [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/updatemonitoringschedulecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/updatemonitoringschedulecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/updatemonitoringschedulecommandoutput.html) UpdateNotebookInstance [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/updatenotebookinstancecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/updatenotebookinstancecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/updatenotebookinstancecommandoutput.html) UpdateNotebookInstanceLifecycleConfig [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/updatenotebookinstancelifecycleconfigcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/updatenotebookinstancelifecycleconfigcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/updatenotebookinstancelifecycleconfigcommandoutput.html) UpdatePipeline [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/updatepipelinecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/updatepipelinecommandinput.html) / 
[Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/updatepipelinecommandoutput.html) UpdatePipelineExecution [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/updatepipelineexecutioncommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/updatepipelineexecutioncommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/updatepipelineexecutioncommandoutput.html) UpdateProject [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/updateprojectcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/updateprojectcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/updateprojectcommandoutput.html) UpdateSpace [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/updatespacecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/updatespacecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/updatespacecommandoutput.html) UpdateTrainingJob [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/updatetrainingjobcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/updatetrainingjobcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/updatetrainingjobcommandoutput.html) UpdateTrial [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/updatetrialcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/updatetrialcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/updatetrialcommandoutput.html) UpdateTrialComponent [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/updatetrialcomponentcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/updatetrialcomponentcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/updatetrialcomponentcommandoutput.html) UpdateUserProfile [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/updateuserprofilecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/updateuserprofilecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/updateuserprofilecommandoutput.html) UpdateWorkforce [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/updateworkforcecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/updateworkforcecommandinput.html) / 
[Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/updateworkforcecommandoutput.html) UpdateWorkteam [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/classes/updateworkteamcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/updateworkteamcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sagemaker/interfaces/updateworkteamcommandoutput.html)
Principal type-schemes for functional programs

<NAME>

First published in POPL '82: Proceedings of the 9th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, ACM, pp. 207–212.

Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the ACM copyright notice and the title of its publication and date appear, and notice is given that copying is by permission of the Association for Computing Machinery. To copy otherwise, or to republish, requires a fee and/or specific permission.

© 1982 ACM 0-89791-065-6/82/001/0207 $00.75

[Re-keyed 12 October 2010 by <NAME> <EMAIL>.]
[The work of this author is supported by the Portuguese Instituto Nacional de Investigacao Cientifica.]

1 Introduction

This paper is concerned with the polymorphic type discipline of ML, which is a general purpose functional programming language, although it was first introduced as a metalanguage (whence its name) for constructing proofs in the LCF proof system [4]. The type discipline was studied in [5], where it was shown to be semantically sound, in a sense made precise below, but where one important question was left open: does the type-checking algorithm, or more precisely the type assignment algorithm (since types are assigned by the compiler, and need not be mentioned by the programmer), find the most general type possible for every expression and declaration? Here we answer the question in the affirmative, for the purely applicative part of ML. It follows immediately that it is decidable whether a program is well-typed, in contrast with the elegant and slightly more permissive type discipline of Coppo [1].

After several years of successful use of the language, both in LCF and other research, and in teaching to undergraduates, it has become important to answer these questions, particularly because the combination of flexibility (due to polymorphism), robustness (due to semantic soundness) and detection of errors at compile time has proved to be one of the strongest aspects of ML.

The discipline can be well illustrated by a small example. Let us define in ML the function map, which maps a given function over a given list; that is,

    map f [x1; ...; xn] = [f(x1), ..., f(xn)]

The required declaration is

    letrec map f s = if null s then nil
                     else cons(f(hd s)) (map f (tl s))

The type checker will deduce a type-scheme for map from existing type-schemes for null, nil, cons, hd and tl; the term type-scheme is appropriate since all these objects are polymorphic. In fact, from

    null : ∀α (α list → bool)
    nil  : ∀α (α list)
    cons : ∀α (α → (α list → α list))
    hd   : ∀α (α list → α)
    tl   : ∀α (α list → α list)

will be deduced

    map : ∀α ∀β ((α → β) → (α list → β list)).

Types are built from type constants (bool, ...) and type variables (α, β, ...) using type operators (such as the infixed → for functions and the postfixed list for lists); a type-scheme is a type with (possibly) quantification of type variables at the outermost.

Thus, the main result of this paper is that the type-scheme deduced for such a declaration (and more generally, for any ML expression) is a principal type-scheme, i.e. that any other type-scheme for the declaration is a generic instance of it. This is a generalisation of Hindley's result for Combinatory Logic [3].

ML may be contrasted with Algol 68, in which there is no polymorphism, and with Russell [2], in which parametric types appear explicitly as arguments to polymorphic functions. The generic types of Ada may be compared with type-schemes.
For simplicity, our definitions and results here are formulated for a skeletal language, since their extension to ML is a routine matter. For example, recursion is omitted since it can be introduced by simply adding the polymorphic fixed-point operator

    fix : ∀α ((α → α) → α)

and likewise for conditional expressions.

2 The language

Assuming a set Id of identifiers x, the language Exp of expressions e is given by the syntax

    e ::= x | e e′ | λx.e | let x = e in e′

(where parentheses may be used to avoid ambiguity). Only the last clause extends the λ-calculus. Indeed, for type-checking purposes every let expression could be eliminated (by replacing x by e everywhere in e′), except for the important consideration that in on-line use of ML declarations

    let x = e

are allowed, whose scope (e′) is the remainder of the on-line session. As illustrated in the introduction, it must be possible to assign type-schemes to the identifiers thus declared.

Note that types are absent from the language Exp. Assuming a set of type variables α and of primitive types ι, the syntax of types τ and of type-schemes σ is given by

    τ ::= α | ι | τ → τ
    σ ::= τ | ∀α σ

A type-scheme ∀α₁ … ∀αₙ τ (which we may write ∀α₁ … αₙ τ) has generic type variables α₁, …, αₙ. A monotype μ is a type containing no type variables.

3 Type instantiation

If S is a substitution of types for type variables, often written [τ₁/α₁, …, τₙ/αₙ] or [τᵢ/αᵢ], and σ is a type-scheme, then Sσ is the type-scheme obtained by replacing each free occurrence of αᵢ in σ by τᵢ, renaming the generic variables of σ if necessary. Then Sσ is called an instance of σ; the notions of substitution and instance extend naturally to larger syntactic constructs containing type-schemes.

By contrast, a type-scheme σ = ∀α₁ … αₘ τ has a generic instance σ′ = ∀β₁ … βₙ τ′ if τ′ = [τᵢ/αᵢ]τ for some types τ₁, …, τₘ and the βⱼ are not free in σ. In this case we shall write σ > σ′. Note that instantiation acts on free variables, while generic instantiation acts on bound variables. It follows that σ > σ′ implies Sσ > Sσ′.

4 Semantics

The semantic domain V for Exp is a complete partial order satisfying the following equations up to isomorphism, where Bᵢ is a cpo corresponding to primitive type ιᵢ:

    V = B₀ + B₁ + ⋯ + F + W   (disjoint sum)
    F = V → V                 (function space)
    W = {·}                   (error element)

To each monotype μ corresponds a subset V_μ of V, as detailed in [5]; if v ∈ V is in the subset for μ we write v : μ. Further, we write v : τ if v : μ for every monotype instance μ of τ, and we write v : σ if v : τ for every τ which is a generic instance of σ.

Now let Env = Id → V be the domain of environments η. The semantic function ℰ : Exp → Env → V is given in [5]. Using it, we wish to attach meaning to assertions of the form

    A ⊨ e : σ

where e ∈ Exp and A is a set of assumptions of the form x : σ′, x ∈ Id. If the assertion is closed, i.e. if A and σ contain no free type variables, then the sentence is said to hold iff, for every environment η, whenever η⟦x⟧ : σ′ for each member x : σ′ of A, it follows that ℰ⟦e⟧η : σ. Further, an assertion holds iff all its closed instances hold.

Thus, to verify the assertion

    x : α, f : ∀β (β → β) ⊨ (f x) : α

it is enough to verify it for every monotype μ in place of α. This example illustrates that free type variables in an assertion are implicitly quantified over the whole assertion, while explicit quantification in a type-scheme has restricted scope.

The remainder of this paper proceeds as follows. First we present an inference system for inferring valid assertions. Next we present an algorithm W for computing a type-scheme for any expression, under assumptions A. We then show that W is sound, in the sense that any type-scheme it derives is derivable in the inference system. Finally we show that W is complete, in the sense that any derivable type-scheme is an instance of that computed by W.
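[Editor's sketch.] Before turning to the inference system, it may help to fix a concrete representation of the skeletal language. The following Python sketch is ours, not the paper's: the syntax of Exp and of types and type-schemes from Sections 2-3 rendered as dataclasses (Python 3.10+ assumed); all identifiers are illustrative. The algorithm sketch after Section 6 builds on it.

```python
from __future__ import annotations
from dataclasses import dataclass

# Expressions:  e ::= x | e e' | \x.e | let x = e in e'
@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class App:
    fn: Exp
    arg: Exp

@dataclass(frozen=True)
class Lam:
    param: str
    body: Exp

@dataclass(frozen=True)
class Let:
    name: str
    defn: Exp
    body: Exp

Exp = Var | App | Lam | Let

# Types:  tau ::= alpha | iota | tau -> tau    Schemes:  sigma ::= forall a1..an . tau
@dataclass(frozen=True)
class TVar:
    name: str

@dataclass(frozen=True)
class TCon:
    name: str              # a primitive type iota, e.g. TCon("bool")

@dataclass(frozen=True)
class TFun:
    dom: Type
    cod: Type

Type = TVar | TCon | TFun

@dataclass(frozen=True)
class Scheme:              # forall vars . type; vars are the generic type variables
    vars: tuple[str, ...]
    type: Type

# The let-expression used as the derivation example in Section 5:
example = Let("i", Lam("x", Var("x")), App(Var("i"), Var("i")))
```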
5 Type inference

From now on we shall assume that A contains at most one assumption about each identifier x. A_x stands for the result of removing any assumption about x from A.

For assumptions A, expressions e and type-scheme σ we write

    A ⊢ e : σ

if this instance may be derived from the following inference rules:

    TAUT:   A ⊢ x : σ                          (x : σ ∈ A)

    INST:   A ⊢ e : σ
            ──────────                         (σ > σ′)
            A ⊢ e : σ′

    GEN:    A ⊢ e : σ
            ───────────                        (α not free in A)
            A ⊢ e : ∀α σ

    COMB:   A ⊢ e : τ′ → τ    A ⊢ e′ : τ′
            ──────────────────────────────
            A ⊢ (e e′) : τ

    ABS:    A_x ∪ {x : τ′} ⊢ e : τ
            ───────────────────────
            A ⊢ (λx.e) : τ′ → τ

    LET:    A ⊢ e : σ    A_x ∪ {x : σ} ⊢ e′ : τ
            ────────────────────────────────────
            A ⊢ (let x = e in e′) : τ

The following example of a derivation is organised in the paper as a tree, in which each node follows from those immediately above it by an inference rule; here it is rendered as a sequence of steps, each following from earlier ones:

    TAUT:        x : α ⊢ x : α
    ABS:         ⊢ (λx.x) : α → α
    GEN:         ⊢ (λx.x) : ∀α (α → α)
    TAUT:        i : ∀α (α → α) ⊢ i : ∀α (α → α)
    INST:        i : ∀α (α → α) ⊢ i : (α → α) → (α → α)
    TAUT, INST:  i : ∀α (α → α) ⊢ i : α → α
    COMB:        i : ∀α (α → α) ⊢ i i : α → α
    LET:         ⊢ (let i = (λx.x) in i i) : α → α

The following proposition, stating the semantic soundness of inference, can be proved by induction on e.

Proposition 1 (Soundness of inference). If A ⊢ e : σ then A ⊨ e : σ.

We will also require later the following two properties of the inference system.

Proposition 2. If S is a substitution and A ⊢ e : σ then SA ⊢ e : Sσ. Moreover, if there is a derivation of A ⊢ e : σ of height n then there is also a derivation of SA ⊢ e : Sσ of height less than or equal to n.

Proof. By induction on n.

Lemma 1. If σ > σ′ and A_x ∪ {x : σ′} ⊢ e : σ₀ then also A_x ∪ {x : σ} ⊢ e : σ₀.

Proof. We construct a derivation of A_x ∪ {x : σ} ⊢ e : σ₀ from that of A_x ∪ {x : σ′} ⊢ e : σ₀ by substituting each use of TAUT for x : σ′ with x : σ, followed by an INST step to derive x : σ′. Note that GEN steps remain valid since if α occurs free in σ then it also occurs free in σ′.

6 The type assignment algorithm W

The type inference system itself does not provide an easy method for finding, given A and e, a type-scheme σ such that A ⊢ e : σ. We now present an algorithm W for this purpose. In fact, W goes a step further. Given A and e, if W succeeds it finds a substitution S and a type τ, which are most general in a sense to be made precise below, such that

    SA ⊢ e : τ.

To define W we require the unification algorithm of Robinson [6].

Proposition 3 (Robinson). There is an algorithm U which, given a pair of types, either returns a substitution V or fails; further

(i) If U(τ, τ′) returns V, then V unifies τ and τ′, i.e. Vτ = Vτ′.
(ii) If S unifies τ and τ′ then U(τ, τ′) returns some V and there is another substitution R such that S = RV.

Moreover, V involves only variables in τ and τ′.

We also need to define the closure of a type τ with respect to assumptions A:

    Ā(τ) = ∀α₁, …, αₙ τ

where α₁, …, αₙ are the type variables occurring free in τ but not in A.

Algorithm W.

W(A, e) = (S, τ) where¹

(i) If e is x and there is an assumption x : ∀α₁, …, αₙ τ′ in A then S = Id² and τ = [βᵢ/αᵢ]τ′ where the βᵢ are new.

(ii) If e is e₁e₂ then let W(A, e₂) = (S₁, τ₂) and W(S₁A, e₂) = (S₂, τ₂) and U(S₂τ₁, τ₂ → β) = V where β is new; then S = VS₂S₁ and τ = Vβ.

(iii) If e is λx.e₁ then let β be a new type variable and W(A_x ∪ {x : β}, e₁) = (S₁, τ₁); then S = S₁ and τ = S₁β → τ₁.

(iv) If e is let x = e₁ in e₂ then let W(A, e₁) = (S₁, τ₂) and W(S₁A_x ∪ {x : S₁Ā(τ₁)}, e₂) = (S₂, τ₂); then S = S₂S₁ and τ = τ₂.

NOTE: When any of the conditions above is not met W fails.

¹ [There are obvious typographic errors in parts (ii) and (iv) which are in the original publication. I have left the correction of these as an easy exercise for the reader.]
² [Of course this is the identity (empty) substitution, not the set Id of identifiers.]
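[Editor's sketch.] The following is our own executable rendering of U and W in Python, not part of the paper, continuing the dataclass module sketched above. It performs the "easy exercise" of footnote 1 by writing cases (ii) and (iv) with the intended subscripts, represents substitutions as dicts from type-variable names to types, and includes the occurs check that Robinson's U needs for termination.

```python
import itertools
from typing import Dict

Subst = Dict[str, Type]                      # a finite substitution [t_i / a_i]
_names = map("b{}".format, itertools.count())

def fresh() -> TVar:
    """A brand-new type variable, as required by cases (i)-(iii)."""
    return TVar(next(_names))

def apply(s: Subst, t: Type) -> Type:
    """S tau: replace free type variables of tau according to S."""
    if isinstance(t, TVar):
        return s.get(t.name, t)
    if isinstance(t, TFun):
        return TFun(apply(s, t.dom), apply(s, t.cod))
    return t                                 # TCon: primitive types are fixed

def compose(s2: Subst, s1: Subst) -> Subst:
    """s2 after s1: apply(compose(s2, s1), t) == apply(s2, apply(s1, t))."""
    out = {a: apply(s2, t) for a, t in s1.items()}
    out.update({a: t for a, t in s2.items() if a not in out})
    return out

def ftv(t: Type) -> set:
    """Type variables occurring (free) in a type."""
    if isinstance(t, TVar):
        return {t.name}
    if isinstance(t, TFun):
        return ftv(t.dom) | ftv(t.cod)
    return set()

def unify(t1: Type, t2: Type) -> Subst:
    """Robinson's U: a most general unifier, or an exception on failure."""
    if isinstance(t1, TFun) and isinstance(t2, TFun):
        v1 = unify(t1.dom, t2.dom)
        v2 = unify(apply(v1, t1.cod), apply(v1, t2.cod))
        return compose(v2, v1)
    if isinstance(t1, TVar):
        if t1 == t2:
            return {}
        if t1.name in ftv(t2):
            raise TypeError("occurs check fails")
        return {t1.name: t2}
    if isinstance(t2, TVar):
        return unify(t2, t1)
    if t1 == t2:
        return {}
    raise TypeError(f"cannot unify {t1} and {t2}")

Assumptions = Dict[str, Scheme]

def apply_a(s: Subst, a: Assumptions) -> Assumptions:
    """SA: substitute in every assumption, avoiding each scheme's bound vars."""
    return {x: Scheme(sc.vars,
                      apply({k: v for k, v in s.items() if k not in sc.vars},
                            sc.type))
            for x, sc in a.items()}

def closure(a: Assumptions, t: Type) -> Scheme:
    """A-bar(tau): quantify the variables free in tau but not in A."""
    free_in_a = set()
    for sc in a.values():
        free_in_a |= ftv(sc.type) - set(sc.vars)
    return Scheme(tuple(sorted(ftv(t) - free_in_a)), t)

def W(a: Assumptions, e: Exp) -> tuple:
    if isinstance(e, Var):                   # case (i): instantiate with new vars
        sc = a[e.name]
        return {}, apply({v: fresh() for v in sc.vars}, sc.type)
    if isinstance(e, App):                   # case (ii), typos corrected
        s1, t1 = W(a, e.fn)
        s2, t2 = W(apply_a(s1, a), e.arg)
        b = fresh()
        v = unify(apply(s2, t1), TFun(t2, b))
        return compose(v, compose(s2, s1)), apply(v, b)
    if isinstance(e, Lam):                   # case (iii)
        b = fresh()
        s1, t1 = W({**a, e.param: Scheme((), b)}, e.body)
        return s1, TFun(apply(s1, b), t1)
    if isinstance(e, Let):                   # case (iv), typos corrected
        s1, t1 = W(a, e.defn)
        a1 = apply_a(s1, a)
        s2, t2 = W({**a1, e.name: closure(a1, t1)}, e.body)
        return compose(s2, s1), t2
    raise TypeError(f"unknown expression {e}")

# Demo: W({}, let i = \x.x in i i) yields a type b -> b for some fresh b,
# matching the derivation example of Section 5.
print(W({}, example))
```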
The following proposition shows that W meets our requirements.

Proposition 4 (Soundness of W). If W(A, e) succeeds with (S, τ) then there is a derivation of SA ⊢ e : τ.

Proof. By induction on e, using Proposition 2.

It follows that there is also a derivation of SA ⊢ e : SĀ(τ). We refer to SĀ(τ) as a type-scheme computed by W for e under A.

7 Completeness of W

Given A and e, we will call σ_P a principal type-scheme of e under assumptions A iff

(i) A ⊢ e : σ_P;
(ii) any other σ for which A ⊢ e : σ is a generic instance of σ_P.

Our main result, restricted to the simple case where A contains no free type variables, may be stated as follows: if A ⊢ e : σ for some σ, then W computes a principal type-scheme for e under A.

This is a direct corollary of the following general theorem, which is a stronger result suited to inductive proof.

Theorem (Completeness of W). Given A and e, let A′ be an instance of A and η a type-scheme such that

    A′ ⊢ e : η.

Then

(i) W(A, e) succeeds.
(ii) If W(A, e) = (S, τ) then, for some substitution R, A′ = RSA and RSĀ(τ) > η.

In fact, from the theorem one also derives as corollaries that it is decidable whether e has any type at all under the assumptions A, and that, if so, it has a principal type-scheme under A.

The detailed proofs of results in this paper, and related results, will appear in the first author's forthcoming Ph.D. thesis.

References

[1] <NAME>. An extended polymorphic type system for applicative languages. In Lecture Notes in Computer Science, volume 88, pages 192–204. Springer, 1980.
[2] <NAME> and <NAME>. Report on the programming language Russell. Technical Report TR-79-371, Computer Science Department, Cornell University, 1979.
[3] <NAME>. The principal type-scheme of an object in combinatory logic. Transactions of the AMS, 146:29–60, 1969.
[4] <NAME>, <NAME> and <NAME>. Edinburgh LCF. In Lecture Notes in Computer Science, volume 78. Springer, 1979.
[5] <NAME>. A theory of type polymorphism in programming. JCSS, 17(3):348–375, 1978.
[6] <NAME>. A machine-oriented logic based on the resolution principle. Journal of the ACM, 12(1):23–41, 1965.
nima 0.7.4 documentation

Welcome to the documentation for our project! Here you will find information on how to use our software, how to contribute to the project, and how to track changes and updates.

* Getting Started: If you are new to the project, start by reading our [README.md](https://github.com/darosio/nima/blob/main/README.md) file, which provides an overview of the project and its goals.
* Tracking Changes: We use a changelog file to track changes and updates to our project. You can find our [CHANGELOG.md](https://github.com/darosio/nima/blob/main/CHANGELOG.md) file here. Our changelog follows the [Keep a Changelog](https://keepachangelog.com/en/1.0.0/) format, and we tag releases according to the [Semantic Versioning](https://semver.org/) scheme.
* Our GitHub repository: [darosio/nima](https://github.com/darosio/nima)

Command-line tool
===

* nima
* bias
  + bias
  + dark
  + flat
  + mflat
  + plot

nima
---

Analyze a multichannel (default: ["G", "R", "C"]) TIFF time-lapse stack.

TIFFSTK : Image file.
CHANNELS : Channel names.

Save: (1) representation of image channels and segmentation `BN_dim.png`, (2) plot of ratios and channel intensities for each label and bg vs. time `BN_meas.png`, (3) table of bg values `*/bg.csv`, (4) representation of bg image and histogram at all time points for each channel `BN/bg-[C1,C2,⋯]-method.pdf`, and for each label: (5) table of ratios and measured properties `BN/label[1,2,⋯].csv` and (6) ratio images `BN/label[1,2,⋯]_r[cl,pH].tif`.

```
nima [OPTIONS] TIFFSTK [CHANNELS]...
```

Options

--version
Show the version and exit.

--silent
Do not print; verbose=0.

-o, --output <output>
Output path [default: ./nima/].

--hotpixels
Median filter (rad=0.5) to remove hot pixels.

-f, --flat <flat_f>
Flat for shading correction.

-d, --dark <dark_f>
Dark for shading correction.

--bg-method <bg_method>
Background estimation algorithm [default: li_adaptive]. Options: li_adaptive | entropy | arcsinh | adaptive | li_li

--bg-downscale <bg_downscale>
Binning Y X.

--bg-radius <bg_radius>
Radius for entropy or arcsinh methods [def: 10].

--bg-adaptive-radius <bg_adaptive_radius>
Radius for adaptive methods [def: X/2].

--bg-percentile <bg_percentile>
Percentile for entropy or arcsinh methods [def: 10].

--bg-percentile-filter <bg_percentile_filter>
Percentile filter for arcsinh method [def: 80].

--fg-method <fg_method>
Segmentation algorithm [default: yen]. Options: yen | li

--min-size <min_size>
Minimum size of labeled objects [def: 2000].

--clear-border
Remove labels touching image borders.

--wiener
Wiener filter before segmentation.

--watershed
Watershed binary mask (to label cells).
--randomwalk
Randomwalk binary mask (to label cells).

--image-ratios, --no-image-ratios
Compute ratio images? [default: True]

--ratio-median-radii <ratio_median_radii>
Median filter ratio images with radii [def: (7,3)].

--channels-cl <channels_cl>
Channels for Cl ratio [default: C/R].

--channels-ph <channels_ph>
Channels for pH ratio [default: G/C].

Arguments

TIFFSTK
Required argument

CHANNELS
Optional argument(s)

bias
---

Compute bias, dark and flat.

```
bias [OPTIONS] COMMAND [ARGS]...
```

Options

--version
Show the version and exit.

-o, --output <output>
Output path [default: *.tif, *.png].

### bias

Compute BIAS frame and estimate read noise.

FPATH: the bias stack (Light Off - 0 acquisition time).

Output:
* .tif BIAS image = median projection
* .png plot (histograms, median, projection, hot pixels)
* [.csv coordinates and values of hot pixels] if detected

```
bias bias [OPTIONS] FPATH
```

Arguments

FPATH
Required argument

### dark

Compute DARK.

FPATH: the bias stack (Light Off - Long acquisition time).

```
bias dark [OPTIONS] FPATH
```

Options

--bias <bias>

--time <time>

Arguments

FPATH
Required argument

### flat

Flat from (.tf8) file.

```
bias flat [OPTIONS] FPATH
```

Options

--bias <bias>

Arguments

FPATH
Required argument

### mflat

Flat from a collection of (.tif) files.

```
bias mflat [OPTIONS] GLOBPATH
```

Options

--bias <bias>

Arguments

GLOBPATH
Required argument

### plot

Plot profiles of 2D (Bias-Flat) image.

```
bias plot [OPTIONS] FPATH
```

Arguments

FPATH
Required argument

Tutorials
===

This part of the documentation guides you through all of the library's usage patterns.

API references
===

This part of the documentation lists the full API reference of all public classes and functions.

nima.generat
---

Generate mock images.

**Functions:**

| | |
| --- | --- |
| `gen_bias`([nrows, ncols]) | Generate a bias frame. |
| `gen_flat`([nrows, ncols]) | Generate a flat frame. |
| `gen_object`([nrows, ncols, min_radius, ...]) | Mimic <http://scipy-lectures.org/packages/scikit-image/index.html>. |
| `gen_objs`([max_fluor, max_n_obj]) | Generate a frame with ellipsoid objects; random n, shape, position and I. |
| `gen_frame`(objs[, bias, flat, dark, sky, ...]) | Simulate an acquired frame [bias + noise + dark + flat * (sky + obj)]. |

nima.generat.gen_bias(*nrows=128*, *ncols=128*)

Generate a bias frame.

Return type: `ndarray`[`Any`, `dtype`[`float64`]]

Parameters:
* **nrows** (*int*) –
* **ncols** (*int*) –

nima.generat.gen_flat(*nrows=128*, *ncols=128*)

Generate a flat frame.
Return type: `ndarray`[`Any`, `dtype`[`float64`]]

Parameters:
* **nrows** (*int*) –
* **ncols** (*int*) –

nima.generat.gen_object(*nrows=128*, *ncols=128*, *min_radius=6*, *max_radius=12*)

Mimic <http://scipy-lectures.org/packages/scikit-image/index.html>.

Return type: `ndarray`[`Any`, `dtype`[`bool_`]]

Parameters:
* **nrows** (*int*) –
* **ncols** (*int*) –
* **min_radius** (*int*) –
* **max_radius** (*int*) –

nima.generat.gen_objs(*max_fluor=20*, *max_n_obj=8*, ***kwargs*)

Generate a frame with ellipsoid objects; random n, shape, position and I.

Return type: `ndarray`[`Any`, `dtype`[`float64`]]

Parameters:
* **max_fluor** (*float*) –
* **max_n_obj** (*int*) –
* **kwargs** (*int*) –

nima.generat.gen_frame(*objs*, *bias=None*, *flat=None*, *dark=0*, *sky=2*, *noise_sd=1*)

Simulate an acquired frame [bias + noise + dark + flat * (sky + obj)].

Return type: `ndarray`[`Any`, `dtype`[`float64`]]

Parameters:
* **objs** (*ndarray[Any, dtype[float64]]*) –
* **bias** (*ndarray[Any, dtype[float64]] | None*) –
* **flat** (*ndarray[Any, dtype[float64]] | None*) –
* **dark** (*float*) –
* **sky** (*float*) –
* **noise_sd** (*float*) –
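The generators above describe a complete simulation path for one frame. The following short sketch (ours, not from the docs) wires them together, using only the documented signatures and defaults:

```python
from nima.generat import gen_bias, gen_flat, gen_frame, gen_objs

objs = gen_objs(max_fluor=20, max_n_obj=8)   # ellipsoid objects; random n, shape, position
bias = gen_bias(nrows=128, ncols=128)
flat = gen_flat(nrows=128, ncols=128)

# frame = bias + noise + dark + flat * (sky + obj), per the gen_frame docstring
frame = gen_frame(objs, bias=bias, flat=flat, dark=0, sky=2, noise_sd=1)
print(frame.shape, frame.dtype)              # a 2-D float64 array (128x128 with these defaults)
```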
nima.nima
---

Main library module.

Contains functions for the analysis of multichannel timelapse images. It can be used to apply dark and flat correction; segment cells from bg; label cells; obtain statistics for each label; and compute ratios and ratio images between channels. (A usage sketch follows the reference entries below.)

**Functions:**

| | |
| --- | --- |
| `myhist`(im[, bins, log, nf]) | Plot image intensity as histogram. |
| `read_tiff`(fp, channels) | Read multichannel tif timelapse image. |
| `d_show`(d_im, **kws) | Imshow for dictionary of image (d_im). |
| `d_median`(d_im) | Median filter on dictionary of image (d_im). |
| `d_shading`(d_im, dark, flat[, clip]) | Shading correction on d_im. |
| `bg`(im[, kind, perc, radius, ...]) | Bg segmentation. |
| `d_bg`(d_im[, downscale, kind, clip]) | Bg segmentation for d_im. |
| `d_mask_label`(d_im[, min_size, channels, ...]) | Label cells in d_im. |
| `d_ratio`(d_im[, name, channels, radii]) | Ratio image between 2 channels in d_im. |
| `d_meas_props`(d_im[, channels, channels_cl, ...]) | Calculate pH and cl ratios and labelprops. |
| `d_plot_meas`(bgs, meas, channels) | Plot meas object. |
| `plt_img_profile`(img[, title, hpix, vmin, vmax]) | Summary graphics for Flat-Bias images. |
| `plt_img_profile_2`(img[, title]) | Summary graphics for Flat-Bias images. |
| `hotpixels`(bias[, n_sd]) | Identify hot pixels in a bias-dark frame. |
| `correct_hotpixel`(img, y, x) | Correct hot pixels in a frame. |

nima.nima.myhist(*im*, *bins=60*, *log=False*, *nf=False*)

Plot image intensity as histogram.

.. note:: Consider deprecation.

Return type: `None`

Parameters:
* **im** (*ImArray*) –
* **bins** (*int*) –
* **log** (*bool*) –
* **nf** (*bool*) –

nima.nima.read_tiff(*fp*, *channels*)

Read multichannel tif timelapse image.

Parameters:
* **fp** (*Path*) – File (TIF format) to be opened.
* **channels** (*list of string*) – List a name for each channel.

Return type: `tuple`[`dict`[`str`, `TypeVar`(`ImArray`, `ndarray`[`Any`, `dtype`[`int64`]], `ndarray`[`Any`, `dtype`[`float64`]], `ndarray`[`Any`, `dtype`[`bool_`]])], `int`, `int`]

Returns:
* **d_im** (*dict*) – Dictionary of images. Each keyword represents a channel, named according to the channels string list.
* **n_channels** (*int*) – Number of channels.
* **n_times** (*int*) – Number of timepoints.

Examples

```
>>> d_im, n_channels, n_times = read_tiff('tests/data/1b_c16_15.tif', channels=['G', 'R', 'C'])
>>> n_channels, n_times
(3, 4)
```

nima.nima.d_show(*d_im*, ***kws*)

Imshow for dictionary of image (d_im). Supports plt.imshow kws.

Return type: `Figure`

Parameters:
* **d_im** (*dict[str, ImArray]*) –
* **kws** (*Any*) –

nima.nima.d_median(*d_im*)

Median filter on dictionary of image (d_im). Same as skimage.morphology.disk(1) and as the median filter of Fiji/ImageJ with radius=0.5.

Parameters:
* **d_im** (*dict of images*) –

Returns: **d_im** – preserves dtype of input

Return type: dict of images

nima.nima.d_shading(*d_im*, *dark*, *flat*, *clip=True*)

Shading correction on d_im: subtract dark, then divide by flat. Works with either a single flat image or a (2D) d_flat dictionary. A dark is also needed for each channel because it can differ when different acquisition times are used.

Parameters:
* **d_im** (`dict`[`str`, `TypeVar`(`ImArray`, `ndarray`[`Any`, `dtype`[`int64`]], `ndarray`[`Any`, `dtype`[`float64`]], `ndarray`[`Any`, `dtype`[`bool_`]])]) – Dictionary of images.
* **dark** (*2D image or (2D) d_im*) – Dark image.
* **flat** (*2D image or (2D) d_im*) – Flat image.
* **clip** (*bool*) – Boolean for clipping values >= 0.

Returns: Corrected d_im.

Return type: d_im

nima.nima.bg(*im*, *kind='arcsinh'*, *perc=10.0*, *radius=10*, *adaptive_radius=None*, *arcsinh_perc=80*)

Bg segmentation. Returns the median, the whole vector of bg pixel values, and figures (in a list).

Parameters:
* **im** (*Im*) – An image stack.
* **kind** (*str*) – Method {'arcsinh', 'entropy', 'adaptive', 'li_adaptive', 'li_li'} used for the segmentation.
* **perc** (*float*) – Perc % of max-min (default=10) for thresholding in the *entropy* and *arcsinh* methods.
* **radius** (*int, optional*) – Radius (default=10) used in the *entropy* and *arcsinh* (percentile_filter) methods.
* **adaptive_radius** (*int, optional*) – Size for the adaptive filter of skimage (default is im.shape[1]/2).
* **arcsinh_perc** (*int, optional*) – Perc (default=80) used in the percentile_filter (scipy) within the *arcsinh* method.

Return type: `tuple`[`float`, `ndarray`[`Any`, `dtype`[`int64`]] | `ndarray`[`Any`, `dtype`[`float64`]], `list`[`Figure`]]

Returns:
* **median** (*float*) – Median of the bg masked pixels.
* **pixel_values** (*list*) – Values of all bg masked pixels.
* **figs** (*{[f1], [f1, f2]}*) – List of fig(s). Only the entropy and arcsinh methods have 2 elements.

nima.nima.d_bg(*d_im*, *downscale=None*, *kind='li_adaptive'*, *clip=True*)

Bg segmentation for d_im.

Parameters:
* **d_im** (*d_im*) – desc
* **downscale** (*{None, tuple}*) – Tuple; (x, y) downscale factors for rows, cols.
* **kind** (*str*) – Bg method among {'li_adaptive', 'arcsinh', 'entropy', 'adaptive', 'li_li'}.
* **clip** (*bool*) – Boolean (default=True) for clipping values >= 0.
Return type: `tuple`[`dict`[`str`, `TypeVar`(`Im`, `ndarray`[`Any`, `dtype`[`int64`]], `ndarray`[`Any`, `dtype`[`float64`]])], `DataFrame`, `dict`[`str`, `list`[`list`[`Figure`]]], `dict`[`str`, `list`[`ndarray`[`Any`, `dtype`[`int64`]] | `ndarray`[`Any`, `dtype`[`float64`]]]]]

Returns:
* **d_cor** (*d_im*) – Dictionary of images subtracted for the estimated bg.
* **bgs** (*pd.DataFrame*) – Median of the estimated bg; columns for channels and index for time points.
* **figs** (*list*) – List of (lists of) figures.
* **d_bg_values** (*dict*) – Background values; keys are channels, each containing a list (for each time point) of lists of values.

nima.nima.d_mask_label(*d_im*, *min_size=640*, *channels=('C', 'G', 'R')*, *threshold_method='yen'*, *wiener=False*, *watershed=False*, *clear_border=False*, *randomwalk=False*)

Label cells in d_im. Adds two keys, mask and label.

Performs plane-by-plane (2D image):
* geometric average of all channels;
* optional wiener filter (3,3);
* mask using threshold_method;
* remove objects smaller than **min_size**;
* binary closing;
* optionally remove any object on borders;
* label each ROI;
* optionally perform watershed on labels.

Parameters:
* **d_im** (*d_im*) – desc
* **min_size** (*type, optional*) – Objects smaller than min_size (default=640 pixels) are discarded from the mask.
* **channels** (*list of string*) – List a name for each channel.
* **threshold_method** (*{'yen', 'li'}*) – Method for thresholding (skimage) the geometric average plane-by-plane.
* **wiener** (*bool, optional*) – Boolean (default=False) for wiener filter.
* **watershed** (*bool, optional*) – Boolean (default=False) for watershed on labels.
* **clear_border** (*bool, optional*) – Boolean (default=False) for removing objects touching the image (2D) border.
* **randomwalk** (*bool, optional*) – Boolean (default=False) for using random_walker in place of the watershed (skimage) algorithm after ndimage.distance_transform_edt() calculation.

Return type: `None`

Notes

Side effect: adds a 'label' key to the d_im.

nima.nima.d_ratio(*d_im*, *name='r_cl'*, *channels=('C', 'R')*, *radii=(7, 3)*)

Ratio image between 2 channels in d_im. Adds a masked (bg=0; fg=ratio) median-filtered ratio for 2 channels, so d_im must (already) contain keys for mask and the two channels. After ratio computation any -inf, nan and inf values are replaced with 0. These values should be generated (upon ratio) only in the bg. You can check: r_cl[d_im['labels']==4].min()

Parameters:
* **d_im** (*d_im*) – desc
* **name** (*str*) – Name (default='r_cl') for the new key.
* **channels** (*list of string*) – Names (default=['C', 'R']) for the two channels [Numerator, Denominator].
* **radii** (*tuple of int, optional*) – Each element contains a radius value for a median filter cycle.

Return type: `None`

Notes

Adds a key named "name", containing the calculated ratio, to d_im.

nima.nima.d_meas_props(*d_im*, *channels=('C', 'G', 'R')*, *channels_cl=('C', 'R')*, *channels_ph=('G', 'C')*, *ratios_from_image=True*, *radii=None*)

Calculate pH and cl ratios and labelprops.

Parameters:
* **d_im** (*d_im*) – desc
* **channels** (*list of string*) – All d_im channels (default=['C', 'G', 'R']).
* **channels_cl** (*tuple of string*) – Names (default=('C', 'R')) of the numerator and denominator channels for cl ratio.
* **channels_ph** (*tuple of string*) – Names (default=('G', 'C')) of the numerator and denominator channels for pH ratio.
* **ratios_from_image** (*bool, optional*) – Boolean (default=True) for executing d_ratio, i.e. computing ratio images.
* **radii** (*(int, int), optional*) – Radii of the optional median average performed on ratio images.

Return type: `tuple`[`dict`[`int32`, `DataFrame`], `dict`[`str`, `list`[`list`[`Any`]]]]

Returns:
* **meas** (*dict of pd.DataFrame*) – For each label in labels: {'label': df}. DataFrame columns are: mean intensity of all channels, 'equivalent_diameter', 'eccentricity', 'area', ratios from the mean intensities and optionally ratios from the ratio-image.
* **pr** (*dict of list of list*) – For each channel: {'channel': [props]} i.e. {'channel': [time][label]}.
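The functions above chain into the analysis the module intro describes. The following sketch is ours, not from the docs; it uses only the documented signatures, defaults and side effects, with the TIFF path taken from the read_tiff example:

```python
from nima.nima import (d_bg, d_mask_label, d_meas_props, d_ratio, read_tiff)

channels = ["G", "R", "C"]
d_im, n_channels, n_times = read_tiff("tests/data/1b_c16_15.tif", channels)

# Optional shading correction would go here: d_im = d_shading(d_im, dark, flat)
d_cor, bgs, figs, d_bg_values = d_bg(d_im, kind="li_adaptive")   # bg subtraction
d_mask_label(d_cor, min_size=640, channels=("C", "G", "R"))      # adds mask/label keys
d_ratio(d_cor, name="r_cl", channels=("C", "R"), radii=(7, 3))   # adds ratio image
meas, pr = d_meas_props(d_cor, channels=("C", "G", "R"),
                        channels_cl=("C", "R"), channels_ph=("G", "C"))
```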
nima.nima.d_plot_meas(*bgs*, *meas*, *channels*)

Plot meas object. Plots r_pH, r_cl, mean intensity for each channel and estimated bg over timepoints for each label (color coded).

Parameters:
* **bgs** (*pd.DataFrame*) – Estimated bg returned from d_bg().
* **meas** (*dict of pd.DataFrame*) – meas object returned from d_meas_props().
* **channels** (*list of string*) – All bgs and meas channels (default=['C', 'G', 'R']).

Returns: **fig** – Figure.

Return type: plt.Figure

nima.nima.plt_img_profile(*img*, *title=None*, *hpix=None*, *vmin=None*, *vmax=None*)

Summary graphics for Flat-Bias images.

Parameters:
* **img** (*ImArray*) – Image of Flat or Bias.
* **title** (*Optional[str]*) – Title of the figure.
* **hpix** (*pd.DataFrame, optional*) – Identified hot pixels (as an empty or non-empty df).
* **vmin** (*float, optional*) – Minimum value.
* **vmax** (*float, optional*) – Maximum value.

Return type: plt.Figure

nima.nima.plt_img_profile_2(*img*, *title=None*)

Summary graphics for Flat-Bias images.

Parameters:
* **img** (*ImArray*) – Image of Flat or Bias.
* **title** (*Optional[str]*) – Title of the figure.

Return type: plt.Figure

nima.nima.hotpixels(*bias*, *n_sd=20*)

Identify hot pixels in a bias-dark frame. After identifying the first outliers, the masked average and SD are recomputed until convergence.

Parameters:
* **bias** (*ImArray*) – Usually the median over a stack of 100 frames.
* **n_sd** (*int*) – Number of SD above the mean (masked of hot pixels) value.

Returns: y, x positions and values of hot pixels.

Return type: pd.DataFrame

nima.nima.correct_hotpixel(*img*, *y*, *x*)

Correct hot pixels in a frame. Substitutes the indicated position y, x with the median value of the 4 neighboring pixels.

Parameters:
* **img** (*ImArray*) – Frame (2D) image.
* **y** (*int | list(int)*) – y-coordinate(s).
* **x** (*int | list(int)*) – x-coordinate(s).

Return type: `None`
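A short sketch (ours, not from the docs) of detecting and repairing hot pixels, here on a mock bias frame; the DataFrame column names "y" and "x" are an assumption, since the docs only promise "y, x positions and values of hot pixels":

```python
from nima.generat import gen_bias
from nima.nima import correct_hotpixel, hotpixels

bias = gen_bias(nrows=128, ncols=128)   # stand-in for a real median bias frame
hpix = hotpixels(bias, n_sd=20)         # DataFrame of y, x positions and values
if not hpix.empty:
    # Assumed column names; adjust to the actual DataFrame layout.
    correct_hotpixel(bias, list(hpix["y"]), list(hpix["x"]))  # in-place fix
```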
Development references
===

Descriptions
---

### nima

Here is the UML for the implementation class.

Development
---

You need the following requirements:

* `hatch` for test automation and package dependency management. If you don't have hatch, you can use `pipx run hatch` to run it without installing, or `pipx install hatch`. Dependencies and their versions are specified in the pyproject.toml file and updated in GitHub with Dependabot. You can run `hatch env show` to list the available environments and scripts.

```
hatch env create
hatch run init # init repo with pre-commit hooks
hatch run lint
hatch run tests.py3.11:all
```

Hatch handles everything for you, including setting up a temporary virtual environment for each run.

* `pre-commit` for all style and consistency checking. While you can run it with nox, this is such an important tool that it deserves to be installed on its own. If pre-commit fails while pushing upstream, stage the changes, amend the previous commit (Commit Extend), and push again.

`pip` and `hatch` are pinned in .github/workflows/constraints.txt for consistency with CI/CD.

If you like, you can install hatch and the required dependencies on Arch Linux with:

```
pacman -S python-hatch python-hyperlink python-httpx
```

While `pre-commit` is listed as a 'dev' dependency, you also have the option to install it globally using pipx. This can be done with the following command:

```
pipx install pre-commit
```

### Setting up a development environment with direnv

```
echo "layout hatch" > .envrc
hatch run init
```

### Setting up a development environment manually

You can set up a development environment by running:

```
python3 -m venv .venv
source ./.venv/bin/activate
pip install -v -e .[dev,tests,docs]
```

With direnv set up, you can use [Jupyter](https://jupyter.org/) during development:

```
jupyter notebook
```

And only in case you need a system-wide, easily accessible kernel:

```
python -m ipykernel install --user --name="nima"
```

### Testing and coverage

Use pytest to run the unit tests:

```
pytest
```

Use `coverage` to generate coverage reports:

```
coverage run --branch -p -m pytest
```

Or use hatch:

```
hatch run tests:all
(hatch run tests:cov)
```

### Building docs

You can build the docs using:

```
hatch run docs
```

You can see a preview with:

```
hatch run docserve
```

When needed (e.g. after API updates):

```
sphinx-apidoc -f -o docs/api/ src/nima/
```

### Bump and releasing

To bump the version and upload the build to test.pypi:

```
hatch run bump
hatch run bump "--increment PATCH" "--files-only" \
["--no-verify" to bypass pre-commit and commit-msg hooks]
git push
```

while to update only the CHANGELOG.md file:

```
hatch run ch
```

The release will occur automatically after pushing. Otherwise:

```
pipx run --spec commitizen cz bump --changelog-to-stdout --files-only \
(--prerelease alpha) --increment MINOR
```

To keep a clean development history, use branches and PRs:

```
gh pr create --fill
gh pr merge --squash --delete-branch [-t "fix|ci|feat: msg"]
```

### Configuration files

Manually updated pinned dependencies for CI/CD:

* .github/workflows/constraints.txt (testing dependabot)

Configuration files:

* pre-commit configured in .pre-commit-config.yaml;
* bandit (sys) configured in bandit.yml;
* pylint (sys) configured in pyproject.toml;
* isort (sys) configured in pyproject.toml;
* black configured in pyproject.toml (pinned in pre-commit);
* ruff configured in pyproject.toml (pinned in pre-commit);
* darglint configured in .darglint (pinned in pre-commit);
* codespell configured in .codespellrc (pinned in pre-commit);
* coverage configured in pyproject.toml (tests deps);
* mypy configured in pyproject.toml (tests deps);
* commitizen in pyproject.toml (dev deps and pinned in pre-commit).
While the exact dependencies and their versions are specified in the pyproject.toml file and updated in GitHub with Dependabot, a complete list of all the required packages and their versions (including transitive dependencies) can be generated by pip-deepfreeze in the requirements-[dev,docs,tests].txt files. This makes it possible to keep a clear and detailed record of the project's dependency requirements, which helps maintain a stable and predictable environment across different setups.

Other manual actions:

```
pylint src/ tests/
bandit -r src/
```

Contributing
---

Contributions are welcome, and they are greatly appreciated! Every little bit helps, and credit will always be given.

You can contribute in many ways:

### Report Bugs

Report bugs at [darosio/nima#issues](https://github.com/darosio/nima/issues).

If you are reporting a bug, please include:

* Your operating system name and version.
* Any details about your local setup that might be helpful in troubleshooting.
* Detailed steps to reproduce the bug.

### Fix Bugs

Look through the GitHub issues for bugs. Anything tagged with "bug" is open to whoever wants to implement it.

### Implement Features

Look through the GitHub issues for features. Anything tagged with "feature" is open to whoever wants to implement it.

### Write Documentation

NImA could always use more documentation, whether as part of the official NImA docs, in docstrings, or even on the web in blog posts, articles, and such.

### Submit Feedback

The best way to send feedback is to file an issue at [darosio/nima#issues](https://github.com/darosio/nima/issues).

If you are proposing a feature:

* Explain in detail how it would work.
* Keep the scope as narrow as possible, to make it easier to implement.
* Remember that this is a volunteer-driven project, and that contributions are welcome :)
Android.pdf
free_programming_book
Unknown
Android Notes for Professionals

1000+ pages of professional hints and tricks

GoalKicker.com Free Programming Books

Disclaimer: This is an unofficial free book created for educational purposes and is not affiliated with official Android group(s) or company(s). All trademarks and registered trademarks are the property of their respective owners.

Contents

About
Chapter 1: Getting started with Android
Chapter 2: Android Studio
Chapter 3: Instant Run in Android Studio
Chapter 4: TextView
Chapter 5: AutoCompleteTextView
Chapter 6: Autosizing TextViews
Chapter 7: ListView
Chapter 8: Layouts
Chapter 9: ConstraintLayout
Chapter 10: TextInputLayout
Chapter 11: CoordinatorLayout and Behaviors
Chapter 12: TabLayout
Chapter 13: ViewPager
Chapter 14: CardView
Chapter 15: NavigationView
Chapter 16: RecyclerView
Chapter 17: RecyclerView Decorations
Chapter 18: RecyclerView onClickListeners
Chapter 19: RecyclerView and LayoutManagers
Chapter 20: Pagination in RecyclerView
Chapter 21: ImageView
Chapter 22: VideoView
Chapter 23: Optimized VideoView
Chapter 24: WebView
Chapter 25: SearchView
Chapter 26: BottomNavigationView
Chapter 27: Canvas drawing using SurfaceView
Chapter 28: Creating Custom Views
Chapter 29: Getting Calculated View Dimensions
Chapter 30: Adding a FuseView to an Android Project
Chapter 31: Supporting Screens With Different Resolutions, Sizes
Chapter 32: ViewFlipper
Chapter 33: Design Patterns
Chapter 34: Activity
Chapter 35: Activity Recognition
Chapter 36: Split Screen / Multi-Screen Activities
Chapter 37: Material Design
Chapter 38: Resources
Chapter 39: Data Binding Library
Chapter 40: SharedPreferences
Chapter 41: Intent
Chapter 42: Fragments
Chapter 43: Button
Chapter 44: Emulator
Chapter 45: Service
Chapter 46: The Manifest File
Chapter 47: Gradle for Android
Chapter 48: FileIO with Android
Chapter 49: FileProvider
Chapter 50: Storing Files in Internal & External Storage
Chapter 51: Zip file in android
Chapter 52: Unzip File in Android
Chapter 53: Camera and Gallery
Chapter 54: Camera 2 API
Chapter 55: Fingerprint API in android
Chapter 56: Bluetooth and Bluetooth LE API
Chapter 57: Runtime Permissions in API-23 +
Chapter 58: Android Places API
Chapter 59: Android NDK
Chapter 60: DayNight Theme (AppCompat v23.2 / API 14+)
Chapter 61: Glide
Chapter 62: Dialog
Chapter 63: Enhancing Alert Dialogs
Chapter 64: Animated AlertDialog Box
Chapter 65: GreenDAO
Chapter 66: Tools Attributes
Chapter 67: Formatting Strings
Chapter 68: SpannableString
Chapter 69: Notifications
Chapter 70: AlarmManager
Chapter 71: Handler
Chapter 72: BroadcastReceiver
Chapter 73: UI Lifecycle
Chapter 74: HttpURLConnection
Chapter 75: Callback URL
Chapter 76: Snackbar
Chapter 77: Widgets
Chapter 78: Toast
Chapter 79: Create Singleton Class for Toast Message
Chapter 80: Interfaces
Chapter 81: Animators
Chapter 82: Location
Chapter 83: Theme, Style, Attribute
Chapter 84: MediaPlayer
Chapter 85: Android Sound and Media
Chapter 86: MediaSession
Chapter 87: MediaStore
Chapter 88: Multidex and the Dex Method Limit
Chapter 89: Data Synchronization with Sync Adapter
Chapter 90: PorterDuff Mode
Chapter 91: Menu
Chapter 92: Picasso
Chapter 93: RoboGuice
Chapter 94: ACRA
Chapter 95: Parcelable
Chapter 96: Retrofit2
Chapter 97: ButterKnife
Chapter 98: Volley
Chapter 99: Date and Time Pickers
Chapter 100: Localized Date/Time in Android
Chapter 101: Time Utils
Chapter 102: In-app Billing
Chapter 103: FloatingActionButton
Chapter 104: Touch Events
Chapter 105: Handling touch and motion events
Chapter 106: Detect Shake Event in Android
Chapter 107: Hardware Button Events/Intents (PTT, LWP, etc.)
Chapter 108: GreenRobot EventBus
Chapter 109: Otto Event Bus
Chapter 110: Vibration
Chapter 111: ContentProvider
Chapter 112: Dagger 2
Chapter 113: Realm
Chapter 114: Android Versions
Chapter 115: Wi-Fi Connections
Chapter 116: SensorManager
Chapter 117: ProgressBar
Chapter 118: Custom Fonts
Chapter 119: Getting system font names and using the fonts
Chapter 120: Text to Speech(TTS)
Chapter 121: Spinner
Chapter 122: Data Encryption/Decryption
Chapter 123: OkHttp
Chapter 124: Handling Deep Links
Chapter 125: Crash Reporting Tools
Chapter 126: Check Internet Connectivity
Chapter 127: Creating your own libraries for Android applications
Chapter 128: Device Display Metrics
Chapter 129: Building Backwards Compatible Apps
Chapter 130: Loader
Chapter 131: ProGuard - Obfuscating and Shrinking your code
Chapter 132: Typedef Annotations: @IntDef, @StringDef
Chapter 133: Capturing Screenshots
Chapter 134: MVP Architecture
Chapter 135: Orientation Changes
Chapter 136: Xposed
Chapter 137: PackageManager
Chapter 138: Gesture Detection
Chapter 139: Doze Mode
Chapter 140: Colors
Chapter 141: Keyboard
Chapter 142: RenderScript
Chapter 143: Fresco
Chapter 144: Swipe to Refresh
Chapter 145: Creating Splash screen
Chapter 146: IntentService
Chapter 147: Implicit Intents
Chapter 148: Publish to Play Store
Chapter 149: Universal Image Loader
Chapter 150: Image Compression
Chapter 151: 9-Patch Images
Chapter 152: Email Validation
Chapter 153: Bottom Sheets
Chapter 154: EditText
Chapter 155: Speech to Text Conversion
Chapter 156: Installing apps with ADB
Chapter 157: Count Down Timer
Chapter 158: Barcode and QR code reading
Chapter 159: Android PayPal Gateway Integration
Chapter 160: Drawables
Chapter 161: TransitionDrawable
Chapter 162: Vector Drawables
Chapter 163: VectorDrawable and AnimatedVectorDrawable
Chapter 164: Port Mapping using Cling library in Android
Chapter 165: Creating Overlay (always-on-top) Windows
Chapter 166: ExoPlayer
Chapter 167: XMPP register login and chat simple example
Chapter 168: Android Authenticator
Chapter 169: AudioManager
Chapter 170: AudioTrack
Chapter 171: Job Scheduling
Chapter 172: Accounts and AccountManager
Chapter 173: Integrate OpenCV into Android Studio
Chapter 174: MVVM (Architecture)
Chapter 175: ORMLite in android
Chapter 176: Retrofit2 with RxJava
Chapter 177: ShortcutManager
Chapter 178: LruCache
Chapter 179: Jenkins CI setup for Android Projects
Chapter 180: fastlane
Chapter 181: Define step value (increment) for custom RangeSeekBar
Chapter 182: Getting started with OpenGL ES 2.0+
Chapter 183: Check Data Connection
Chapter 184: Java on Android
Chapter 185: Android Java Native Interface (JNI)
Chapter 186: Notification Channel Android O
Chapter 187: Robolectric
Chapter 188: Moshi
Chapter 189: Strict Mode Policy: A tool to catch the bug in the Compile Time
Chapter 190: Internationalization and localization (I18N and L10N)
Chapter 191: Fast way to setup Retrolambda on an android project
Chapter 192: How to use SparseArray
Chapter 193: Shared Element Transitions
Chapter 194: Android Things
Chapter 195: Library Dagger 2: Dependency Injection in Applications
Chapter 196: JCodec
Chapter 197: Formatting phone numbers with pattern
Chapter 198: Paint
Chapter 199: What is ProGuard? What is use in Android?
Chapter 200: Create Android Custom ROMs
Chapter 201: Genymotion for android
Chapter 202: ConstraintSet
Chapter 203: CleverTap
Chapter 204: Publish a library to Maven Repositories
Chapter 205: adb shell
Chapter 206: Ping ICMP
Chapter 207: AIDL
Chapter 208: Android game development
Chapter 209: Android programming with Kotlin
Chapter 210: Android-x86 in VirtualBox
Chapter 211: Leakcanary
Chapter 212: Okio
Chapter 213: Bluetooth Low Energy
Chapter 214: Looper
Chapter 215: Annotation Processor
Chapter 216: SyncAdapter with periodically do sync of data
Chapter 217: Fastjson
Chapter 218: JSON in Android with org.json
Chapter 219: Gson
Chapter 220: Android Architecture Components
Chapter 221: Jackson
Chapter 222: Smartcard
Chapter 223: Security
Chapter 224: How to store passwords securely
Chapter 225: Secure SharedPreferences
Chapter 226: Secure SharedPreferences
Chapter 227: SQLite
1048 Section 227.1: onUpgrade() method ... 1048 Section 227.2: Reading data from a Cursor ... 1048 Section 227.3: Using the SQLiteOpenHelper class ... 1050 Section 227.4: Insert data into database ... 1051 Section 227.5: Bulk insert ... 1051 Section 227.6: Create a Contract, Helper and Provider for SQLite in Android ... 1052 Section 227.7: Delete row(s) from the table ... 1056 Section 227.8: Updating a row in a table ... 1057 Section 227.9: Performing a Transaction ... 1057 Section 227.10: Create Database from assets folder ... 1058 Section 227.11: Store image into SQLite ... 1060 Section 227.12: Exporting and importing a database ... 1062 Chapter 228: Accessing SQLite databases using the ContentValues class ... 1064 Section 228.1: Inserting and updating rows in a SQLite database ... 1064 Chapter 229: Firebase ... 1065 Section 229.1: Add Firebase to Your Android Project ... 1065 Section 229.2: Updating a Firebase users's email ... 1066 Section 229.3: Create a Firebase user ... 1067 Section 229.4: Change Password ... 1068 Section 229.5: Firebase Cloud Messaging ... 1069 Section 229.6: Firebase Storage Operations ... 1071 Section 229.7: Firebase Realtime Database: how to set/get data ... 1077 Section 229.8: Demo of FCM based notications ... 1078 Section 229.9: Sign In Firebase user with email and password ... 1088 Section 229.10: Send Firebase password reset email ... 1089 Section 229.11: Re-Authenticate Firebase user ... 1091 Section 229.12: Firebase Sign Out ... 1092 Chapter 230: Firebase Cloud Messaging ... 1093 Section 230.1: Set Up a Firebase Cloud Messaging Client App on Android ... 1093 Section 230.2: Receive Messages ... 1093 Section 230.3: This code that i have implemnted in my app for pushing image,message and also link for opening in your webView ... 1094 Section 230.4: Registration token ... 1095 Section 230.5: Subscribe to a topic ... 1096 Chapter 231: Firebase Realtime DataBase ... 1097 Section 231.1: Quick setup ... 1097 Section 231.2: Firebase Realtime DataBase event handler ... 1097 Section 231.3: Understanding rebase JSON database ... 1098 Section 231.4: Retrieving data from rebase ... 1099 Section 231.5: Listening for child updates ... 1100 Section 231.6: Retrieving data with pagination ... 1101 Section 231.7: Denormalization: Flat Database Structure ... 1102 Section 231.8: Designing and understanding how to retrieve realtime data from the Firebase Database ... 1104 Chapter 232: Firebase App Indexing ... 1107 Section 232.1: Supporting Http URLs ... 1107 Section 232.2: Add AppIndexing API ... 1108 Chapter 233: Firebase Crash Reporting ... 1110 Section 233.1: How to report an error ... 1110 Section 233.2: How to add Firebase Crash Reporting to your app ... 1110 Chapter 234: Twitter APIs ... 1112 Section 234.1: Creating login with twitter button and attach a callback to it ... 1112 Chapter 235: Youtube-API ... 1114 Section 235.1: Activity extending YouTubeBaseActivity ... 1114 Section 235.2: Consuming YouTube Data API on Android ... 1115 Section 235.3: Launching StandAlonePlayerActivity ... 1117 Section 235.4: YoutubePlayerFragment in portrait Activty ... 1118 Section 235.5: YouTube Player API ... 1120 Chapter 236: Integrate Google Sign In ... 1123 Section 236.1: Google Sign In with Helper class ... 1123 Chapter 237: Google signin integration on android ... 1126 Section 237.1: Integration of google Auth in your project. (Get a conguration le) ... 1126 Section 237.2: Code Implementation Google SignIn ... 1126 Chapter 238: Google Awareness APIs ... 
1128 Section 238.1: Get changes for location within a certain range using Fence API ... 1128 Section 238.2: Get current location using Snapshot API ... 1129 Section 238.3: Get changes in user activity with Fence API ... 1129 Section 238.4: Get current user activity using Snapshot API ... 1130 Section 238.5: Get headphone state with Snapshot API ... 1130 Section 238.6: Get nearby places using Snapshot API ... 1131 Section 238.7: Get current weather using Snapshot API ... 1131 Chapter 239: Google Maps API v2 for Android ... 1132 Section 239.1: Custom Google Map Styles ... 1132 Section 239.2: Default Google Map Activity ... 1143 Section 239.3: Show Current Location in a Google Map ... 1144 Section 239.4: Change Oset ... 1150 Section 239.5: MapView: embedding a GoogleMap in an existing layout ... 1150 Section 239.6: Get debug SHA1 ngerprint ... 1152 Section 239.7: Adding markers to a map ... 1153 Section 239.8: UISettings ... 1153 Section 239.9: InfoWindow Click Listener ... 1154 Section 239.10: Obtaining the SH1-Fingerprint of your certicate keystore le ... 1155 Section 239.11: Do not launch Google Maps when the map is clicked (lite mode) ... 1156 Chapter 240: Google Drive API ... 1157 Section 240.1: Integrate Google Drive in Android ... 1157 Section 240.2: Create a File on Google Drive ... 1165 Chapter 241: Displaying Google Ads ... 1168 Section 241.1: Adding Interstitial Ad ... 1168 Section 241.2: Basic Ad Setup ... 1169 Chapter 242: AdMob ... 1171 Section 242.1: Implementing ... 1171 Chapter 243: Google Play Store ... 1173 Section 243.1: Open Google Play Store Listing for your app ... 1173 Section 243.2: Open Google Play Store with the list of all applications from your publisher account .......... 1173 Chapter 244: Sign your Android App for Release ... 1175 Section 244.1: Sign your App ... 1175 Section 244.2: Congure the build.gradle with signing conguration ... 1176 Chapter 245: TensorFlow ... 1178 Section 245.1: How to use ... 1178 Chapter 246: Android Vk Sdk ... 1179 Section 246.1: Initialization and login ... 1179 Chapter 247: Project SDK versions ... 1181 Section 247.1: Dening project SDK versions ... 1181 Chapter 248: Facebook SDK for Android ... 1182 Section 248.1: How to add Facebook Login in Android ... 1182 Section 248.2: Create your own custom button for Facebook login ... 1184 Section 248.3: A minimalistic guide to Facebook login/signup implementation ... 1185 Section 248.4: Setting permissions to access data from the Facebook prole ... 1186 Section 248.5: Logging out of Facebook ... 1186 Chapter 249: Thread ... 1187 Section 249.1: Thread Example with its description ... 1187 Section 249.2: Updating the UI from a Background Thread ... 1187 Chapter 250: AsyncTask ... 1189 Section 250.1: Basic Usage ... 1189 Section 250.2: Pass Activity as WeakReference to avoid memory leaks ... 1191 Section 250.3: Download Image using AsyncTask in Android ... 1192 Section 250.4: Canceling AsyncTask ... 1195 Section 250.5: AsyncTask: Serial Execution and Parallel Execution of Task ... 1195 Section 250.6: Order of execution ... 1198 Section 250.7: Publishing progress ... 1198 Chapter 251: Testing UI with Espresso ... 1200 Section 251.1: Overall Espresso ... 1200 Section 251.2: Espresso simple UI test ... 1202 Section 251.3: Open Close DrawerLayout ... 1205 Section 251.4: Set Up Espresso ... 1206 Section 251.5: Performing an action on a view ... 1207 Section 251.6: Finding a view with onView ... 1207 Section 251.7: Create Espresso Test Class ... 1207 Section 251.8: Up Navigation ... 
1208 Section 251.9: Group a collection of test classes in a test suite ... 1208 Section 251.10: Espresso custom matchers ... 1209 Chapter 252: Writing UI tests - Android ... 1212 Section 252.1: MockWebServer example ... 1212 Section 252.2: IdlingResource ... 1214 Chapter 253: Unit testing in Android with JUnit ... 1218 Section 253.1: Moving Business Logic Out of Android Componenets ... 1218 Section 253.2: Creating Local unit tests ... 1220 Section 253.3: Getting started with JUnit ... 1221 Section 253.4: Exceptions ... 1224 Section 253.5: Static import ... 1225 Chapter 254: Inter-app UI testing with UIAutomator ... 1226 Section 254.1: Prepare your project and write the rst UIAutomator test ... 1226 Section 254.2: Writing more complex tests using the UIAutomatorViewer ... 1226 Section 254.3: Creating a test suite of UIAutomator tests ... 1228 Chapter 255: Lint Warnings ... 1229 Section 255.1: Using tools:ignore in xml les ... 1229 Section 255.2: Congure LintOptions with gradle ... 1229 Section 255.3: Conguring lint checking in Java and XML source les ... 1230 Section 255.4: How to congure the lint.xml le ... 1230 Section 255.5: Mark Suppress Warnings ... 1231 Section 255.6: Importing resources without "Deprecated" error ... 1231 Chapter 256: Performance Optimization ... 1233 Section 256.1: Save View lookups with the ViewHolder pattern ... 1233 Chapter 257: Android Kernel Optimization ... 1234 Section 257.1: Low RAM Conguration ... 1234 Section 257.2: How to add a CPU Governor ... 1234 Section 257.3: I/O Schedulers ... 1236 Chapter 258: Memory Leaks ... 1237 Section 258.1: Avoid leaking Activities with AsyncTask ... 1237 Section 258.2: Common memory leaks and how to x them ... 1238 Section 258.3: Detect memory leaks with the LeakCanary library ... 1239 Section 258.4: Anonymous callback in activities ... 1239 Section 258.5: Activity Context in static classes ... 1240 Section 258.6: Avoid leaking Activities with Listeners ... 1241 Section 258.7: Avoid memory leaks with Anonymous Class, Handler, Timer Task, Thread ... 1246 Chapter 259: Enhancing Android Performance Using Icon Fonts ... 1248 Section 259.1: How to integrate Icon fonts ... 1248 Section 259.2: TabLayout with icon fonts ... 1250 Chapter 260: Bitmap Cache ... 1252 Section 260.1: Bitmap Cache Using LRU Cache ... 1252 Chapter 261: Loading Bitmaps Eectively ... 1253 Section 261.1: Load the Image from Resource from Android Device. Using Intents ... 1253 Chapter 262: Exceptions ... 1258 Section 262.1: ActivityNotFoundException ... 1258 Section 262.2: OutOfMemoryError ... 1258 Section 262.3: Registering own Handler for unexpected exceptions ... 1258 Section 262.4: UncaughtException ... 1260 Section 262.5: NetworkOnMainThreadException ... 1260 Section 262.6: DexException ... 1262 Chapter 263: Logging and using Logcat ... 1263 Section 263.1: Filtering the logcat output ... 1263 Section 263.2: Logging ... 1264 Section 263.3: Using the Logcat ... 1266 Section 263.4: Log with link to source directly from Logcat ... 1267 Section 263.5: Clear logs ... 1267 Section 263.6: Android Studio usage ... 1267 Section 263.7: Generating Logging code ... 1268 Chapter 264: ADB (Android Debug Bridge) ... 1270 Section 264.1: Connect ADB to a device via WiFi ... 1270 Section 264.2: Direct ADB command to specic device in a multi-device setting ... 1272 Section 264.3: Taking a screenshot and video (for kitkat only) from a device display ... 1272 Section 264.4: Pull (push) les from (to) the device ... 1273 Section 264.5: Print verbose list of connected devices ... 
    Section 264.6: View logcat
    Section 264.7: View and pull cache files of an app
    Section 264.8: Clear application data
    Section 264.9: View an app's internal data (data/data/<sample.package.id>) on a device
    Section 264.10: Install and run an application
    Section 264.11: Sending broadcast
    Section 264.12: Backup
    Section 264.13: View available devices
    Section 264.14: Connect device by IP
    Section 264.15: Install ADB on Linux system
    Section 264.16: View activity stack
    Section 264.17: Reboot device
    Section 264.18: Read device information
    Section 264.19: List all permissions that require runtime grant from users on Android 6.0
    Section 264.20: Turn on/off WiFi
    Section 264.21: Start/stop adb
Chapter 265: Localization with resources in Android
    Section 265.1: Configuration types and qualifier names for each folder under the "res" directory
    Section 265.2: Adding translation to your Android app
    Section 265.3: Type of resource directories under the "res" folder
    Section 265.4: Change locale of android application programmatically
    Section 265.5: Currency
Chapter 266: Convert vietnamese string to english string Android
    Section 266.1: example
    Section 266.2: Converting a Vietnamese string to a string without diacritics
Credits
You may also like

About

Please feel free to share this PDF with anyone for free; the latest version of this book can be downloaded from: https://goalkicker.com/AndroidBook

This Android Notes for Professionals book is compiled from Stack Overflow Documentation; the content is written by the beautiful people at Stack Overflow. Text content is released under Creative Commons BY-SA; see the credits at the end of this book for the people who contributed to the various chapters. Images may be copyright of their respective owners unless otherwise specified.

This is an unofficial free book created for educational purposes and is not affiliated with official Android group(s) or company(s) nor Stack Overflow. All trademarks and registered trademarks are the property of their respective company owners.

The information presented in this book is not guaranteed to be correct nor accurate; use at your own risk. Please send feedback and corrections to <EMAIL>.

Chapter 1: Getting started with Android

Version   API Level   Version Code               Release Date
1.0       1           BASE                       2008-09-23
1.1       2           BASE_1_1                   2009-02-09
1.5       3           CUPCAKE                    2009-04-27
1.6       4           DONUT                      2009-09-15
2.0       5           ECLAIR                     2009-10-26
2.0.1     6           ECLAIR_0_1                 2009-12-03
2.1.x     7           ECLAIR_MR1                 2010-01-12
2.2.x     8           FROYO                      2010-05-20
2.3       9           GINGERBREAD                2010-12-06
2.3.3     10          GINGERBREAD_MR1            2011-02-09
3.0.x     11          HONEYCOMB                  2011-02-22
3.1.x     12          HONEYCOMB_MR1              2011-05-10
3.2.x     13          HONEYCOMB_MR2              2011-07-15
4.0       14          ICE_CREAM_SANDWICH         2011-10-18
4.0.3     15          ICE_CREAM_SANDWICH_MR1     2011-12-16
4.1       16          JELLY_BEAN                 2012-07-09
4.2       17          JELLY_BEAN_MR1             2012-11-13
4.3       18          JELLY_BEAN_MR2             2013-07-24
4.4       19          KITKAT                     2013-10-31
4.4W      20          KITKAT_WATCH               2014-06-25
5.0       21          LOLLIPOP                   2014-11-12
5.1       22          LOLLIPOP_MR1               2015-03-09
6.0       23          M (Marshmallow)            2015-10-05
7.0       24          N (Nougat)                 2016-08-22
7.1       25          N_MR1 (Nougat MR1)         2016-10-04
8.0       26          O (Developer Preview 4)    2017-07-24

Section 1.1: Creating a New Project

Set up Android Studio

Start by setting up Android Studio and then open it. Now, you're ready to make your first Android app!

Note: this guide is based on Android Studio 2.2, but the process on other versions is mainly the same.
Configure Your Project

Basic Configuration

You can start a new project in two ways:

- Click Start a New Android Studio Project from the welcome screen.
- Navigate to File -> New Project if you already have a project open.

Next, you need to describe your application by filling out some fields:

1. Application Name - This name will be shown to the user. Example: Hello World. You can always change it later in the AndroidManifest.xml file.
2. Company Domain - This is the qualifier for your project's package name. Example: stackoverflow.com.
3. Package Name (aka applicationId) - This is the fully qualified project package name. It should follow Reverse Domain Name Notation (aka Reverse DNS): Top Level Domain . Company Domain . [Company Segment .] Application Name. Example: com.stackoverflow.android.helloworld or com.stackoverflow.helloworld. You can always change your applicationId by overriding it in your gradle file. Don't use the default prefix "com.example" unless you don't intend to submit your application to the Google Play Store. The package name will be your unique applicationId in Google Play.
4. Project Location - This is the directory where your project will be stored.

Select Form Factors and API Level

The next window lets you select the form factors supported by your app, such as phone, tablet, TV, Wear, and Google Glass. The selected form factors become the app modules within the project. For each form factor, you can also select the API Level for that app. To get more information, click Help me choose.

(Figure: chart of the current Android version distributions, shown when you click Help me choose.)

The Android Platform Distribution window shows the distribution of mobile devices running each version of Android, as shown in the figure above. Click on an API level to see a list of features introduced in the corresponding version of Android. This helps you choose the minimum API Level that has all the features that your app needs, so you can reach as many devices as possible. Then click OK.

Now, choose what platforms and versions of the Android SDK the application will support. For now, select only Phone and Tablet.

The Minimum SDK is the lower bound for your app. It is one of the signals the Google Play Store uses to determine which devices an app can be installed on. For example, Stack Exchange's app supports Android 4.1+. Android Studio will tell you (approximately) what percentage of devices will be supported given the specified minimum SDK.

Lower API levels target more devices but have fewer features available. When deciding on the Minimum SDK, you should consider the Dashboards stats, which will give you version information about the devices that visited the Google Play Store globally in the last week. (From: Dashboards on the Android Developer website.)

Add an activity

Now we are going to select a default activity for our application. In Android, an Activity is a single screen that will be presented to the user. An application can house multiple activities and navigate between them. For this example, choose Empty Activity and click Next.

Here, if you wish, you can change the names of the activity and layout. A good practice is to keep Activity as a suffix for the activity name, and activity_ as a prefix for the layout name.
If we leave these as the default, Android Studio will generate an activity for us called MainActivity, and a layout file called activity_main. Now click Finish.

Android Studio will create and configure our project, which can take some time depending on the system.

Inspecting the Project

To understand how Android works, let's take a look at some of the files that were created for us. On the left pane of Android Studio, we can see the structure of our Android application.

First, let's open AndroidManifest.xml by double clicking it. The Android manifest file describes some of the basic information about an Android application. It contains the declaration of our activities, as well as some more advanced components.

If an application needs access to a feature protected by a permission, it must declare that it requires that permission with a <uses-permission> element in the manifest. Then, when the application is installed on the device, the installer determines whether or not to grant the requested permission by checking the authorities that signed the application's certificates and, in some cases, asking the user. An application can also protect its own components (activities, services, broadcast receivers, and content providers) with permissions. It can employ any of the permissions defined by Android (listed in android.Manifest.permission) or declared by other applications. Or it can define its own.

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.stackoverflow.helloworld">

    <application
        android:allowBackup="true"
        android:icon="@mipmap/ic_launcher"
        android:label="@string/app_name"
        android:supportsRtl="true"
        android:theme="@style/AppTheme">
        <activity android:name=".MainActivity">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>
    </application>
</manifest>

Next, let's open activity_main.xml, which is located in app/src/main/res/layout/. This file contains declarations for the visual components of our MainActivity. You will see the visual designer, which allows you to drag and drop elements onto the selected layout. You can also switch to the XML layout editor by clicking "Text" at the bottom of Android Studio, as seen here:

<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:paddingBottom="@dimen/activity_vertical_margin"
    android:paddingLeft="@dimen/activity_horizontal_margin"
    android:paddingRight="@dimen/activity_horizontal_margin"
    android:paddingTop="@dimen/activity_vertical_margin"
    tools:context="com.stackexchange.docs.helloworld.MainActivity">

    <TextView
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Hello World!" />
</RelativeLayout>

You will see a widget called a TextView inside of this layout, with the android:text property set to "Hello World!". This is a block of text that will be shown to the user when they run the application. You can read more about Layouts and attributes.
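For instance, the displayed text can also be changed from Java at runtime. The following is a minimal sketch (not part of the generated project): it assumes the TextView above is first given an android:id, here the hypothetical @+id/hello_text.

import android.widget.TextView;

// Inside MainActivity.onCreate(), after setContentView(R.layout.activity_main).
// R.id.hello_text is a hypothetical id you would add to the TextView in activity_main.xml.
TextView helloText = (TextView) findViewById(R.id.hello_text);
helloText.setText("Hello from Java!");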
Next, let's take a look at MainActivity. This is the Java code that has been generated for MainActivity:

public class MainActivity extends AppCompatActivity {

    // The onCreate method is called when an Activity starts.
    // This is where we will set up our layout.
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // setContentView sets the Activity's layout to a specified XML layout.
        // In our case we are using the activity_main layout.
        setContentView(R.layout.activity_main);
    }
}

As defined in our Android manifest, MainActivity will launch by default when a user starts the HelloWorld app.

Lastly, open up the file named build.gradle located in app/. Android Studio uses the build system Gradle to compile and build Android applications and libraries.

apply plugin: 'com.android.application'

android {
    signingConfigs {
        applicationName {
            keyAlias 'applicationName'
            keyPassword 'password'
            storeFile file('../key/applicationName.jks')
            storePassword 'anotherPassword'
        }
    }
    compileSdkVersion 26
    buildToolsVersion "26.0.0"

    defaultConfig {
        applicationId "com.stackexchange.docs.helloworld"
        minSdkVersion 16
        targetSdkVersion 26
        versionCode 1
        versionName "1.0"
        signingConfig signingConfigs.applicationName
    }

    buildTypes {
        release {
            minifyEnabled false
            proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
        }
    }
}

dependencies {
    compile fileTree(dir: 'libs', include: ['*.jar'])
    testCompile 'junit:junit:4.12'
    compile 'com.android.support:appcompat-v7:26.0.0'
}

This file contains information about the build and your app version, and you can also use it to add dependencies to external libraries. For now, let's not make any changes.

It is advisable to always select the latest version available for the dependencies:

- buildToolsVersion: 26.0.0
- com.android.support:appcompat-v7: 26.0.0 (July 2017)
- firebase: 11.0.4 (August 2017)

compileSdkVersion

compileSdkVersion is your way to tell Gradle what version of the Android SDK to compile your app with. Using the new Android SDK is a requirement to use any of the new APIs added in that level.

It should be emphasized that changing your compileSdkVersion does not change runtime behavior. While new compiler warnings/errors may be present when changing your compileSdkVersion, your compileSdkVersion is not included in your APK: it is purely used at compile time. Therefore it is strongly recommended that you always compile with the latest SDK. You'll get all the benefits of new compilation checks on existing code, avoid newly deprecated APIs, and be ready to use new APIs.

minSdkVersion

If compileSdkVersion sets the newest APIs available to you, minSdkVersion is the lower bound for your app. The minSdkVersion is one of the signals the Google Play Store uses to determine which of a user's devices an app can be installed on.

It also plays an important role during development: by default lint runs against your project, warning you when you use any APIs above your minSdkVersion, helping you avoid the runtime issue of attempting to call an API that doesn't exist. Checking the system version at runtime is a common technique when using APIs only on newer platform versions, as in the sketch below.
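A minimal sketch of such a runtime check (the guarded API level, 26, is an illustrative assumption, not something this section prescribes):

import android.os.Build;

// Guard calls to newer APIs so the app still runs on devices at minSdkVersion.
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
    // Safe to use APIs introduced in Android 8.0 (API level 26) here.
} else {
    // Fall back to behavior available on older platform versions.
}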
targetSdkVersion

targetSdkVersion is the main way Android provides forward compatibility: behavior changes are not applied unless the targetSdkVersion is updated. This allows you to use new APIs prior to working through the behavior changes. Updating to target the latest SDK should be a high priority for every app. That doesn't mean you have to use every new feature introduced, nor should you blindly update your targetSdkVersion without testing.

targetSdkVersion is the version of Android which is the upper limit for the available tools. For instance, if targetSdkVersion is less than 23, the app does not need to request permissions at runtime, even if the app is being run on API 23+. targetSdkVersion does not prevent Android versions above the picked version from running the app.

You can find more info about the Gradle plugin:

- A basic example
- Introduction to the Gradle plugin for android and the wrapper
- Introduction to the configuration of the build.gradle and the DSL methods

Running the Application

Now, let's run our HelloWorld application. You can either run an Android Virtual Device (which you can set up by using the AVD Manager in Android Studio, as described in the example below) or connect your own Android device through a USB cable.

Setting up an Android device

To run an application from Android Studio on your Android device, you must enable USB Debugging in the Developer Options in the settings of your device.

Settings > Developer options > USB debugging

If Developer Options is not visible in the settings, navigate to About Phone and tap on the Build Number seven times. This will enable Developer Options to show up in your settings.

Settings > About phone > Build number

You also might need to change the build.gradle configuration to build on a version that your device has.

Running from Android Studio

Click the green Run button from the toolbar at the top of Android Studio. In the window that appears, select whichever device you would like to run the app on (start an Android Virtual Device if necessary, or see Setting up an AVD (Android Virtual Device) if you need to set one up) and click OK.

On devices running Android 4.4 (KitKat) and possibly higher, a pop-up will be shown to authorize USB debugging. Click OK to accept. The application will now install and run on your Android device or emulator.

APK file location

When you prepare your application for release, you configure, build, and test a release version of your application. The configuration tasks are straightforward, involving basic code cleanup and code modification tasks that help optimize your application. The build process is similar to the debug build process and can be done using JDK and Android SDK tools. The testing tasks serve as a final check, ensuring that your application performs as expected under real-world conditions. When you are finished preparing your application for release, you have a signed APK file, which you can distribute directly to users or distribute through an application marketplace such as Google Play.

Android Studio: Since in the above examples Gradle is used, the location of the generated APK file is <Your Project Location>/app/build/outputs/apk/app-debug.apk

IntelliJ: If you are a user of IntelliJ before switching to Studio, and are importing your IntelliJ project directly, then nothing changed. The location of the output will be the same under: out/production/... Note: this will become deprecated sometime around 1.0.

Eclipse: If you are importing an Android Eclipse project directly, do not do this! As soon as you have dependencies in your project (jars or Library Projects), this will not work and your project will not be properly set up. If you have no dependencies, then the APK would be under the same location as you'd find it in Eclipse: bin/...
Section 1.2: Setting up Android Studio

Android Studio is the Android development IDE that is officially supported and recommended by Google. Android Studio comes bundled with the Android SDK Manager, which is a tool to download the Android SDK components required to start developing apps.

Installing Android Studio and Android SDK tools:

1. Download and install Android Studio.
2. Download the latest SDK Tools and SDK Platform-tools by opening Android Studio and then following the Android SDK Tool Updates instructions. You should install the latest available stable packages.

If you need to work on old projects that were built using older SDK versions, you may need to download these versions as well.

Since Android Studio 2.2, a copy of the latest OpenJDK comes bundled with the install and is the recommended JDK (Java Development Kit) for all Android Studio projects. This removes the requirement of having Oracle's JDK package installed. To use the bundled JDK, proceed as follows:

1. Open your project in Android Studio and select File > Project Structure in the menu bar.
2. In the SDK Location page, under JDK location, check the Use embedded JDK checkbox.
3. Click OK.

Configure Android Studio

Android Studio provides access to two configuration files through the Help menu:

- studio.vmoptions: Customize options for Studio's Java Virtual Machine (JVM), such as heap size and cache size. Note that on Linux machines this file may be named studio64.vmoptions, depending on your version of Android Studio.
- idea.properties: Customize Android Studio properties, such as the plugins folder path or maximum supported file size.

Change/add theme

You can change the theme to your preference: go to File -> Settings -> Editor -> Colors & Fonts and select a theme. You can also download new themes from http://color-themes.com/. Once you have downloaded the .jar.zip file, go to File -> Import Settings... and choose the downloaded file.

Compiling Apps

Create a new project or open an existing project in Android Studio and press the green Play button on the top toolbar to run it. If it is gray, you need to wait a second to allow Android Studio to properly index some files, the progress of which can be seen in the bottom status bar.

If you want to create a project from the shell, make sure that you have a local.properties file, which is created by Android Studio automatically. If you need to create the project without Android Studio, you need a line starting with sdk.dir= followed by the path to your SDK installation.

Open a shell and go into the project's directory. Enter ./gradlew aR and press enter. aR is a shortcut for assembleRelease, which will download all dependencies for you and build the app. The final APK file will be in ProjectName/ModuleName/build/outputs/apk and will be called ModuleName-release.apk.

Section 1.3: Android programming without an IDE

This is a minimalist Hello World example that uses only the most basic Android tools.

Requirements and assumptions:

- Oracle JDK 1.7 or later
- Android SDK Tools (just the command line tools)

This example assumes Linux. You may have to adjust the syntax for your own platform.

Setting up the Android SDK

After unpacking the SDK release:

1. Install additional packages using the SDK manager. Don't use android update sdk --no-ui as instructed in the bundled Readme.txt; it downloads some 30 GB of unnecessary files. Instead use the interactive SDK manager android sdk to get the recommended minimum of packages.
2. Append the following JDK and SDK directories to your execution PATH. This is optional, but the instructions below assume it.

JDK/bin
SDK/platform-tools
SDK/tools
SDK/build-tools/LATEST (as installed in step 1)

3. Create an Android virtual device. Use the interactive AVD Manager (android avd). You might have to fiddle a bit and search for advice; the on-site instructions aren't always helpful. (You can also use your own device.)

4. Run the device:

emulator -avd DEVICE

5. If the device screen appears to be locked, then swipe to unlock it. Leave it running while you code the app.

Coding the app

(The steps below continue the numbering from the setup above, so that the later cross-references to steps 10-17 stay consistent.)

6. Change to an empty working directory.

7. Make the source file:

mkdir --parents src/dom/domain
touch src/dom/domain/SayingHello.java

Content:

package dom.domain;
import android.widget.TextView;

public final class SayingHello extends android.app.Activity {
    protected @Override void onCreate( final android.os.Bundle activityState ) {
        super.onCreate( activityState );
        final TextView textV = new TextView( SayingHello.this );
        textV.setText( "Hello world" );
        setContentView( textV );
    }
}

8. Add a manifest:

touch AndroidManifest.xml

Content:

<?xml version='1.0'?>
<manifest xmlns:a='http://schemas.android.com/apk/res/android'
    package='dom.domain' a:versionCode='0' a:versionName='0'>
    <application a:label='Saying hello'>
        <activity a:name='dom.domain.SayingHello'>
            <intent-filter>
                <category a:name='android.intent.category.LAUNCHER'/>
                <action a:name='android.intent.action.MAIN'/>
            </intent-filter>
        </activity>
    </application>
</manifest>

9. Make a sub-directory for the declared resources:

mkdir res

Leave it empty for now.

Building the code

10. Generate the source for the resource declarations. Substitute here the correct path to your SDK, and the installed API to build against (e.g. "android-23"):

aapt package -f \
    -I SDK/platforms/android-API/android.jar \
    -J src -m \
    -M AndroidManifest.xml -S res -v

Resource declarations (described further below) are actually optional. Meantime the above call does nothing if res/ is still empty.

11. Compile the source code to Java bytecode (.java -> .class):

javac \
    -bootclasspath SDK/platforms/android-API/android.jar \
    -classpath src -source 1.7 -target 1.7 \
    src/dom/domain/*.java

12. Translate the bytecode from Java to Android (.class -> .dex):

First using Jill (.class -> .jayce):

java -jar SDK/build-tools/LATEST/jill.jar \
    --output classes.jayce src

Then Jack (.jayce -> .dex):

java -jar SDK/build-tools/LATEST/jack.jar \
    --import classes.jayce --output-dex .

Android bytecode used to be called "Dalvik executable code", and so "dex". You could replace steps 11 and 12 with a single call to Jack if you like; it can compile directly from Java source (.java -> .dex). But there are advantages to compiling with javac. It's a better known, better documented and more widely applicable tool.

13. Package up the resource files, including the manifest:

aapt package -f \
    -F app.apkPart \
    -I SDK/platforms/android-API/android.jar \
    -M AndroidManifest.xml -S res -v

That results in a partial APK file (Android application package).

14. Make the full APK using the ApkBuilder tool:

java -classpath SDK/tools/lib/sdklib.jar \
    com.android.sdklib.build.ApkBuilderMain \
    app.apkUnalign \
    -d -f classes.dex -v -z app.apkPart

It warns, "THIS TOOL IS DEPRECATED. See --help for more information."
If --help fails with an ArrayIndexOutOfBoundsException, then instead pass no arguments:

java -classpath SDK/tools/lib/sdklib.jar \
    com.android.sdklib.build.ApkBuilderMain

It explains that the CLI (ApkBuilderMain) is deprecated in favour of directly calling the Java API (ApkBuilder). (If you know how to do that from the command line, please update this example.)

15. Optimize the data alignment of the APK (recommended practice):

zipalign -f -v 4 app.apkUnalign app.apk

Installing and running

16. Install the app to the Android device:

adb install -r app.apk

17. Start the app:

adb shell am start -n dom.domain/.SayingHello

It should run and say hello.

That's all. That's what it takes to say hello using the basic Android tools.

Declaring a resource

This section is optional. Resource declarations aren't required for a simple "hello world" app. If they aren't required for your app either, then you could streamline the build somewhat by omitting step 10, and removing the reference to the res/ directory from step 13. Otherwise, here's a brief example of how to declare a resource, and how to reference it.

18. Add a resource file:

mkdir res/values
touch res/values/values.xml

Content:

<?xml version='1.0'?>
<resources>
    <string name='appLabel'>Saying hello</string>
</resources>

19. Reference the resource from the XML manifest. This is a declarative style of reference:

<!-- <application a:label='Saying hello'> -->
<application a:label='@string/appLabel'>

20. Reference the same resource from the Java source. This is an imperative reference:

// v.setText( "Hello world" );
v.setText( "This app is called " + getResources().getString( R.string.appLabel ));

21. Test the above modifications by rebuilding, reinstalling and re-running the app (steps 10-17). It should restart and say, "This app is called Saying hello".

Uninstalling the app:

adb uninstall dom.domain

See also:

- original question - The original question that prompted this example
- working example - A working build script that uses the above commands

Section 1.4: Application Fundamentals

Android apps are written in Java. The Android SDK tools compile the code, data and resource files into an APK (Android package). Generally, one APK file contains all the content of the app.

Each app runs in its own virtual machine (VM), so that it is isolated from other apps. The Android system works with the principle of least privilege: each app only has access to the components which it requires to do its work, and no more. However, there are ways for an app to share data with other apps, for example by sharing a Linux user id between apps; apps can also request permission to access device data like the SD card, contacts, etc.

App Components

App components are the building blocks of an Android app. Each component plays a specific role in an Android app, serves a distinct purpose, and has a distinct life-cycle (the flow of how and when the component is created and destroyed). Here are the four types of app components:

1. Activities: An activity represents a single screen with a User Interface (UI). An Android app may have more than one activity. (E.g. an email app might have one activity to list all the emails, another to show the contents of each email, and another to compose new email.) All the activities in an app work together to create a User eXperience (UX).

2. Services: A service runs in the background to perform long-running operations or to perform work for remote processes.
A service does not provide any UI; it runs in the background independent of direct user interaction. (E.g. a service can play music in the background while the user is in a different app, or it might download data from the internet without blocking the user's interaction with the Android device.)

3. Content Providers: A content provider manages shared app data. There are four ways to store data in an app: it can be written to a file and stored in the file system, inserted or updated in a SQLite database, posted to the web, or saved in any other persistent storage location the app can access. Through content providers, other apps can query or even modify the data. (E.g. the Android system provides a content provider that manages the user's contact information, so that any app which has permission can query the contacts.) Content providers can also be used to save data which is private to the app, for better data integrity.

4. Broadcast receivers: A broadcast receiver responds to system-wide broadcast announcements (e.g. a broadcast announcing that the screen has turned off, or that the battery is low) or to broadcasts from apps (e.g. to let other apps know that some data has been downloaded to the device and is available for them to use). Broadcast receivers don't have UIs, but they can show a notification in the status bar to alert the user. Usually broadcast receivers are used as a gateway to other components of the app, consisting mostly of activities and services.

One unique aspect of the Android system is that any app can start another app's component (e.g. if you want to make a call, send an SMS, open a web page, or view a photo, there is an app which already does that, and your app can make use of it instead of developing a new activity for the same task). When the system starts a component, it starts the process for that app (if it isn't already running; only one foreground process per app can run at any given time on an Android system) and instantiates the classes needed for that component. Thus the component runs in the process of the app that it belongs to. Therefore, unlike apps on other systems, Android apps don't have a single entry point (there is no main() method).

Because the system runs each app in a separate process, one app cannot directly activate another app's components; however, the Android system can. Thus, to start another app's component, one app must send a message to the system that specifies an intent to start that component, and the system will then start that component, as the sketch below illustrates.
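As an illustration (a minimal sketch, not from the original text; it assumes the code runs inside an Activity and that an app able to display web URLs is installed), asking the system to start whichever component handles a web page looks like this:

import android.content.Intent;
import android.net.Uri;

// Describe what should happen; the system resolves this intent and
// starts the matching component of whichever app can handle it.
Intent intent = new Intent(Intent.ACTION_VIEW, Uri.parse("https://developer.android.com"));
startActivity(intent);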
Again in this example, the restaurant is an Android App, the tables or the customers are App components, the food items are your App resources and the attendant is your context thus giving you a way to access the resources like food items. Activating any of the above components requires the context's instance. Not just only the above, but almost every system resource: creation of the UI using views(discussed later), creating instance of system services, starting new activities or services -- all require context. More detailed description is written here. Section 1.5: Setting up an AVD (Android Virtual Device) TL;DR It basically allows us to simulate real devices and test our apps without a real device. According to Android Developer Documentation, an Android Virtual Device (AVD) denition lets you dene the characteristics of an Android Phone, Tablet, Android Wear, or Android TV device that you want to simulate in the Android Emulator. The AVD Manager helps you easily create and manage AVDs. To set up an AVD, follow these steps: 1. Click this button to bring up the AVD Manager: 2. You should see a dialog like this: GoalKicker.com Android Notes for Professionals 19 3. Now click the + Create Virtual Device... button. This will bring up Virtual Device Conguration Dialog: GoalKicker.com Android Notes for Professionals 20 4. Select any device you want, then click Next: 5. Here you need to choose an Android version for your emulator. You might also need to download it rst by clicking Download. After you've chosen a version, click Next. GoalKicker.com Android Notes for Professionals 21 6. Here, enter a name for your emulator, initial orientation, and whether you want to display a frame around it. After you chosen all these, click Finish. 7. Now you got a new AVD ready for launching your apps on it. GoalKicker.com Android Notes for Professionals 22 Chapter 2: Android Studio Section 2.1: Setup Android Studio System Requirements Microsoft Windows 8/7/Vista/2003 (32 or 64-bit). Mac OS X 10.8.5 or higher, up to 10.9 (Mavericks) GNOME or KDE desktop Installation Window 1. Download and install JDK (Java Development Kit) version 8 2. Download Android Studio 3. Launch Android Studio.exe then mention JDK path and download the latest SDK Linux 1. Download and install JDK (Java Development Kit) version 8 2. Download Android Studio 3. Extract the zip le 4. Open terminal, cd to the extracted folder, cd to bin (example cd android-studio/bin) 5. Run ./studio.sh Section 2.2: View And Add Shortcuts in Android Studio By going to Settings >> Keymap A window will popup showing All the Editor Actions with the their name and shortcuts. Some of the Editor Actions do not have shortcuts. So right click on that and add a new shortcut to that. Check the image below GoalKicker.com Android Notes for Professionals 23 Section 2.3: Android Studio useful shortcuts The following are some of the more common/useful shortcuts. These are based on the default IntelliJ shortcut map. 
Section 1.5: Setting up an AVD (Android Virtual Device)

TL;DR: It basically allows us to simulate real devices and test our apps without a real device.

According to the Android Developer Documentation, an Android Virtual Device (AVD) definition lets you define the characteristics of an Android Phone, Tablet, Android Wear, or Android TV device that you want to simulate in the Android Emulator. The AVD Manager helps you easily create and manage AVDs.

To set up an AVD, follow these steps:

1. Click this button to bring up the AVD Manager:
2. You should see a dialog like this:
3. Now click the + Create Virtual Device... button. This will bring up the Virtual Device Configuration dialog:
4. Select any device you want, then click Next:
5. Here you need to choose an Android version for your emulator. You might also need to download it first by clicking Download. After you've chosen a version, click Next.
6. Here, enter a name for your emulator, the initial orientation, and whether you want to display a frame around it. After you have chosen all these, click Finish.
7. Now you have a new AVD ready for launching your apps on it.

Chapter 2: Android Studio

Section 2.1: Setup Android Studio

System Requirements:

- Microsoft Windows 8/7/Vista/2003 (32 or 64-bit)
- Mac OS X 10.8.5 or higher, up to 10.9 (Mavericks)
- GNOME or KDE desktop

Installation

Windows:
1. Download and install JDK (Java Development Kit) version 8.
2. Download Android Studio.
3. Launch Android Studio.exe, then mention the JDK path and download the latest SDK.

Linux:
1. Download and install JDK (Java Development Kit) version 8.
2. Download Android Studio.
3. Extract the zip file.
4. Open a terminal, cd to the extracted folder, then cd to bin (example: cd android-studio/bin).
5. Run ./studio.sh

Section 2.2: View And Add Shortcuts in Android Studio

By going to Settings >> Keymap, a window will pop up showing all the Editor Actions with their names and shortcuts. Some of the Editor Actions do not have shortcuts, so right click on one of those and add a new shortcut to it. Check the image below.

Section 2.3: Android Studio useful shortcuts

The following are some of the more common/useful shortcuts. These are based on the default IntelliJ shortcut map. You can switch to other common IDE shortcut maps via File -> Settings -> Keymap -> <Choose Eclipse/Visual Studio/etc. from Keymaps dropdown>.

Action                                               Shortcut
Format code                                          CTRL + ALT + L
Add unimplemented methods                            CTRL + I
Show logcat                                          ALT + 6
Build                                                CTRL + F9
Build and Run                                        CTRL + F10
Find                                                 CTRL + F
Find in project                                      CTRL + SHIFT + F
Find and replace                                     CTRL + R
Find and replace in project                          CTRL + SHIFT + R
Override methods                                     CTRL + O
Show project                                         ALT + 1
Hide project - logcat                                SHIFT + ESC
Collapse all                                         CTRL + SHIFT + NumPad +
View Debug Points                                    CTRL + SHIFT + F8
Expand all                                           CTRL + SHIFT + NumPad -
Open Settings                                        ALT + s
Select Target (open current file in Project view)    ALT + F1, ENTER
Search Everywhere                                    SHIFT SHIFT (double shift)
Code | Surround With                                 CTRL + ALT + T
Create method from selected code                     ALT + CTRL

Refactor:

Action                                               Shortcut
Refactor This (menu/picker for all applicable
refactor actions of the current element)             Mac: CTRL + T - Win/Linux: CTRL + ALT + T
Rename                                               SHIFT + F6
Extract Method                                       Mac: CMD + ALT + M - Win/Linux: CTRL + ALT + M
Extract Parameter                                    Mac: CMD + ALT + P - Win/Linux: CTRL + ALT + P
Extract Variable                                     Mac: CMD + ALT + V - Win/Linux: CTRL + ALT + V

Section 2.4: Android Studio Improve performance tip

Enable Offline Work:

1. Click File -> Settings. Search for "gradle" and tick the Offline work box.
2. Go to Compiler (in the same settings dialog, just below Gradle) and add --offline to the Command-line Options text box.

Improve Gradle Performance

Add the following two lines of code to your gradle.properties file:

org.gradle.daemon=true
org.gradle.parallel=true

Increase the values of -Xmx and -Xms in the studio.vmoptions file:

-Xms1024m
-Xmx4096m
-XX:MaxPermSize=1024m
-XX:ReservedCodeCacheSize=256m
-XX:+UseCompressedOops

Windows: %USERPROFILE%\.{FOLDER_NAME}\studio.exe.vmoptions and/or %USERPROFILE%\.{FOLDER_NAME}\studio64.exe.vmoptions
Mac: ~/Library/Preferences/{FOLDER_NAME}/studio.vmoptions
Linux: ~/.{FOLDER_NAME}/studio.vmoptions and/or ~/.{FOLDER_NAME}/studio64.vmoptions

Section 2.5: Gradle build project takes forever

Android Studio -> Preferences -> Gradle -> Tick Offline work, and then restart your Android Studio.

Reference screenshot:

Section 2.6: Enable/Disable blank line copy

ctrl + alt + shift + / (cmd + alt + shift + / on MacOS) should show you the following dialog:

Clicking on Registry, you will get:

The key you want to enable/disable is:

editor.skip.copy.and.cut.for.empty.selection

Tested on Linux Ubuntu and MacOS.

Section 2.7: Custom colors of logcat message based on message importance

Go to File -> Settings -> Editor -> Colors & Fonts -> Android Logcat.

Change the colors as you need:

Choose the appropriate color:

Section 2.8: Filter logs from UI

Android logs can be filtered directly from the UI.
Using this code:

public class MainActivity extends AppCompatActivity {

    private final static String TAG1 = MainActivity.class.getSimpleName();
    private final static String TAG2 = MainActivity.class.getCanonicalName();

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        Log.e(TAG1, "Log from onCreate method with TAG1");
        Log.i(TAG2, "Log from onCreate method with TAG2");
    }
}

If I use the regex TAG1|TAG2 and the level verbose, I get:

01-14 10:34:46.961 12880-12880/android.doc.so.thiebaudthomas.sodocandroid E/MainActivity: Log from onCreate method with TAG1
01-14 10:34:46.961 12880-12880/android.doc.so.thiebaudthomas.sodocandroid I/androdi.doc.so.thiebaudthomas.sodocandroid.MainActivity: Log from onCreate method with TAG2

The level can be set to get logs with a given level and above. For example, the verbose level will catch verbose, debug, info, warn, error and assert logs.

Using the same example, if I set the level to error, I only get:

01-14 10:34:46.961 12880-12880/androdi.doc.so.thiebaudthomas.sodocandroid E/MainActivity: Log from onCreate method with TAG1

Section 2.9: Create filters configuration

Custom filters can be set and saved from the UI. In the AndroidMonitor tab, click on the right dropdown (it must contain Show only selected application or No filters) and select Edit filter configuration.

Enter the filter you want:

And use it (you can select it from the same dropdown):

Important: If you add an input in the filter bar, Android Studio will consider both your filter and your input. With both input and filter there is no output; without the filter, there are some outputs.

Section 2.10: Create assets folder

Right click the main folder > New > Folder > Assets Folder. The assets folder will be created under the main folder, with the same symbol as the res folder. In this example I put a font file there; a sketch of loading it follows.
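A minimal sketch of using that font (the path fonts/my_font.ttf and the view id R.id.textview are hypothetical placeholders for whatever you actually put in assets and in your layout):

import android.graphics.Typeface;
import android.widget.TextView;

// Load a font placed under src/main/assets/fonts/ and apply it to a TextView.
Typeface typeface = Typeface.createFromAsset(getAssets(), "fonts/my_font.ttf");
TextView textView = (TextView) findViewById(R.id.textview);
textView.setTypeface(typeface);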
HOT SWAP changes are visible instantly, as soon as the next call to the method whose implementation was changed is made.

WARM SWAP restarts the current activity.

COLD SWAP restarts the entire app (without reinstall).

Section 3.3: Unsupported code changes when using Instant Run

There are a few changes where Instant Run won't do its trick, and a full build and reinstall of your app will happen just like it used to happen before Instant Run was born:

1. Change the app manifest
2. Change resources referenced by the app manifest
3. Change an Android widget UI element (requires a Clean and Rerun)

Documentation

Chapter 4: TextView

Everything related to TextView customization in the Android SDK.

Section 4.1: Spannable TextView

A spannable TextView can be used in Android to highlight a particular portion of text with a different color, style, size, and/or click event in a single TextView widget.

Consider that you have defined a TextView as follows:

TextView textview = (TextView) findViewById(R.id.textview);

Then you can apply different highlighting to it as shown below:

Spannable color: In order to set a different color to some portion of text, a ForegroundColorSpan can be used, as shown in the following example:

Spannable spannable = new SpannableString(firstWord + lastWord);
spannable.setSpan(new ForegroundColorSpan(firstWordColor), 0, firstWord.length(),
        Spannable.SPAN_EXCLUSIVE_EXCLUSIVE);
spannable.setSpan(new ForegroundColorSpan(lastWordColor), firstWord.length(),
        firstWord.length() + lastWord.length(), Spannable.SPAN_EXCLUSIVE_EXCLUSIVE);
textview.setText(spannable);

Output created by the code above:

Spannable font: In order to set a different font size to some portion of text, a RelativeSizeSpan can be used, as shown in the following example:

Spannable spannable = new SpannableString(firstWord + lastWord);
spannable.setSpan(new RelativeSizeSpan(1.1f), 0, firstWord.length(),
        Spannable.SPAN_EXCLUSIVE_EXCLUSIVE); // set size
spannable.setSpan(new RelativeSizeSpan(0.8f), firstWord.length(),
        firstWord.length() + lastWord.length(), Spannable.SPAN_EXCLUSIVE_EXCLUSIVE); // set size
textview.setText(spannable);

Output created by the code above:

Spannable typeface: In order to set a different font typeface to some portion of text, a custom TypefaceSpan can be used, as shown in the following example:

Spannable spannable = new SpannableString(firstWord + lastWord);
spannable.setSpan(new CustomTypefaceSpan("SFUIText-Bold.otf", fontBold), 0, firstWord.length(),
        Spannable.SPAN_EXCLUSIVE_EXCLUSIVE);
spannable.setSpan(new CustomTypefaceSpan("SFUIText-Regular.otf", fontRegular), firstWord.length(),
        firstWord.length() + lastWord.length(), Spannable.SPAN_EXCLUSIVE_EXCLUSIVE);
text.setText(spannable);

However, in order to make the above code work, the class CustomTypefaceSpan has to be derived from the class TypefaceSpan.
This can be done as follows:

public class CustomTypefaceSpan extends TypefaceSpan {

    private final Typeface newType;

    public CustomTypefaceSpan(String family, Typeface type) {
        super(family);
        newType = type;
    }

    @Override
    public void updateDrawState(TextPaint ds) {
        applyCustomTypeFace(ds, newType);
    }

    @Override
    public void updateMeasureState(TextPaint paint) {
        applyCustomTypeFace(paint, newType);
    }

    private static void applyCustomTypeFace(Paint paint, Typeface tf) {
        int oldStyle;
        Typeface old = paint.getTypeface();
        if (old == null) {
            oldStyle = 0;
        } else {
            oldStyle = old.getStyle();
        }

        int fake = oldStyle & ~tf.getStyle();
        if ((fake & Typeface.BOLD) != 0) {
            paint.setFakeBoldText(true);
        }
        if ((fake & Typeface.ITALIC) != 0) {
            paint.setTextSkewX(-0.25f);
        }
        paint.setTypeface(tf);
    }
}

Section 4.2: Strikethrough TextView

Strikethrough the entire text:

String sampleText = "This is a test strike";
textView.setPaintFlags(textView.getPaintFlags() | Paint.STRIKE_THRU_TEXT_FLAG);
textView.setText(sampleText);

Output: This is a test strike

Strikethrough only parts of the text:

String sampleText = "This is a test strike";
SpannableStringBuilder spanBuilder = new SpannableStringBuilder(sampleText);
StrikethroughSpan strikethroughSpan = new StrikethroughSpan();
spanBuilder.setSpan(
        strikethroughSpan, // Span to add
        0,                 // Start
        4,                 // End of the span (exclusive)
        Spanned.SPAN_EXCLUSIVE_EXCLUSIVE // Text changes will not reflect in the strike changing
);
textView.setText(spanBuilder);

Output: This is a test strike

Section 4.3: TextView with image

Android allows programmers to place images at all four corners of a TextView. For example, if you are creating a field with a TextView and at the same time you want to show that the field is editable, then developers will usually place an edit icon near that field. Android provides us an interesting option called compound drawable for a TextView:

<TextView
    android:id="@+id/title"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:layout_centerInParent="true"
    android:drawablePadding="4dp"
    android:drawableRight="@drawable/edit"
    android:text="Hello world"
    android:textSize="18dp" />

You can set the drawable to any side of your TextView as follows:

android:drawableLeft="@drawable/edit"
android:drawableRight="@drawable/edit"
android:drawableTop="@drawable/edit"
android:drawableBottom="@drawable/edit"

Setting the drawable can also be achieved programmatically in the following way:

yourTextView.setCompoundDrawables(leftDrawable, rightDrawable, topDrawable, bottomDrawable);

Setting any of the parameters handed over to setCompoundDrawables() to null will remove the icon from the corresponding side of the TextView.

Section 4.4: Make RelativeSizeSpan align to top

In order to make a RelativeSizeSpan align to the top, a custom class can be derived from the class SuperscriptSpan.
In the following example, the derived class is named TopAlignSuperscriptSpan:

activity_main.xml:

<TextView
    android:id="@+id/txtView"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:layout_marginTop="50dp"
    android:textSize="26sp" />

MainActivity.java:

TextView txtView = (TextView) findViewById(R.id.txtView);
SpannableString spannableString = new SpannableString("RM123.456");
spannableString.setSpan(new TopAlignSuperscriptSpan((float) 0.35), 0, 2,
        Spanned.SPAN_EXCLUSIVE_EXCLUSIVE);
txtView.setText(spannableString);

TopAlignSuperscriptSpan.java:

private class TopAlignSuperscriptSpan extends SuperscriptSpan {
    // divide superscript by this number
    protected int fontScale = 2;

    // shift value, 0 to 1.0
    protected float shiftPercentage = 0;

    // doesn't shift
    TopAlignSuperscriptSpan() {}

    // sets the shift percentage
    TopAlignSuperscriptSpan(float shiftPercentage) {
        if (shiftPercentage > 0.0 && shiftPercentage < 1.0)
            this.shiftPercentage = shiftPercentage;
    }

    @Override
    public void updateDrawState(TextPaint tp) {
        // original ascent
        float ascent = tp.ascent();

        // scale down the font
        tp.setTextSize(tp.getTextSize() / fontScale);

        // get the new font ascent
        float newAscent = tp.getFontMetrics().ascent;

        // move baseline to top of old font, then move down size of new font
        // adjust for errors with shift percentage
        tp.baselineShift += (ascent - ascent * shiftPercentage)
                - (newAscent - newAscent * shiftPercentage);
    }

    @Override
    public void updateMeasureState(TextPaint tp) {
        updateDrawState(tp);
    }
}

Reference screenshot:

Section 4.5: Pinchzoom on TextView

activity_main.xml:

<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent"
    android:orientation="vertical" >

    <TextView
        android:id="@+id/mytv"
        android:layout_width="fill_parent"
        android:layout_height="wrap_content"
        android:layout_alignParentLeft="true"
        android:layout_alignParentTop="true"
        android:text="This is my sample text for pinch zoom demo, you can zoom in and out using pinch zoom, thanks" />

</RelativeLayout>

MainActivity.java:

import android.app.Activity;
import android.os.Bundle;
import android.view.MotionEvent;
import android.view.View;
import android.view.View.OnTouchListener;
import android.widget.TextView;

public class MyTextViewPinchZoomClass extends Activity implements OnTouchListener {

    final static float STEP = 200;
    TextView mytv;
    float mRatio = 1.0f;
    int mBaseDist;
    float mBaseRatio;
    float fontsize = 13;

    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        mytv = (TextView) findViewById(R.id.mytv);
        mytv.setTextSize(mRatio + 13);
    }

    public boolean onTouchEvent(MotionEvent event) {
        if (event.getPointerCount() == 2) {
            int action = event.getAction();
            int pureaction = action & MotionEvent.ACTION_MASK;
            if (pureaction == MotionEvent.ACTION_POINTER_DOWN) {
                mBaseDist = getDistance(event);
                mBaseRatio = mRatio;
            } else {
                float delta = (getDistance(event) - mBaseDist) / STEP;
                float multi = (float) Math.pow(2, delta);
                mRatio = Math.min(1024.0f, Math.max(0.1f, mBaseRatio * multi));
                mytv.setTextSize(mRatio + 13);
            }
        }
        return true;
    }

    int getDistance(MotionEvent event) {
        int dx = (int) (event.getX(0) - event.getX(1));
        int dy = (int) (event.getY(0) - event.getY(1));
        return (int) (Math.sqrt(dx * dx + dy * dy));
    }

    public boolean onTouch(View v, MotionEvent event) {
        return false;
    }
}

Section 4.6: TextView with different text sizes

You can achieve different text sizes inside a TextView with a span:

TextView textView = (TextView) findViewById(R.id.textView);
Spannable span = new SpannableString(textView.getText());
span.setSpan(new RelativeSizeSpan(0.8f), start, end, Spannable.SPAN_EXCLUSIVE_EXCLUSIVE);
textView.setText(span);

Section 4.7: Theme and Style customization

MainActivity.java:

public class MainActivity extends AppCompatActivity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
    }
}

activity_main.xml:

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:custom="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:gravity="center"
    android:orientation="vertical"
    tools:context=".MainActivity">

    <com.customthemeattributedemo.customview.CustomTextView
        style="?mediumTextStyle"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_margin="20dp"
        android:text="@string/message_hello"
        custom:font_family="@string/bold_font" />

    <com.customthemeattributedemo.customview.CustomTextView
        style="?largeTextStyle"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_margin="20dp"
        android:text="@string/message_hello"
        custom:font_family="@string/bold_font" />
</LinearLayout>

CustomTextView.java:

public class CustomTextView extends TextView {

    private static final String TAG = "TextViewPlus";
    private Context mContext;

    public CustomTextView(Context context) {
        super(context);
        mContext = context;
    }

    public CustomTextView(Context context, AttributeSet attrs) {
        super(context, attrs);
        mContext = context;
        setCustomFont(context, attrs);
    }

    public CustomTextView(Context context, AttributeSet attrs, int defStyle) {
        super(context, attrs, defStyle);
        mContext = context;
        setCustomFont(context, attrs);
    }

    private void setCustomFont(Context ctx, AttributeSet attrs) {
        TypedArray customFontNameTypedArray = ctx.obtainStyledAttributes(attrs, R.styleable.CustomTextView);
        String customFont = customFontNameTypedArray.getString(R.styleable.CustomTextView_font_family);
        Typeface typeface = null;
        typeface = Typeface.createFromAsset(ctx.getAssets(), customFont);
        setTypeface(typeface);
        customFontNameTypedArray.recycle();
    }
}

attrs.xml:

<?xml version="1.0" encoding="utf-8"?>
<resources>

    <attr name="mediumTextStyle" format="reference" />
    <attr name="largeTextStyle" format="reference" />

    <declare-styleable name="CustomTextView">
        <attr name="font_family" format="string" />
        <!-- Your other attributes -->
    </declare-styleable>
</resources>

strings.xml:

<resources>
    <string name="app_name">Custom Style Theme Attribute Demo</string>
    <string name="message_hello">Hello Hiren!</string>
    <string name="bold_font">bold.ttf</string>
</resources>

styles.xml:

<resources>

    <!-- Base application theme. -->
    <style name="AppTheme" parent="Theme.AppCompat.Light.DarkActionBar">
        <!-- Customize your theme here. -->
        <item name="colorPrimary">@color/colorPrimary</item>
        <item name="colorPrimaryDark">@color/colorPrimaryDark</item>
        <item name="colorAccent">@color/colorAccent</item>
        <item name="mediumTextStyle">@style/textMedium</item>
        <item name="largeTextStyle">@style/textLarge</item>
    </style>

    <style name="textMedium" parent="textParentStyle">
        <item name="android:textAppearance">@android:style/TextAppearance.Medium</item>
    </style>

    <style name="textLarge" parent="textParentStyle">
        <item name="android:textAppearance">@android:style/TextAppearance.Large</item>
    </style>

    <style name="textParentStyle">
        <item name="android:textColor">@android:color/white</item>
        <item name="android:background">@color/colorPrimary</item>
        <item name="android:padding">5dp</item>
    </style>

</resources>

Section 4.8: TextView customization

public class CustomTextView extends TextView {

    private float strokeWidth;
    private Integer strokeColor;
    private Paint.Join strokeJoin;
    private float strokeMiter;

    public CustomTextView(Context context) {
        super(context);
        init(null);
    }

    public CustomTextView(Context context, AttributeSet attrs) {
        super(context, attrs);
        init(attrs);
    }

    public CustomTextView(Context context, AttributeSet attrs, int defStyle) {
        super(context, attrs, defStyle);
        init(attrs);
    }

    public void init(AttributeSet attrs) {
        if (attrs != null) {
            TypedArray a = getContext().obtainStyledAttributes(attrs, R.styleable.CustomTextView);

            if (a.hasValue(R.styleable.CustomTextView_strokeColor)) {
                float strokeWidth = a.getDimensionPixelSize(R.styleable.CustomTextView_strokeWidth, 1);
                int strokeColor = a.getColor(R.styleable.CustomTextView_strokeColor, 0xff000000);
                float strokeMiter = a.getDimensionPixelSize(R.styleable.CustomTextView_strokeMiter, 10);
                Paint.Join strokeJoin = null;
                switch (a.getInt(R.styleable.CustomTextView_strokeJoinStyle, 0)) {
                    case (0):
                        strokeJoin = Paint.Join.MITER;
                        break;
                    case (1):
                        strokeJoin = Paint.Join.BEVEL;
                        break;
                    case (2):
                        strokeJoin = Paint.Join.ROUND;
                        break;
                }
                this.setStroke(strokeWidth, strokeColor, strokeJoin, strokeMiter);
            }
        }
    }

    public void setStroke(float width, int color, Paint.Join join, float miter) {
        strokeWidth = width;
        strokeColor = color;
        strokeJoin = join;
        strokeMiter = miter;
    }

    @Override
    public void onDraw(Canvas canvas) {
        super.onDraw(canvas);

        int restoreColor = this.getCurrentTextColor();
        if (strokeColor != null) {
            TextPaint paint = this.getPaint();
            paint.setStyle(Paint.Style.STROKE);
            paint.setStrokeJoin(strokeJoin);
            paint.setStrokeMiter(strokeMiter);
            this.setTextColor(strokeColor);
            paint.setStrokeWidth(strokeWidth);
            super.onDraw(canvas);
            paint.setStyle(Paint.Style.FILL);
            this.setTextColor(restoreColor);
        }
    }
}

Usage:

public class MainActivity extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        CustomTextView customTextView = (CustomTextView) findViewById(R.id.pager_title);
    }
}

Layout:

<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent"
    android:background="@mipmap/background">

    <pk.sohail.gallerytest.activity.CustomTextView
        android:id="@+id/pager_title"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_centerHorizontal="true"
        android:layout_centerVertical="true"
        android:gravity="center"
        android:text="@string/txt_title_photo_gallery"
        android:textColor="@color/white"
        android:textSize="30dp"
        android:textStyle="bold"
        app:outerShadowRadius="10dp"
        app:strokeColor="@color/title_text_color"
        app:strokeJoinStyle="miter"
        app:strokeWidth="2dp" />

</RelativeLayout>

attrs:

<?xml version="1.0" encoding="utf-8"?>
<resources>
    <declare-styleable name="CustomTextView">
        <attr name="outerShadowRadius" format="dimension" />
        <attr name="strokeWidth" format="dimension" />
        <attr name="strokeMiter" format="dimension" />
        <attr name="strokeColor" format="color" />
        <attr name="strokeJoinStyle">
            <enum name="miter" value="0" />
            <enum name="bevel" value="1" />
            <enum name="round" value="2" />
        </attr>
    </declare-styleable>
</resources>

Programmatic usage:

CustomTextView mtxt_name = (CustomTextView) findViewById(R.id.pager_title);
// then use setStroke(float width, int color, Paint.Join join, float miter)
// before calling setText("Sample Text");

Section 4.9: Single TextView with two different colors

Colored text can be created by passing the text and a font color name to the following function:

private String getColoredSpanned(String text, String color) {
    String input = "<font color=" + color + ">" + text + "</font>";
    return input;
}

The colored text can then be set to a TextView (or even to a Button, EditText, etc.) by using the example code below.

First, define a TextView as follows:

TextView txtView = (TextView) findViewById(R.id.txtView);

Then, create differently colored text and assign it to strings:

String name = getColoredSpanned("Hiren", "#800000");
String surName = getColoredSpanned("Patel", "#000080");

Finally, set the two differently colored strings to the TextView:

txtView.setText(Html.fromHtml(name + " " + surName));

Reference screenshot:

Chapter 5: AutoCompleteTextView

Section 5.1: AutoComplete with CustomAdapter, ClickListener and Filter

Main layout: activity_main.xml

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:id="@+id/activity_main"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <AutoCompleteTextView
        android:id="@+id/auto_name"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:completionThreshold="2"
        android:hint="@string/hint_enter_name" />
</LinearLayout>

Row layout: row.xml

<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical">

    <TextView
        android:id="@+id/lbl_name"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:paddingBottom="16dp"
        android:paddingLeft="8dp"
        android:paddingRight="8dp"
        android:paddingTop="16dp"
        android:text="Medium Text"
        android:textAppearance="?android:attr/textAppearanceMedium" />
</RelativeLayout>

strings.xml

<resources>
    <string name="hint_enter_name">Enter Name</string>
</resources>

MainActivity.java

public class MainActivity extends AppCompatActivity {

    AutoCompleteTextView txtSearch;
    List<People> mList;
    PeopleAdapter adapter;
    private People selectedPerson;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        mList = retrievePeople();
        txtSearch = (AutoCompleteTextView) findViewById(R.id.auto_name);
        adapter = new PeopleAdapter(this, R.layout.activity_main, R.id.lbl_name, mList);
        txtSearch.setAdapter(adapter);
        txtSearch.setOnItemClickListener(new AdapterView.OnItemClickListener() {
            @Override
            public void onItemClick(AdapterView<?> adapterView, View view, int pos, long id) {
                // this is the way to find the selected object/item
                selectedPerson = (People) adapterView.getItemAtPosition(pos);
            }
        });
    }

    private List<People> retrievePeople() {
        List<People> list = new ArrayList<People>();
        list.add(new People("James", "Bond", 1));
        list.add(new People("Jason", "Bourne", 2));
        list.add(new People("Ethan", "Hunt", 3));
        list.add(new People("Sherlock", "Holmes", 4));
        list.add(new People("David", "Beckham", 5));
        list.add(new People("Bryan", "Adams", 6));
        list.add(new People("Arjen", "Robben", 7));
        list.add(new People("Van", "Persie", 8));
        list.add(new People("Zinedine", "Zidane", 9));
        list.add(new People("Luis", "Figo", 10));
        list.add(new People("John", "Watson", 11));
        return list;
    }
}

Model class: People.java

public class People {

    private String name, lastName;
    private int id;

    public People(String name, String lastName, int id) {
        this.name = name;
        this.lastName = lastName;
        this.id = id;
    }

    public int getId() {
        return id;
    }

    public void setId(int id) {
        this.id = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public String getlastName() {
        return lastName;
    }

    public void setlastName(String lastName) {
        this.lastName = lastName;
    }
}

Adapter class: PeopleAdapter.java

public class PeopleAdapter extends ArrayAdapter<People> {

    Context context;
    int resource, textViewResourceId;
    List<People> items, tempItems, suggestions;

    public PeopleAdapter(Context context, int resource, int textViewResourceId, List<People> items) {
        super(context, resource, textViewResourceId, items);
        this.context = context;
        this.resource = resource;
        this.textViewResourceId = textViewResourceId;
        this.items = items;
        tempItems = new ArrayList<People>(items); // this makes the difference.
        suggestions = new ArrayList<People>();
    }

    @Override
    public View getView(int position, View convertView, ViewGroup parent) {
        View view = convertView;
        if (convertView == null) {
            LayoutInflater inflater = (LayoutInflater) context.getSystemService(Context.LAYOUT_INFLATER_SERVICE);
            view = inflater.inflate(R.layout.row, parent, false);
        }
        People people = items.get(position);
        if (people != null) {
            TextView lblName = (TextView) view.findViewById(R.id.lbl_name);
            if (lblName != null)
                lblName.setText(people.getName());
        }
        return view;
    }

    @Override
    public Filter getFilter() {
        return nameFilter;
    }

    /**
     * Custom Filter implementation for the custom suggestions we provide.
     */
    Filter nameFilter = new Filter() {
        @Override
        public CharSequence convertResultToString(Object resultValue) {
            String str = ((People) resultValue).getName();
            return str;
        }

        @Override
        protected FilterResults performFiltering(CharSequence constraint) {
            if (constraint != null) {
                suggestions.clear();
                for (People people : tempItems) {
                    if (people.getName().toLowerCase().contains(constraint.toString().toLowerCase())) {
                        suggestions.add(people);
                    }
                }
                FilterResults filterResults = new FilterResults();
                filterResults.values = suggestions;
                filterResults.count = suggestions.size();
                return filterResults;
            } else {
                return new FilterResults();
            }
        }

        @Override
        protected void publishResults(CharSequence constraint, FilterResults results) {
            // check results before dereferencing them
            if (results != null && results.count > 0) {
                List<People> filterList = (ArrayList<People>) results.values;
                clear();
                for (People people : filterList) {
                    add(people);
                }
                notifyDataSetChanged();
            }
        }
    };
}

Section 5.2: Simple, hard-coded AutoCompleteTextView

Design (layout XML):

<AutoCompleteTextView
    android:id="@+id/autoCompleteTextView1"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:layout_alignParentTop="true"
    android:layout_centerHorizontal="true"
    android:layout_marginTop="65dp"
    android:ems="10" />

Find the view in code after setContentView() (or its fragment or custom view equivalent):

final AutoCompleteTextView myAutoCompleteTextView =
        (AutoCompleteTextView) findViewById(R.id.autoCompleteTextView1);

Provide hard-coded data via an adapter:

String[] countries = getResources().getStringArray(R.array.list_of_countries);
ArrayAdapter<String> adapter =
        new ArrayAdapter<String>(this, android.R.layout.simple_list_item_1, countries);
myAutoCompleteTextView.setAdapter(adapter);

Tip: The preferred way would be to provide data via a Loader of some kind instead of a hard-coded list like this.

Chapter 6: Autosizing TextViews

A TextView that automatically resizes text to fit perfectly within its bounds.

Android O allows you to instruct a TextView to let the size of the text expand or contract automatically to fill its layout based on the TextView's characteristics and boundaries. You can set up the TextView autosizing in either code or XML.

There are two ways to set up an autosizing TextView: Granularity and Preset Sizes.

Section 6.1: Granularity

In Java: Call the setAutoSizeTextTypeUniformWithConfiguration() method:

setAutoSizeTextTypeUniformWithConfiguration(int autoSizeMinTextSize, int autoSizeMaxTextSize, int autoSizeStepGranularity, int unit)

In XML: Use the autoSizeMinTextSize, autoSizeMaxTextSize, and autoSizeStepGranularity attributes to set the autosizing dimensions in the layout XML file:

<TextView
    android:id="@+id/autosizing_textview_presetsize"
    android:layout_width="wrap_content"
    android:layout_height="250dp"
    android:layout_marginLeft="0dp"
    android:layout_marginTop="0dp"
    android:autoSizeMaxTextSize="100sp"
    android:autoSizeMinTextSize="12sp"
    android:autoSizeStepGranularity="2sp"
    android:autoSizeTextType="uniform"
    android:text="Hello World!"
    android:textSize="100sp"
    app:layout_constraintLeft_toLeftOf="parent"
    app:layout_constraintTop_toTopOf="parent" />

Check out the AutosizingTextViews-Demo at GitHub for more details.
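To see the granularity configuration applied from code, here is a minimal sketch. It assumes a TextView with the hypothetical ID autosizing_textview in the current layout and a device running API 26 or higher; on older devices the equivalent call is available through TextViewCompat from support library 26.0 or later.

TextView autoTextView = (TextView) findViewById(R.id.autosizing_textview); // hypothetical ID
// Autosize uniformly between 12sp and 100sp in 2sp steps,
// mirroring the XML attributes shown above (API 26+).
autoTextView.setAutoSizeTextTypeUniformWithConfiguration(
        12, 100, 2, TypedValue.COMPLEX_UNIT_SP);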
Section 6.2: Preset Sizes

In Java: Call the setAutoSizeTextTypeUniformWithPresetSizes() method:

setAutoSizeTextTypeUniformWithPresetSizes(int[] presetSizes, int unit)

In XML: Use the autoSizePresetSizes attribute in the layout XML file:

<TextView
    android:id="@+id/autosizing_textview_presetsize"
    android:layout_width="wrap_content"
    android:layout_height="250dp"
    android:layout_marginLeft="0dp"
    android:layout_marginTop="0dp"
    android:autoSizeTextType="uniform"
    android:autoSizePresetSizes="@array/autosize_text_sizes"
    android:text="Hello World!"
    android:textSize="100sp"
    app:layout_constraintLeft_toLeftOf="parent"
    app:layout_constraintTop_toTopOf="parent" />

To access the array as a resource, define the array in the res/values/arrays.xml file:

<array name="autosize_text_sizes">
    <item>10sp</item>
    <item>12sp</item>
    <item>20sp</item>
    <item>40sp</item>
    <item>100sp</item>
</array>

Check out the AutosizingTextViews-Demo at GitHub for more details.

Chapter 7: ListView

ListView is a ViewGroup that groups several items from a data source, such as an array or database, and displays them in a scrollable list. Data are bound to the ListView using an Adapter class.

Section 7.1: Custom ArrayAdapter

By default the ArrayAdapter class creates a view for each array item by calling toString() on each item and placing the contents in a TextView.

To create a complex view for each item (for example, if you want an ImageView for each array item), extend the ArrayAdapter class and override the getView() method to return the type of View you want for each item. For example:

public class MyAdapter extends ArrayAdapter<YourClassData> {

    private LayoutInflater inflater;

    public MyAdapter(Context context, List<YourClassData> data) {
        super(context, 0, data);
        inflater = LayoutInflater.from(context);
    }

    @Override
    public long getItemId(int position) {
        // It is just an example
        YourClassData data = (YourClassData) getItem(position);
        return data.ID;
    }

    @Override
    public View getView(int position, View view, ViewGroup parent) {
        ViewHolder viewHolder;
        if (view == null) {
            view = inflater.inflate(R.layout.custom_row_layout_design, null);
            // Do some initialization

            // Retrieve the views on the item layout and set the values.
            viewHolder = new ViewHolder(view);
            view.setTag(viewHolder);
        } else {
            viewHolder = (ViewHolder) view.getTag();
        }

        // Retrieve your object
        YourClassData data = (YourClassData) getItem(position);

        viewHolder.txt.setTypeface(m_Font);
        viewHolder.txt.setText(data.text);
        viewHolder.img.setImageBitmap(BitmapFactory.decodeFile(data.imageAddr));

        return view;
    }

    private class ViewHolder {
        private final TextView txt;
        private final ImageView img;

        private ViewHolder(View view) {
            txt = (TextView) view.findViewById(R.id.txt);
            img = (ImageView) view.findViewById(R.id.img);
        }
    }
}

Section 7.2: A basic ListView with an ArrayAdapter

By default the ArrayAdapter creates a view for each array item by calling toString() on each item and placing the contents in a TextView.

Example:

ArrayAdapter<String> adapter = new ArrayAdapter<String>(this,
        android.R.layout.simple_list_item_1, myStringArray);

where android.R.layout.simple_list_item_1 is the layout that contains a TextView for each string in the array.

Then simply call setAdapter() on your ListView:

ListView listView = (ListView) findViewById(R.id.listview);
listView.setAdapter(adapter);

To use something other than TextViews for the array display, for instance, ImageViews, or to have some data besides the toString() results fill the views, override getView(int, View, ViewGroup) to return the type of view you want. Check this example.

Section 7.3: Filtering with CursorAdapter

// Get the reference to your ListView
ListView listResults = (ListView) findViewById(R.id.listResults);

// Set its adapter
listResults.setAdapter(adapter);

// Enable filtering in ListView
listResults.setTextFilterEnabled(true);

// Prepare your adapter for filtering
adapter.setFilterQueryProvider(new FilterQueryProvider() {
    @Override
    public Cursor runQuery(CharSequence constraint) {
        // in real life, do something more secure than concatenation,
        // but it will depend on your schema.
        // This is the query that will run on filtering
        String query = "SELECT _ID as _id, name FROM MYTABLE "
                + "where name like '%" + constraint + "%' "
                + "ORDER BY NAME ASC";
        return db.rawQuery(query, null);
    }
});

Let's say your query will run every time the user types in an EditText:

EditText queryText = (EditText) findViewById(R.id.textQuery);
queryText.addTextChangedListener(new TextWatcher() {
    @Override
    public void beforeTextChanged(final CharSequence s, final int start, final int count, final int after) {
    }

    @Override
    public void onTextChanged(final CharSequence s, final int start, final int before, final int count) {
        // This is the filter in action
        adapter.getFilter().filter(s.toString());
        // Don't forget to notify the adapter
        adapter.notifyDataSetChanged();
    }

    @Override
    public void afterTextChanged(final Editable s) {
    }
});

Chapter 8: Layouts

A layout defines the visual structure for a user interface, such as an activity or widget. A layout is declared in XML, including the screen elements that will appear in it. Code can be added to the application to modify the state of screen objects at runtime, including those declared in XML.

Section 8.1: LayoutParams

Every single ViewGroup (e.g. LinearLayout, RelativeLayout, CoordinatorLayout, etc.) needs to store information about its children's properties, i.e. about the way its children are being laid out in the ViewGroup. This information is stored in objects of the wrapper class ViewGroup.LayoutParams.

To include parameters specific to a particular layout type, ViewGroups use subclasses of the ViewGroup.LayoutParams class. E.g. for

LinearLayout it's LinearLayout.LayoutParams
RelativeLayout it's RelativeLayout.LayoutParams
CoordinatorLayout it's CoordinatorLayout.LayoutParams
...

Most ViewGroups reuse the ability to set margins for their children, so they do not subclass ViewGroup.LayoutParams directly, but subclass ViewGroup.MarginLayoutParams instead (which itself is a subclass of ViewGroup.LayoutParams).

LayoutParams in XML

LayoutParams objects are created based on the inflated layout XML file.

<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical">

    <TextView
        android:layout_width="wrap_content"
        android:layout_height="50dp"
        android:layout_gravity="right"
        android:gravity="bottom"
        android:text="Example text"
        android:textColor="@android:color/holo_green_dark" />

    <ImageView
        android:layout_width="match_parent"
        android:layout_height="0dp"
        android:layout_weight="1"
        android:background="@android:color/holo_green_dark"
        android:scaleType="centerInside"
        android:src="@drawable/example" />

</LinearLayout>

All parameters that begin with layout_ specify how the enclosing layout should work. When the layout is inflated, those parameters are wrapped in a proper LayoutParams object, which will later be used by the layout to properly position a particular View within the ViewGroup. Other attributes of a View are directly View-related and are processed by the View itself.

For the TextView:

layout_width, layout_height and layout_gravity will be stored in a LinearLayout.LayoutParams object and used by the LinearLayout
gravity, text and textColor will be used by the TextView itself

For the ImageView:

layout_width, layout_height and layout_weight will be stored in a LinearLayout.LayoutParams object and used by the LinearLayout
background, scaleType and src will be used by the ImageView itself

Getting the LayoutParams object

getLayoutParams is a View's method that allows you to retrieve the current LayoutParams object. Because the LayoutParams object is directly related to the enclosing ViewGroup, this method will return a non-null value only when the View is attached to the ViewGroup. You need to bear in mind that this object might not be present at all times. In particular, you should not depend on having it inside the View's constructor.

public class ExampleView extends View {

    public ExampleView(Context context) {
        super(context);
        setupView(context);
    }

    public ExampleView(Context context, AttributeSet attrs) {
        super(context, attrs);
        setupView(context);
    }

    public ExampleView(Context context, AttributeSet attrs, int defStyle) {
        super(context, attrs, defStyle);
        setupView(context);
    }

    private void setupView(Context context) {
        if (getLayoutParams().height == 50) { // DO NOT DO THIS!
                                              // This might produce NullPointerException
            doSomething();
        }
    }

    //...
}

If you want to depend on having the LayoutParams object, you should use the onAttachedToWindow method instead.

public class ExampleView extends View {

    public ExampleView(Context context) {
        super(context);
    }

    public ExampleView(Context context, AttributeSet attrs) {
        super(context, attrs);
    }

    public ExampleView(Context context, AttributeSet attrs, int defStyle) {
        super(context, attrs, defStyle);
    }

    @Override
    protected void onAttachedToWindow() {
        super.onAttachedToWindow();
        if (getLayoutParams().height == 50) { // getLayoutParams() will NOT return null here
            doSomething();
        }
    }

    //...
}

Casting the LayoutParams object

You might need to use features that are specific to a particular ViewGroup (e.g. you might want to programmatically change the rules of a RelativeLayout). For that purpose you will need to know how to properly cast the ViewGroup.LayoutParams object.

This might be a bit confusing when getting a LayoutParams object for a child View that actually is another ViewGroup.

<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:id="@+id/outer_layout"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical">

    <FrameLayout
        android:id="@+id/inner_layout"
        android:layout_width="match_parent"
        android:layout_height="50dp"
        android:layout_gravity="right" />

</LinearLayout>

IMPORTANT: The type of the LayoutParams object is directly related to the type of the ENCLOSING ViewGroup.

Incorrect casting:

FrameLayout innerLayout = (FrameLayout) findViewById(R.id.inner_layout);
FrameLayout.LayoutParams par = (FrameLayout.LayoutParams) innerLayout.getLayoutParams();
// INCORRECT! This will produce ClassCastException

Correct casting:

FrameLayout innerLayout = (FrameLayout) findViewById(R.id.inner_layout);
LinearLayout.LayoutParams par = (LinearLayout.LayoutParams) innerLayout.getLayoutParams();
// CORRECT! the enclosing layout is a LinearLayout

Section 8.2: Gravity and layout gravity

android:layout_gravity

android:layout_gravity is used to set the position of an element in its parent (e.g. a child View inside a Layout). Supported by LinearLayout and FrameLayout.

android:gravity

android:gravity is used to set the position of content inside an element (e.g. the text inside a TextView).

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:paddingBottom="@dimen/activity_vertical_margin"
    android:paddingLeft="@dimen/activity_horizontal_margin"
    android:paddingRight="@dimen/activity_horizontal_margin"
    android:paddingTop="@dimen/activity_vertical_margin"
    android:orientation="vertical">

    <LinearLayout
        android:layout_width="wrap_content"
        android:layout_height="0dp"
        android:layout_weight="1"
        android:orientation="vertical"
        android:layout_gravity="left"
        android:gravity="center_vertical">

        <TextView
            android:layout_width="@dimen/fixed"
            android:layout_height="wrap_content"
            android:text="@string/first"
            android:background="@color/colorPrimary"
            android:gravity="left" />

        <TextView
            android:layout_width="@dimen/fixed"
            android:layout_height="wrap_content"
            android:text="@string/second"
            android:background="@color/colorPrimary"
            android:gravity="center" />

        <TextView
            android:layout_width="@dimen/fixed"
            android:layout_height="wrap_content"
            android:text="@string/third"
            android:background="@color/colorPrimary"
            android:gravity="right" />

    </LinearLayout>

    <LinearLayout
        android:layout_width="wrap_content"
        android:layout_height="0dp"
        android:layout_weight="1"
        android:orientation="vertical"
        android:layout_gravity="center"
        android:gravity="center_vertical">

        <TextView
            android:layout_width="@dimen/fixed"
            android:layout_height="wrap_content"
            android:text="@string/first"
            android:background="@color/colorAccent"
            android:gravity="left" />

        <TextView
            android:layout_width="@dimen/fixed"
            android:layout_height="wrap_content"
            android:text="@string/second"
            android:background="@color/colorAccent"
            android:gravity="center" />

        <TextView
            android:layout_width="@dimen/fixed"
            android:layout_height="wrap_content"
            android:text="@string/third"
            android:background="@color/colorAccent"
            android:gravity="right" />

    </LinearLayout>

    <LinearLayout
        android:layout_width="wrap_content"
        android:layout_height="0dp"
        android:layout_weight="1"
        android:orientation="vertical"
        android:layout_gravity="right"
        android:gravity="center_vertical">

        <TextView
            android:layout_width="@dimen/fixed"
            android:layout_height="wrap_content"
            android:text="@string/first"
            android:background="@color/colorPrimaryDark"
            android:gravity="left" />

        <TextView
            android:layout_width="@dimen/fixed"
            android:layout_height="wrap_content"
            android:text="@string/second"
            android:background="@color/colorPrimaryDark"
            android:gravity="center" />

        <TextView
            android:layout_width="@dimen/fixed"
            android:layout_height="wrap_content"
            android:text="@string/third"
            android:background="@color/colorPrimaryDark"
            android:gravity="right" />

    </LinearLayout>

</LinearLayout>

Which gets rendered as follows:

Section 8.3: CoordinatorLayout Scrolling Behavior

Version 2.3-2.3.2

An enclosing CoordinatorLayout can be used to achieve Material Design scrolling effects when using inner layouts that support nested scrolling, such as NestedScrollView or RecyclerView.

For this example:

app:layout_scrollFlags="scroll|enterAlways" is used in the Toolbar properties
app:layout_behavior="@string/appbar_scrolling_view_behavior" is used in the ViewPager properties
A RecyclerView is used in the ViewPager Fragments

Here is the layout XML file used in an Activity:

<android.support.design.widget.CoordinatorLayout
    android:id="@+id/main_layout"
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".MainActivity">

    <android.support.design.widget.AppBarLayout
        android:id="@+id/appBarLayout"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        app:elevation="6dp">

        <android.support.v7.widget.Toolbar
            android:id="@+id/toolbar"
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:layout_alignParentTop="true"
            android:background="?attr/colorPrimary"
            android:minHeight="?attr/actionBarSize"
            android:theme="@style/ThemeOverlay.AppCompat.Dark.ActionBar"
            app:popupTheme="@style/ThemeOverlay.AppCompat.Light"
            app:elevation="0dp"
            app:layout_scrollFlags="scroll|enterAlways" />

        <android.support.design.widget.TabLayout
            android:id="@+id/tab_layout"
            app:tabMode="fixed"
            android:layout_below="@+id/toolbar"
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:background="?attr/colorPrimary"
            app:elevation="0dp"
            app:tabTextColor="#d3d3d3"
            android:minHeight="?attr/actionBarSize" />

    </android.support.design.widget.AppBarLayout>

    <android.support.v4.view.ViewPager
        android:id="@+id/viewpager"
        android:layout_below="@+id/tab_layout"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        app:layout_behavior="@string/appbar_scrolling_view_behavior" />

</android.support.design.widget.CoordinatorLayout>

Result:

Section 8.4: Percent Layouts

Version 2.3

The Percent Support Library provides PercentFrameLayout and PercentRelativeLayout, two ViewGroups that provide an easy way to specify View dimensions and margins in terms of a percentage of the overall size.

You can use the Percent Support Library by adding the following to your dependencies:

compile 'com.android.support:percent:25.3.1'

If you wanted to display a view that fills the screen horizontally but only half the screen vertically, you would do the following:

<android.support.percent.PercentFrameLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <FrameLayout
        app:layout_widthPercent="100%"
        app:layout_heightPercent="50%"
        android:background="@android:color/black" />

</android.support.percent.PercentFrameLayout>

You can also define the percentages in a separate XML file with code such as:

<fraction name="margin_start_percent">25%</fraction>

And refer to them in your layouts with @fraction/margin_start_percent.

The percent layouts also contain the ability to set a custom aspect ratio via app:layout_aspectRatio. This allows you to set only a single dimension, such as only the width, and the height will be automatically determined based on the aspect ratio you've defined, whether it is 4:3 or 16:9 or even a square 1:1 aspect ratio.

For example:

<ImageView
    app:layout_widthPercent="100%"
    app:layout_aspectRatio="178%"
    android:scaleType="centerCrop"
    android:src="@drawable/header_background" />

Section 8.5: View Weight

One of the most used attributes for LinearLayout is the weight of its child views. Weight defines how much space a view will consume compared to other views within a LinearLayout. Weight is used when you want to give specific screen space to one component compared to another.

Key properties:

weightSum is the overall sum of the weights of all child views. If you don't specify the weightSum, the system will calculate the sum of all the weights on its own.
layout_weight specifies the amount of space out of the total weight sum that the widget will occupy.

Code:

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:id="@+id/activity_main"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="horizontal"
    android:weightSum="4">

    <EditText
        android:layout_weight="2"
        android:layout_width="0dp"
        android:layout_height="wrap_content"
        android:text="Type Your Text Here" />

    <Button
        android:layout_weight="1"
        android:layout_width="0dp"
        android:layout_height="wrap_content"
        android:text="Text1" />

    <Button
        android:layout_weight="1"
        android:layout_width="0dp"
        android:layout_height="wrap_content"
        android:text="Text1" />

</LinearLayout>

The output is:

Now even if the size of the device is larger, the EditText will take 2/4 of the screen's space. Hence the look of your app is consistent across all screens.

Note: Here the layout_width is kept at 0dp because the widget space is divided horizontally. If the widgets were to be aligned vertically, layout_height would be set to 0dp. This is done to increase the efficiency of the code, because at runtime the system won't attempt to calculate the width or height respectively, as this is managed by the weight. If you instead used wrap_content, the system would attempt to calculate the width/height first before applying the weight attribute, which causes another calculation cycle.
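Weights can also be assigned from code via the LinearLayout.LayoutParams(width, height, weight) constructor, which leads into the next section on building a LinearLayout programmatically. A minimal sketch, assuming an existing horizontal LinearLayout named rootView and a Context named context (both hypothetical names):

// Width 0 plus weight 1f is the programmatic equivalent of
// android:layout_width="0dp" combined with android:layout_weight="1".
Button button = new Button(context);
LinearLayout.LayoutParams params =
        new LinearLayout.LayoutParams(0, LinearLayout.LayoutParams.WRAP_CONTENT, 1f);
rootView.addView(button, params);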
Section 8.6: Creating LinearLayout programmatically Hierarchy GoalKicker.com Android Notes for Professionals 64 - LinearLayout(horizontal) - ImageView - LinearLayout(vertical) - TextView - TextView Code LinearLayout rootView = new LinearLayout(context); rootView.setLayoutParams(new LinearLayout.LayoutParams(LayoutParams.MATCH_PARENT, LayoutParams.WRAP_CONTENT)); rootView.setOrientation(LinearLayout.HORIZONTAL); // for imageview ImageView imageView = new ImageView(context); // for horizontal linearlayout LinearLayout linearLayout2 = new LinearLayout(context); linearLayout2.setLayoutParams(new LinearLayout.LayoutParams(LayoutParams.MATCH_PARENT, LayoutParams.WRAP_CONTENT)); linearLayout2.setOrientation(LinearLayout.VERTICAL); TextView tv1 = new TextView(context); TextView tv2 = new TextView(context); // add 2 textview to horizontal linearlayout linearLayout2.addView(tv1); linearLayout2.addView(tv2); // finally, add imageview and horizontal linearlayout to vertical linearlayout (rootView) rootView.addView(imageView); rootView.addView(linearLayout2); Section 8.7: LinearLayout The LinearLayout is a ViewGroup that arranges its children in a single column or a single row. The orientation can be set by calling the method setOrientation() or using the xml attribute android:orientation. 1. Vertical orientation : android:orientation="vertical" <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:orientation="vertical" android:layout_width="match_parent" android:layout_height="match_parent"> <TextView android:layout_width="match_parent" android:layout_height="wrap_content" android:text="@string/app_name" /> <TextView android:layout_width="match_parent" android:layout_height="wrap_content" android:text="@android:string/cancel" /> </LinearLayout> Here is a screenshot how this will look like: GoalKicker.com Android Notes for Professionals 65 2. Horizontal orientation : android:orientation="horizontal" <TextView android:layout_width="match_parent" android:layout_height="wrap_content" android:text="@string/app_name" /> <TextView android:layout_width="match_parent" android:layout_height="wrap_content" android:text="@android:string/cancel" /> The LinearLayout also supports assigning a weight to individual children with the android:layout_weight attribute. Section 8.8: RelativeLayout RelativeLayout is a ViewGroup that displays child views in relative positions. By default, all child views are drawn at GoalKicker.com Android Notes for Professionals 66 the top-left of the layout, so you must dene the position of each view using the various layout properties available from RelativeLayout.LayoutParams. The value for each layout property is either a boolean to enable a layout position relative to the parent RelativeLayout or an ID that references another view in the layout against which the view should be positioned. 
Example: <?xml version="1.0" encoding="utf-8"?> <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="match_parent" android:layout_height="match_parent"> <ImageView android:layout_width="wrap_content" android:layout_height="wrap_content" android:id="@+id/imageView" android:src="@mipmap/ic_launcher" /> <EditText android:layout_width="match_parent" android:layout_height="wrap_content" android:id="@+id/editText" android:layout_toRightOf="@+id/imageView" android:layout_toEndOf="@+id/imageView" android:hint="@string/hint" /> </RelativeLayout> Here is a screenshot how this will look like: GoalKicker.com Android Notes for Professionals 67 Section 8.9: FrameLayout FrameLayout is designed to block out an area on the screen to display a single item. You can, however, add multiple children to a FrameLayout and control their position within the FrameLayout by assigning gravity to each child, using the android:layout_gravity attribute. Generally, FrameLayout is used to hold a single child view. Common use cases are creating place holders for inating Fragments in Activity, overlapping views or applying foreground to the views. Example: <FrameLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="match_parent" android:layout_height="match_parent"> <ImageView android:src="@drawable/nougat" android:scaleType="fitCenter" android:layout_height="match_parent" android:layout_width="match_parent"/> GoalKicker.com Android Notes for Professionals 68 <TextView android:text="FrameLayout Example" android:textSize="30sp" android:textStyle="bold" android:layout_height="match_parent" android:layout_width="match_parent" android:gravity="center"/> </FrameLayout> It will look like this: Section 8.10: GridLayout GridLayout, as the name suggests is a layout used to arrange Views in a grid. A GridLayout divides itself into columns and rows. As you can see in the example below, the amount of columns and/or rows is specied by the properties columnCount and rowCount. Adding Views to this layout will add the rst view to the rst column, the second view to the second column, and the third view to the rst column of the second row. 
<?xml version="1.0" encoding="utf-8"?> <GridLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="match_parent" android:layout_height="match_parent" GoalKicker.com Android Notes for Professionals 69 android:paddingBottom="@dimen/activity_vertical_margin" android:paddingLeft="@dimen/activity_horizontal_margin" android:paddingRight="@dimen/activity_horizontal_margin" android:paddingTop="@dimen/activity_vertical_margin" android:columnCount="2" android:rowCount="2"> <TextView android:layout_width="@dimen/fixed" android:layout_height="wrap_content" android:text="@string/first" android:background="@color/colorPrimary" android:layout_margin="@dimen/default_margin" /> <TextView android:layout_width="@dimen/fixed" android:layout_height="wrap_content" android:text="@string/second" android:background="@color/colorPrimary" android:layout_margin="@dimen/default_margin" /> <TextView android:layout_width="@dimen/fixed" android:layout_height="wrap_content" android:text="@string/third" android:background="@color/colorPrimary" android:layout_margin="@dimen/default_margin" /> </GridLayout> GoalKicker.com Android Notes for Professionals 70 Section 8.11: CoordinatorLayout Version 2.3 The CoordinatorLayout is a container somewhat similar to FrameLayout but with extra capabilities, it is called super-powered FrameLayout in the ocial documentation. By attaching a CoordinatorLayout.Behavior to a direct child of CoordinatorLayout, youll be able to intercept touch events, window insets, measurement, layout, and nested scrolling. In order to use it, you will rst have to add a dependency for the support library in your gradle le: compile 'com.android.support:design:25.3.1' The number of the latest version of the library may be found here One practical use case of the CoordinatorLayout is creating a view with a FloatingActionButton. In this specic case, we will create a RecyclerView with a SwipeRefreshLayout and a FloatingActionButton on top of that. 
Here's how you can do that: GoalKicker.com Android Notes for Professionals 71 <?xml version="1.0" encoding="utf-8"?> <android.support.design.widget.CoordinatorLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:app="http://schemas.android.com/apk/res-auto" android:id="@+id/coord_layout" android:layout_width="match_parent" android:layout_height="match_parent" android:orientation="horizontal"> <android.support.v4.widget.SwipeRefreshLayout android:id="@+id/swipe_refresh_layout" android:layout_width="match_parent" android:layout_height="match_parent"> <android.support.v7.widget.RecyclerView android:layout_width="match_parent" android:layout_height="wrap_content" android:id="@+id/recycler_view"/> </android.support.v4.widget.SwipeRefreshLayout> <android.support.design.widget.FloatingActionButton android:id="@+id/fab" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_margin="16dp" android:clickable="true" android:color="@color/colorAccent" android:src="@mipmap/ic_add_white" android:layout_gravity="end|bottom" app:layout_anchorGravity="bottom|right|end"/> </android.support.design.widget.CoordinatorLayout> Notice how the FloatingActionButton is anchored to the CoordinatorLayout with app:layout_anchor="@id/coord_layout" GoalKicker.com Android Notes for Professionals 72 Chapter 9: ConstraintLayout Parameter child Details The View to be added to the layout index The index of the View in the layout hierarchy params The LayoutParams of the View attrs The AttributeSet that denes the LayoutParams view The View that has been added or removed changed Indicates if this View has changed size or position left The left position, relative to the parent View top The top position, relative to the parent View right The right position, relative to the parent View bottom The bottom position, relative to the parent View widthMeasureSpec The horizontal space requirements imposed by the parent View heightMeasureSpec The vertical space requirements imposed by the parent View layoutDirection - a - widthAttr - heightAttr - ConstraintLayout is a ViewGroup which allows you to position and size widgets in a exible way. It is compatible with Android 2.3 (API level 9) and higher. It allows you to create large and complex layouts with a at view hierarchy. It is similar to RelativeLayout in that all views are laid out according to relationships between sibling views and the parent layout, but it's more exible than RelativeLayout and easier to use with Android Studio's Layout Editor. Section 9.1: Adding ConstraintLayout to your project To work with ConstraintLayout, you need Android Studio Version 2.2 or newer and have at least version 32 (or higher) of Android Support Repository. 1. Add the Constraint Layout library as a dependency in your build.gradle le: dependencies { compile 'com.android.support.constraint:constraint-layout:1.0.2' } 2. Sync project To add a new constraint layout to your project: 1. Right-click on your module's layout directory, then click New > XML > Layout XML. 2. Enter a name for the layout and enter "android.support.constraint.ConstraintLayout" for the Root Tag. 3. Click Finish. 
Otherwise just add in a layout le: <?xml version="1.0" encoding="utf-8"?> <android.support.constraint.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android" GoalKicker.com Android Notes for Professionals 73 xmlns:app="http://schemas.android.com/apk/res-auto" android:layout_width="match_parent" android:layout_height="match_parent"> </android.support.constraint.ConstraintLayout> Section 9.2: Chains Since ConstraintLayout alpha 9, Chains are available. A Chain is a set of views inside a ConstraintLayout that are connected in a bi-directional way between them, i.e A connected to B with a constraint, and B connected to A with another constraint. Example: <android.support.constraint.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:app="http://schemas.android.com/apk/res-auto" android:layout_width="match_parent" android:layout_height="match_parent"> <!-- this view is linked to the bottomTextView --> <TextView android:id="@+id/topTextView" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="TextView" app:layout_constraintBottom_toTopOf="@+id/bottomTextView" app:layout_constraintTop_toTopOf="parent" app:layout_constraintVertical_chainPacked="true"/> <!-- this view is linked to the topTextView at the same time --> <TextView android:id="@+id/bottomTextView" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Bottom\nMkay" app:layout_constraintBottom_toBottomOf="parent" app:layout_constraintTop_toBottomOf="@+id/topTextView"/> </android.support.constraint.ConstraintLayout> In this example, the two views are positioned one under another and both of them are centered vertically. You may change the vertical position of these views by adjusting the chain's bias. Add the following code to the rst element of a chain: app:layout_constraintVertical_bias="0.2" In a vertical chain, the rst element is a top-most view, and in a horizontal chain it is the left-most view. The rst element denes the whole chain's behavior. Chains are a new feature and are updated frequently. Here is an ocial Android Documentation on Chains. GoalKicker.com Android Notes for Professionals 74 Chapter 10: TextInputLayout TextInputLayout was introduced to display the oating label on EditText. The EditText has to be wrapped by TextInputLayout in order to display the oating label. Section 10.1: Basic usage It is the basic usage of the TextInputLayout. Make sure to add the dependency in the build.gradle le as described in the remarks section. Example: <android.support.design.widget.TextInputLayout android:layout_width="match_parent" android:layout_height="wrap_content"> <EditText android:layout_width="match_parent" android:layout_height="wrap_content" android:hint="@string/username"/> </android.support.design.widget.TextInputLayout> Section 10.2: Password Visibility Toggles With an input password type, you can also enable an icon that can show or hide the entire text using the passwordToggleEnabled attribute. You can also customize same default using these attributes: passwordToggleDrawable: to change the default eye icon passwordToggleTint: to apply a tint to the password visibility toggle drawable. passwordToggleTintMode: to specify the blending mode used to apply the background tint. 
Chapter 10: TextInputLayout

TextInputLayout was introduced to display the floating label on an EditText. The EditText has to be wrapped by a TextInputLayout in order to display the floating label.

Section 10.1: Basic usage

This is the basic usage of the TextInputLayout. Make sure to add the dependency in the build.gradle file as described in the remarks section.

Example:

<android.support.design.widget.TextInputLayout
    android:layout_width="match_parent"
    android:layout_height="wrap_content">

    <EditText
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:hint="@string/username"/>

</android.support.design.widget.TextInputLayout>

Section 10.2: Password Visibility Toggles

With an input password type, you can also enable an icon that can show or hide the entire text using the passwordToggleEnabled attribute.

You can also customize the defaults using these attributes:

passwordToggleDrawable: to change the default eye icon
passwordToggleTint: to apply a tint to the password visibility toggle drawable
passwordToggleTintMode: to specify the blending mode used to apply the background tint

Example:

<android.support.design.widget.TextInputLayout
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    app:passwordToggleContentDescription="@string/description"
    app:passwordToggleDrawable="@drawable/another_toggle_drawable"
    app:passwordToggleEnabled="true">

    <EditText/>

</android.support.design.widget.TextInputLayout>

Section 10.3: Adding Character Counting

The TextInputLayout has a character counter for an EditText defined within it. The counter will be rendered below the EditText. Just use the setCounterEnabled() and setCounterMaxLength() methods:

TextInputLayout til = (TextInputLayout) findViewById(R.id.username);
til.setCounterEnabled(true);
til.setCounterMaxLength(15);

or the app:counterEnabled and app:counterMaxLength attributes in the XML:

<android.support.design.widget.TextInputLayout
    app:counterEnabled="true"
    app:counterMaxLength="15">

    <EditText/>

</android.support.design.widget.TextInputLayout>

Section 10.4: Handling Errors

You can use the TextInputLayout to display error messages according to the material design guidelines using the setError and setErrorEnabled methods.

In order to show the error below the EditText use:

TextInputLayout til = (TextInputLayout) findViewById(R.id.username);
til.setErrorEnabled(true);
til.setError("You need to enter a name");

To enable error display in the TextInputLayout you can either use app:errorEnabled="true" in the XML or til.setErrorEnabled(true); as shown above.

You will obtain:

Section 10.5: Customizing the appearance of the TextInputLayout

You can customize the appearance of the TextInputLayout and its embedded EditText by defining custom styles in your styles.xml. The defined styles can either be added as styles or themes to your TextInputLayout.

Example for customizing the hint appearance:

styles.xml:

<!-- Floating label text style -->
<style name="MyHintStyle" parent="TextAppearance.AppCompat.Small">
    <item name="android:textColor">@color/black</item>
</style>

<!-- Input field style -->
<style name="MyEditText" parent="Theme.AppCompat.Light">
    <item name="colorControlNormal">@color/indigo</item>
    <item name="colorControlActivated">@color/pink</item>
</style>

To apply the styles, update your TextInputLayout and EditText as follows:

<android.support.design.widget.TextInputLayout
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    app:hintTextAppearance="@style/MyHintStyle">

    <EditText
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:hint="@string/Title"
        android:theme="@style/MyEditText" />

</android.support.design.widget.TextInputLayout>

Example to customize the accent color of the TextInputLayout.
The accent color affects the color of the baseline of the EditText and the text color of the floating hint text:

styles.xml:

<style name="TextInputLayoutWithPrimaryColor" parent="Widget.Design.TextInputLayout">
    <item name="colorAccent">@color/primary</item>
</style>

layout file:

<android.support.design.widget.TextInputLayout
    android:id="@+id/textInputLayout_password"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:theme="@style/TextInputLayoutWithPrimaryColor">

    <android.support.design.widget.TextInputEditText
        android:id="@+id/textInputEditText_password"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:hint="@string/login_hint_password"
        android:inputType="textPassword" />

</android.support.design.widget.TextInputLayout>

Section 10.6: TextInputEditText

The TextInputEditText is an EditText with an extra fix to display a hint in the IME when in 'extract' mode.

Extract mode is the mode that the keyboard editor switches to when you tap an EditText and the available space is too small (for example, landscape orientation on a smartphone). In this case, while editing the text in a plain EditText, you can see that the IME doesn't give you a hint of what you're editing. The TextInputEditText fixes this issue by providing hint text while the user's device IME is in extract mode.

Example:

<android.support.design.widget.TextInputLayout
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:hint="Description">

    <android.support.design.widget.TextInputEditText
        android:id="@+id/description"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"/>

</android.support.design.widget.TextInputLayout>

Chapter 11: CoordinatorLayout and Behaviors

The CoordinatorLayout is a super-powered FrameLayout, and the goal of this ViewGroup is to coordinate the views that are inside it. The main appeal of the CoordinatorLayout is its ability to coordinate the animations and transitions of the views within the XML file itself.

CoordinatorLayout is intended for two primary use cases:

As a top-level application decor or chrome layout
As a container for a specific interaction with one or more child views

Section 11.1: Creating a simple Behavior

To create a Behavior, just extend the CoordinatorLayout.Behavior class:

public class MyBehavior<V extends View> extends CoordinatorLayout.Behavior<V> {

    /**
     * Default constructor.
     */
    public MyBehavior() {
    }

    /**
     * Default constructor for inflating a MyBehavior from layout.
     *
     * @param context The {@link Context}.
     * @param attrs The {@link AttributeSet}.
     */
    public MyBehavior(Context context, AttributeSet attrs) {
        super(context, attrs);
    }
}

This behavior needs to be attached to a child View of a CoordinatorLayout to be called.

Attach a Behavior programmatically:

MyBehavior myBehavior = new MyBehavior();
CoordinatorLayout.LayoutParams params = (CoordinatorLayout.LayoutParams) view.getLayoutParams();
params.setBehavior(myBehavior);

Attach a Behavior in XML:

You can use the layout_behavior attribute to attach the behavior in XML:

<View
    android:layout_height="...."
    android:layout_width="...."
    app:layout_behavior=".MyBehavior" />

Attach a Behavior automatically:

If you are working with a custom view you can attach the behavior using the @CoordinatorLayout.DefaultBehavior annotation:

@CoordinatorLayout.DefaultBehavior(MyBehavior.class)
public class MyView extends ..... {
}
Section 11.2: Using the SwipeDismissBehavior

The SwipeDismissBehavior works on any View and implements the functionality of swipe to dismiss in our layouts with a CoordinatorLayout.

Just use:

final SwipeDismissBehavior<MyView> swipe = new SwipeDismissBehavior();

// Sets the swipe direction for this behavior.
swipe.setSwipeDirection(SwipeDismissBehavior.SWIPE_DIRECTION_ANY);

// Set the listener to be used when a dismiss event occurs.
swipe.setListener(new SwipeDismissBehavior.OnDismissListener() {
    @Override
    public void onDismiss(View view) {
        //......
    }

    @Override
    public void onDragStateChanged(int state) {
        //......
    }
});

// Attach the SwipeDismissBehavior to a view.
LayoutParams coordinatorParams = (LayoutParams) mView.getLayoutParams();
coordinatorParams.setBehavior(swipe);

Section 11.3: Create dependencies between Views

You can use the CoordinatorLayout.Behavior to create dependencies between views. You can anchor a View to another View by:

using the layout_anchor attribute, or
creating a custom Behavior and implementing the layoutDependsOn method returning true.

For example, in order to create a Behavior for moving an ImageView when another view is moved (for example, a Toolbar), perform the following steps:

Create the custom Behavior:

public class MyBehavior extends CoordinatorLayout.Behavior<ImageView> {...}

Override the layoutDependsOn method, returning true. This method is called every time a change occurs to the layout:

@Override
public boolean layoutDependsOn(CoordinatorLayout parent, ImageView child, View dependency) {
    // Returns true to add a dependency.
    return dependency instanceof Toolbar;
}

Whenever the method layoutDependsOn returns true, the method onDependentViewChanged is called:

@Override
public boolean onDependentViewChanged(CoordinatorLayout parent, ImageView child, View dependency) {
    // Implement here animations, translations, or movements; always related to the provided dependency.
    float translationY = Math.min(0, dependency.getTranslationY() - dependency.getHeight());
    child.setTranslationY(translationY);
    // Return true if the Behavior changed the child view's size or position, false otherwise.
    return true;
}

Chapter 12: TabLayout

Section 12.1: Using a TabLayout without a ViewPager

Most of the time a TabLayout is used together with a ViewPager, in order to get the swipe functionality that comes with it.

It is possible to use a TabLayout without a ViewPager by using a TabLayout.OnTabSelectedListener.

First, add a TabLayout to your activity's XML file:

<android.support.design.widget.TabLayout
    android:layout_height="wrap_content"
    android:layout_width="match_parent"
    android:id="@+id/tabLayout" />

For navigation within an Activity, manually populate the UI based on the tab selected:

TabLayout tabLayout = (TabLayout) findViewById(R.id.tabLayout);
tabLayout.addOnTabSelectedListener(new TabLayout.OnTabSelectedListener() {
    @Override
    public void onTabSelected(TabLayout.Tab tab) {
        int position = tab.getPosition();
        switch (position) {
            case 1:
                getSupportFragmentManager().beginTransaction()
                        .replace(R.id.fragment_container, new ChildFragment()).commit();
                break;
            // Continue for each tab in the TabLayout.
        }
    }

    @Override
    public void onTabUnselected(TabLayout.Tab tab) {
    }

    @Override
    public void onTabReselected(TabLayout.Tab tab) {
    }
});
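Note that without a ViewPager the tabs are not created automatically; setupWithViewPager() is what normally does that. A minimal sketch, not part of the original example (the tab titles are placeholders), of adding tabs yourself before registering the listener:

// Tabs must be added manually when no ViewPager is connected.
tabLayout.addTab(tabLayout.newTab().setText("First"));
tabLayout.addTab(tabLayout.newTab().setText("Second"));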
Chapter 13: ViewPager

ViewPager is a layout manager that allows the user to flip left and right through pages of data. It is most often used in conjunction with Fragment, which is a convenient way to supply and manage the lifecycle of each page.

Section 13.1: ViewPager with a dots indicator

All we need are: a ViewPager, a TabLayout and 2 drawables for the selected and default dots.

First, we have to add the TabLayout to our screen layout and connect it with the ViewPager. We can do this in two ways:

Nested TabLayout in ViewPager:

<android.support.v4.view.ViewPager
    android:id="@+id/photos_viewpager"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <android.support.design.widget.TabLayout
        android:layout_width="match_parent"
        android:layout_height="wrap_content"/>

</android.support.v4.view.ViewPager>

In this case the TabLayout will be automatically connected with the ViewPager, but the TabLayout will be next to the ViewPager, not on top of it.

Separate TabLayout:

<android.support.v4.view.ViewPager
    android:id="@+id/photos_viewpager"
    android:layout_width="match_parent"
    android:layout_height="match_parent"/>

<android.support.design.widget.TabLayout
    android:id="@+id/tab_layout"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"/>

In this case, we can put the TabLayout anywhere, but we have to connect the TabLayout with the ViewPager programmatically:

ViewPager pager = (ViewPager) view.findViewById(R.id.photos_viewpager);
PagerAdapter adapter = new PhotosAdapter(getChildFragmentManager(), photosUrl);
pager.setAdapter(adapter);

TabLayout tabLayout = (TabLayout) view.findViewById(R.id.tab_layout);
tabLayout.setupWithViewPager(pager, true);

Once we have created our layout, we have to prepare our dots. So we create three files: selected_dot.xml, default_dot.xml and tab_selector.xml.

selected_dot.xml:

<?xml version="1.0" encoding="utf-8"?>
<layer-list xmlns:android="http://schemas.android.com/apk/res/android">
    <item>
        <shape
            android:innerRadius="0dp"
            android:shape="ring"
            android:thickness="8dp"
            android:useLevel="false">
            <solid android:color="@color/colorAccent"/>
        </shape>
    </item>
</layer-list>

default_dot.xml:

<?xml version="1.0" encoding="utf-8"?>
<layer-list xmlns:android="http://schemas.android.com/apk/res/android">
    <item>
        <shape
            android:innerRadius="0dp"
            android:shape="ring"
            android:thickness="8dp"
            android:useLevel="false">
            <solid android:color="@android:color/darker_gray"/>
        </shape>
    </item>
</layer-list>

tab_selector.xml:

<?xml version="1.0" encoding="utf-8"?>
<selector xmlns:android="http://schemas.android.com/apk/res/android">
    <item android:drawable="@drawable/selected_dot"
        android:state_selected="true"/>
    <item android:drawable="@drawable/default_dot"/>
</selector>

Now we only need to add three lines of code to the TabLayout in our XML layout and we're done:
app:tabBackground="@drawable/tab_selector"
app:tabGravity="center"
app:tabIndicatorHeight="0dp"

Section 13.2: Basic ViewPager usage with fragments

A ViewPager allows showing multiple fragments in an activity that can be navigated by flipping left or right. A ViewPager needs to be fed either Views or Fragments by using a PagerAdapter.

There are, however, two more specific implementations that you will find most useful when using Fragments: FragmentPagerAdapter and FragmentStatePagerAdapter. When a Fragment needs to be instantiated for the first time, getItem(position) will be called for each position that needs instantiating. The getCount() method will return the total number of pages so the ViewPager knows how many Fragments need to be shown.

Both FragmentPagerAdapter and FragmentStatePagerAdapter keep a cache of the Fragments that the ViewPager will need to show. By default the ViewPager will try to store a maximum of 3 Fragments: the currently visible Fragment and the ones next to it on the right and left. Additionally, FragmentStatePagerAdapter will keep the state of each of your fragments.

Be aware that both implementations assume your fragments will keep their positions, so if you keep a list of the fragments instead of having a static number of them, as you can see in the getItem() method, you will need to create a subclass of PagerAdapter and override at least the instantiateItem(), destroyItem() and getItemPosition() methods.

Just add a ViewPager in your layout as described in the basic example:

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout>
    <android.support.v4.view.ViewPager
        android:id="@+id/vpPager">
    </android.support.v4.view.ViewPager>
</LinearLayout>

Then define the adapter that will determine how many pages exist and which fragment to display for each page of the adapter:

public class MyViewPagerActivity extends AppCompatActivity {
    private static final String TAG = MyViewPagerActivity.class.getName();

    private MyPagerAdapter mFragmentAdapter;
    private ViewPager mViewPager;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.myActivityLayout);

        // Apply the adapter
        mFragmentAdapter = new MyPagerAdapter(getSupportFragmentManager());
        mViewPager = (ViewPager) findViewById(R.id.vpPager);
        mViewPager.setAdapter(mFragmentAdapter);
    }

    private class MyPagerAdapter extends FragmentPagerAdapter {

        public MyPagerAdapter(FragmentManager supportFragmentManager) {
            super(supportFragmentManager);
        }

        // Returns the fragment to display for that page
        @Override
        public Fragment getItem(int position) {
            switch (position) {
                case 0:
                    return new Fragment1();
                case 1:
                    return new Fragment2();
                case 2:
                    return new Fragment3();
                default:
                    return null;
            }
        }

        // Returns total number of pages
        @Override
        public int getCount() {
            return 3;
        }
    }
}

If you are using android.app.Fragment you have to add this dependency:

compile 'com.android.support:support-v13:25.3.1'

If you are using android.support.v4.app.Fragment you have to add this dependency:

compile 'com.android.support:support-fragment:25.3.1'

Section 13.3: ViewPager with PreferenceFragment

Until recently, using android.support.v4.app.FragmentPagerAdapter would prevent the usage of a PreferenceFragment as one of the Fragments used in the FragmentPagerAdapter.

The latest versions of the support v7 library now include the PreferenceFragmentCompat class, which will work with a ViewPager and the v4 version of FragmentPagerAdapter.
Example Fragment that extends PreferenceFragmentCompat:

import android.os.Bundle;
import android.support.v7.preference.PreferenceFragmentCompat;
import android.view.View;

public class MySettingsPrefFragment extends PreferenceFragmentCompat {

    public MySettingsPrefFragment() {
        // Required empty public constructor
    }

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        addPreferencesFromResource(R.xml.fragment_settings_pref);
    }

    @Override
    public void onCreatePreferences(Bundle bundle, String s) {
    }
}

You can now use this Fragment in an android.support.v4.app.FragmentPagerAdapter subclass:

private class PagerAdapterWithSettings extends FragmentPagerAdapter {

    public PagerAdapterWithSettings(FragmentManager supportFragmentManager) {
        super(supportFragmentManager);
    }

    @Override
    public Fragment getItem(int position) {
        switch (position) {
            case 0:
                return new FragmentOne();
            case 1:
                return new FragmentTwo();
            case 2:
                return new MySettingsPrefFragment();
            default:
                return null;
        }
    }

    // .......
}

Section 13.4: Adding a ViewPager

Make sure the following dependency is added to your app's build.gradle file under dependencies:

compile 'com.android.support:support-core-ui:25.3.0'

Then add the ViewPager to your activity layout:

<android.support.v4.view.ViewPager
    android:id="@+id/viewpager"
    android:layout_width="match_parent"
    android:layout_height="match_parent" />

Then define your PagerAdapter:

public class MyPagerAdapter extends PagerAdapter {

    private Context mContext;

    public MyPagerAdapter(Context context) {
        mContext = context;
    }

    @Override
    public Object instantiateItem(ViewGroup collection, int position) {
        // Create the page for the given position. For example:
        LayoutInflater inflater = LayoutInflater.from(mContext);
        ViewGroup layout = (ViewGroup) inflater.inflate(R.layout.xxxx, collection, false);
        collection.addView(layout);
        return layout;
    }

    @Override
    public void destroyItem(ViewGroup collection, int position, Object view) {
        // Remove a page for the given position. For example:
        collection.removeView((View) view);
    }

    @Override
    public int getCount() {
        // Return the number of views available.
        return numberOfPages;
    }

    @Override
    public boolean isViewFromObject(View view, Object object) {
        // Determines whether a page View is associated with a specific key object
        // as returned by instantiateItem(ViewGroup, int). For example:
        return view == object;
    }
}

Finally set up the ViewPager in your Activity:

public class MainActivity extends AppCompatActivity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        ViewPager viewPager = (ViewPager) findViewById(R.id.viewpager);
        viewPager.setAdapter(new MyPagerAdapter(this));
    }
}

Section 13.5: Setup OnPageChangeListener

If you need to listen for changes to the selected page you can implement the ViewPager.OnPageChangeListener listener on the ViewPager:

viewPager.addOnPageChangeListener(new ViewPager.OnPageChangeListener() {

    // This method will be invoked when a new page becomes selected.
    // Animation is not necessarily complete.
    @Override
    public void onPageSelected(int position) {
        // Your code
    }

    // This method will be invoked when the current page is scrolled, either as part of
    // a programmatically initiated smooth scroll or a user initiated touch scroll.
    @Override
    public void onPageScrolled(int position, float positionOffset, int positionOffsetPixels) {
        // Your code
    }

    // Called when the scroll state changes. Useful for discovering when the user begins
    // dragging, when the pager is automatically settling to the current page,
    // or when it is fully stopped/idle.
    @Override
    public void onPageScrollStateChanged(int state) {
        // Your code
    }
});
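To see the listener in action you can also change the page programmatically. This is not part of the original example, but setCurrentItem() is the standard ViewPager call for it, and it triggers the callbacks above:

// Smooth-scroll to the third page; onPageSelected(2) will be invoked.
viewPager.setCurrentItem(2, true);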
Section 13.6: ViewPager with TabLayout

A TabLayout can be used for easier navigation.

You can set the tabs for each fragment in your adapter by using the TabLayout.newTab() method, but there is another more convenient and easier method for this task, which is TabLayout.setupWithViewPager(). This method will sync by creating and removing tabs according to the contents of the adapter associated with your ViewPager each time you call it. Also, it will set a callback so each time the user flips the page, the corresponding tab will be selected.

Just define a layout:

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout>

    <android.support.design.widget.TabLayout
        android:id="@+id/tabs"
        app:tabMode="scrollable" />

    <android.support.v4.view.ViewPager
        android:id="@+id/viewpager"
        android:layout_width="match_parent"
        android:layout_height="0px"
        android:layout_weight="1" />

</LinearLayout>

Then implement the FragmentPagerAdapter and apply it to the ViewPager:

public class MyViewPagerActivity extends AppCompatActivity {
    private static final String TAG = MyViewPagerActivity.class.getName();

    private MyPagerAdapter mFragmentAdapter;
    private ViewPager mViewPager;
    private TabLayout mTabLayout;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.myActivityLayout);

        // Get the ViewPager and apply the PagerAdapter
        mFragmentAdapter = new MyPagerAdapter(getSupportFragmentManager());
        mViewPager = (ViewPager) findViewById(R.id.view_pager);
        mViewPager.setAdapter(mFragmentAdapter);

        // Link the TabLayout and the ViewPager together
        mTabLayout = (TabLayout) findViewById(R.id.tab_layout);
        mTabLayout.setupWithViewPager(mViewPager);
    }

    private class MyPagerAdapter extends FragmentPagerAdapter {

        public MyPagerAdapter(FragmentManager supportFragmentManager) {
            super(supportFragmentManager);
        }

        // Returns the fragment to display for that page
        @Override
        public Fragment getItem(int position) {
            switch (position) {
                case 0:
                    return new Fragment1();
                case 1:
                    return new Fragment2();
                case 2:
                    return new Fragment3();
                default:
                    return null;
            }
        }

        // Will be displayed as the tab's label
        @Override
        public CharSequence getPageTitle(int position) {
            switch (position) {
                case 0:
                    return "Fragment 1 title";
                case 1:
                    return "Fragment 2 title";
                case 2:
                    return "Fragment 3 title";
                default:
                    return null;
            }
        }

        // Returns total number of pages
        @Override
        public int getCount() {
            return 3;
        }
    }
}

Chapter 14: CardView

Parameter: Details
cardBackgroundColor: Background color for the CardView.
cardCornerRadius: Corner radius for the CardView.
cardElevation: Elevation for the CardView.
cardMaxElevation: Maximum elevation for the CardView.
cardPreventCornerOverlap: Add padding to the CardView on v20 and before to prevent intersections between the card content and rounded corners.
cardUseCompatPadding: Add padding in API v21+ as well to have the same measurements as previous versions. May be a boolean value, such as "true" or "false".
contentPadding: Inner padding between the edges of the card and children of the CardView.
contentPaddingBottom: Inner padding between the bottom edge of the card and children of the CardView.
contentPaddingLeft: Inner padding between the left edge of the card and children of the CardView.
contentPaddingRight: Inner padding between the right edge of the card and children of the CardView.
contentPaddingTop: Inner padding between the top edge of the card and children of the CardView.

A CardView is a FrameLayout with a rounded corner background and shadow.

CardView uses the elevation property on Lollipop for shadows and falls back to a custom emulated shadow implementation on older platforms. Due to the expensive nature of rounded corner clipping, on platforms before Lollipop CardView does not clip its children that intersect with rounded corners. Instead, it adds padding to avoid such intersection (see setPreventCornerOverlap(boolean) to change this behavior).

Section 14.1: Getting Started with CardView

CardView is a member of the Android Support Library, and provides a layout for cards.

To add CardView to your project, add the following line to your build.gradle dependencies:

compile 'com.android.support:cardview-v7:25.1.1'

The number of the latest version may be found here.

In your layout you can then add the following to get a card:

<android.support.v7.widget.CardView
    xmlns:card_view="http://schemas.android.com/apk/res-auto"
    android:layout_width="match_parent"
    android:layout_height="wrap_content">

    <!-- one child layout containing other layouts or views -->

</android.support.v7.widget.CardView>

You can then add other layouts inside this and they will be encompassed in a card.

Also, CardView can be populated with any UI element and manipulated from code:

<?xml version="1.0" encoding="utf-8"?>
<android.support.v7.widget.CardView
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:card_view="http://schemas.android.com/apk/res-auto"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:id="@+id/card_view"
    android:layout_margin="5dp"
    card_view:cardBackgroundColor="#81C784"
    card_view:cardCornerRadius="12dp"
    card_view:cardElevation="3dp"
    card_view:contentPadding="4dp" >

    <RelativeLayout
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:padding="16dp" >

        <ImageView
            android:layout_width="100dp"
            android:layout_height="100dp"
            android:id="@+id/item_image"
            android:layout_alignParentLeft="true"
            android:layout_alignParentTop="true"
            android:layout_marginRight="16dp" />

        <TextView
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:id="@+id/item_title"
            android:layout_toRightOf="@+id/item_image"
            android:layout_alignParentTop="true"
            android:textSize="30sp" />

        <TextView
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:id="@+id/item_detail"
            android:layout_toRightOf="@+id/item_image"
            android:layout_below="@+id/item_title" />

    </RelativeLayout>
</android.support.v7.widget.CardView>

Section 14.2: Adding Ripple animation

To enable the ripple animation in a CardView, add the following attributes:

<android.support.v7.widget.CardView
    ...
    android:clickable="true"
    android:foreground="?android:attr/selectableItemBackground">
    ...
</android.support.v7.widget.CardView>
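The same effect can also be applied from code. This is a sketch, not from the original example; it assumes it runs inside an Activity and that cardView has already been looked up. It resolves the theme's selectableItemBackground attribute (via android.util.TypedValue) and sets it as the card's foreground; CardView extends FrameLayout, which supports setForeground() on all API levels:

TypedValue outValue = new TypedValue();
getTheme().resolveAttribute(android.R.attr.selectableItemBackground, outValue, true);
cardView.setClickable(true);
cardView.setForeground(ContextCompat.getDrawable(this, outValue.resourceId));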
Section 14.3: Customizing the CardView

CardView provides a default elevation and corner radius so that cards have a consistent appearance across platforms. You can customize these default values using the following attributes in the XML file:

1. card_view:cardElevation adds elevation to the CardView.
2. card_view:cardBackgroundColor customizes the background color of the CardView (you can give any color).
3. card_view:cardCornerRadius curves the four corners of the CardView.
4. card_view:contentPadding adds padding between the card and the children of the card.

Note: card_view is a namespace defined in the topmost parent layout view: xmlns:card_view="http://schemas.android.com/apk/res-auto"

Here is an example:

<android.support.v7.widget.CardView
    xmlns:card_view="http://schemas.android.com/apk/res-auto"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    card_view:cardElevation="4dp"
    card_view:cardBackgroundColor="@android:color/white"
    card_view:cardCornerRadius="8dp"
    card_view:contentPadding="16dp">

    <!-- one child layout containing other layouts or views -->

</android.support.v7.widget.CardView>

You can also do it programmatically using:

card.setCardBackgroundColor(....);
card.setCardElevation(...);
card.setRadius(....);
card.setContentPadding(left, top, right, bottom);

Check the official javadoc for additional properties.

Section 14.4: Using Images as Background in CardView (Pre-Lollipop device issues)

While using an image or color as the background in a CardView, you might end up with slight white padding (if the default card color is white) on the edges. This occurs due to the default rounded corners in the CardView. Here is how to avoid those margins on pre-Lollipop devices.

We need to use the attribute card_view:cardPreventCornerOverlap="false" in the CardView.

1. In XML use the following snippet:

<android.support.v7.widget.CardView
    xmlns:card_view="http://schemas.android.com/apk/res-auto"
    android:layout_width="match_parent"
    card_view:cardPreventCornerOverlap="false"
    android:layout_height="wrap_content">

    <ImageView
        android:id="@+id/row_wallet_redeem_img"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:adjustViewBounds="true"
        android:scaleType="centerCrop"
        android:src="@drawable/bg_image" />

</android.support.v7.widget.CardView>

2. In Java, like this: cardView.setPreventCornerOverlap(false).

Doing so removes the unwanted padding on the card's edges. Here are some visual examples related to this implementation:
1. Card with image background in API 21 (perfectly fine)
2. Card with image background in API 19 without the attribute (notice the padding around the image)
3. FIXED: Card with image background in API 19 with cardView.setPreventCornerOverlap(false) (issue now fixed)

Also read about this in the documentation here. Original SOF post here.

Section 14.5: Animate CardView background color with TransitionDrawable

public void setCardColorTran(CardView card) {
    ColorDrawable[] color = {new ColorDrawable(Color.BLUE), new ColorDrawable(Color.RED)};
    TransitionDrawable trans = new TransitionDrawable(color);
    if (Build.VERSION.SDK_INT > Build.VERSION_CODES.ICE_CREAM_SANDWICH_MR1) {
        card.setBackground(trans);
    } else {
        card.setBackgroundDrawable(trans);
    }
    trans.startTransition(5000);
}

Chapter 15: NavigationView

Section 15.1: How to add the NavigationView

To use a NavigationView just add the dependency in the build.gradle file as described in the remarks section.

Then add the NavigationView in the layout:

<?xml version="1.0" encoding="utf-8"?>
<android.support.v4.widget.DrawerLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:id="@+id/drawer_layout"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:fitsSystemWindows="true"
    tools:openDrawer="start">

    <include
        layout="@layout/app_bar_main"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />

    <android.support.design.widget.NavigationView
        android:id="@+id/nav_view"
        android:layout_width="wrap_content"
        android:layout_height="match_parent"
        android:layout_gravity="start"
        app:headerLayout="@layout/nav_header_main"
        app:menu="@menu/activity_main_drawer" />

</android.support.v4.widget.DrawerLayout>

res/layout/nav_header_main.xml: the view which will be displayed at the top of the drawer:

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="@dimen/nav_header_height"
    android:background="@drawable/side_nav_bar"
    android:paddingBottom="@dimen/activity_vertical_margin"
    android:paddingLeft="@dimen/activity_horizontal_margin"
    android:paddingRight="@dimen/activity_horizontal_margin"
    android:paddingTop="@dimen/activity_vertical_margin"
    android:theme="@style/ThemeOverlay.AppCompat.Dark"
    android:orientation="vertical"
    android:gravity="bottom">

    <ImageView
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:paddingTop="@dimen/nav_header_vertical_spacing"
        android:src="@android:drawable/sym_def_app_icon"
        android:id="@+id/imageView" />

    <TextView
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:paddingTop="@dimen/nav_header_vertical_spacing"
        android:text="Android Studio"
        android:textAppearance="@style/TextAppearance.AppCompat.Body1" />

    <TextView
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="<EMAIL>"
        android:id="@+id/textView" />

</LinearLayout>

res/layout/app_bar_main.xml: an abstraction layer for the toolbar to separate it from the content:

<?xml version="1.0" encoding="utf-8"?>
<android.support.design.widget.CoordinatorLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:fitsSystemWindows="true"
    tools:context="eu.rekisoft.playground.MainActivity">

    <android.support.design.widget.AppBarLayout
        android:layout_height="wrap_content"
        android:layout_width="match_parent"
        android:theme="@style/AppTheme.AppBarOverlay">

        <android.support.v7.widget.Toolbar
            android:id="@+id/toolbar"
            android:layout_width="match_parent"
            android:layout_height="?attr/actionBarSize"
            android:background="?attr/colorPrimary"
            app:popupTheme="@style/AppTheme.PopupOverlay" />

    </android.support.design.widget.AppBarLayout>

    <include layout="@layout/content_main"/>

    <android.support.design.widget.FloatingActionButton
        android:id="@+id/fab"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_gravity="bottom|end"
        android:layout_margin="@dimen/fab_margin"
        android:src="@android:drawable/ic_dialog_email" />

</android.support.design.widget.CoordinatorLayout>

res/layout/content_main.xml: the real content of the activity, just for demo purposes; here you would put your normal layout XML:

<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:paddingLeft="@dimen/activity_horizontal_margin"
    android:paddingRight="@dimen/activity_horizontal_margin"
    android:paddingTop="@dimen/activity_vertical_margin"
    android:paddingBottom="@dimen/activity_vertical_margin"
    app:layout_behavior="@string/appbar_scrolling_view_behavior"
    tools:showIn="@layout/app_bar_main"
    tools:context="eu.rekisoft.playground.MainActivity">

    <TextView
        android:text="Hello World!"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content" />

</RelativeLayout>

Define your menu file as res/menu/activity_main_drawer.xml:

<?xml version="1.0" encoding="utf-8"?>
<menu xmlns:android="http://schemas.android.com/apk/res/android">

    <group android:checkableBehavior="single">
        <item
            android:id="@+id/nav_camera"
            android:icon="@drawable/ic_menu_camera"
            android:title="Import" />
        <item
            android:id="@+id/nav_gallery"
            android:icon="@drawable/ic_menu_gallery"
            android:title="Gallery" />
        <item
            android:id="@+id/nav_slideshow"
            android:icon="@drawable/ic_menu_slideshow"
            android:title="Slideshow" />
        <item
            android:id="@+id/nav_manage"
            android:icon="@drawable/ic_menu_manage"
            android:title="Tools" />
    </group>

    <item android:title="Communicate">
        <menu>
            <item
                android:id="@+id/nav_share"
                android:icon="@drawable/ic_menu_share"
                android:title="Share" />
            <item
                android:id="@+id/nav_send"
                android:icon="@drawable/ic_menu_send"
                android:title="Send" />
        </menu>
    </item>

</menu>

And finally the java/main/eu/rekisoft/playground/MainActivity.java:

public class MainActivity extends AppCompatActivity
        implements NavigationView.OnNavigationItemSelectedListener {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        Toolbar toolbar = (Toolbar) findViewById(R.id.toolbar);
        setSupportActionBar(toolbar);

        FloatingActionButton fab = (FloatingActionButton) findViewById(R.id.fab);
        fab.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View view) {
                Snackbar.make(view, "Replace with your own action", Snackbar.LENGTH_LONG)
                        .setAction("Action", null).show();
            }
        });

        DrawerLayout drawer = (DrawerLayout) findViewById(R.id.drawer_layout);
        ActionBarDrawerToggle toggle = new ActionBarDrawerToggle(
                this, drawer, toolbar, R.string.navigation_drawer_open, R.string.navigation_drawer_close);
        drawer.setDrawerListener(toggle);
        toggle.syncState();

        NavigationView navigationView = (NavigationView) findViewById(R.id.nav_view);
        navigationView.setNavigationItemSelectedListener(this);
    }

    @Override
    public void onBackPressed() {
        DrawerLayout drawer = (DrawerLayout) findViewById(R.id.drawer_layout);
        if (drawer.isDrawerOpen(GravityCompat.START)) {
            drawer.closeDrawer(GravityCompat.START);
        } else {
            super.onBackPressed();
        }
    }

    @Override
    public boolean onCreateOptionsMenu(Menu menu) {
        // Inflate the menu; this adds items to the action bar if it is present.
        getMenuInflater().inflate(R.menu.main, menu);
        return true;
    }

    @Override
    public boolean onOptionsItemSelected(MenuItem item) {
        // Handle action bar item clicks here. The action bar will
        // automatically handle clicks on the Home/Up button, so long
        // as you specify a parent activity in AndroidManifest.xml.
        int id = item.getItemId();

        //noinspection SimplifiableIfStatement
        if (id == R.id.action_settings) {
            return true;
        }

        return super.onOptionsItemSelected(item);
    }

    @SuppressWarnings("StatementWithEmptyBody")
    @Override
    public boolean onNavigationItemSelected(MenuItem item) {
        // Handle navigation view item clicks here.
        switch(item.getItemId()) {/*...*/}

        DrawerLayout drawer = (DrawerLayout) findViewById(R.id.drawer_layout);
        drawer.closeDrawer(GravityCompat.START);
        return true;
    }
}
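The body of the switch in onNavigationItemSelected is elided above. As a minimal sketch of what it might contain, using the ids from activity_main_drawer.xml (the actions themselves are placeholders):

switch (item.getItemId()) {
    case R.id.nav_camera:
        // e.g. start the import flow
        break;
    case R.id.nav_gallery:
        // e.g. swap in a gallery fragment
        break;
    case R.id.nav_share:
        // e.g. fire a share intent
        break;
    // handle the remaining ids the same way
}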
It will look like this:

Section 15.2: Add underline in menu elements

Each group ends with a line separator. If each item in your menu has its own group, you will achieve the desired graphical output. It will work only if your different groups have different android:id values. Also, in menu.xml remember to specify android:checkable="true" for a single item and android:checkableBehavior="single" for a group of items:

<?xml version="1.0" encoding="utf-8"?>
<menu xmlns:android="http://schemas.android.com/apk/res/android">

    <item
        android:id="@+id/pos_item_help"
        android:checkable="true"
        android:title="Help" />

    <item
        android:id="@+id/pos_item_pos"
        android:checkable="true"
        android:title="POS" />

    <item
        android:id="@+id/pos_item_orders"
        android:checkable="true"
        android:title="Orders" />

    <group
        android:id="@+id/group"
        android:checkableBehavior="single">

        <item
            android:id="@+id/menu_nav_home"
            android:icon="@drawable/ic_home_black_24dp"
            android:title="@string/menu_nav_home" />
    </group>

    ......

</menu>

Section 15.3: Add separators to menu

Access the RecyclerView inside the NavigationView and add an ItemDecoration to it:

NavigationView navigationView = (NavigationView) findViewById(R.id.nav_view);
NavigationMenuView navMenuView = (NavigationMenuView) navigationView.getChildAt(0);
navMenuView.addItemDecoration(new DividerItemDecoration(this));

Code for DividerItemDecoration:

public class DividerItemDecoration extends RecyclerView.ItemDecoration {

    private static final int[] ATTRS = new int[]{android.R.attr.listDivider};
    private Drawable mDivider;

    public DividerItemDecoration(Context context) {
        final TypedArray styledAttributes = context.obtainStyledAttributes(ATTRS);
        mDivider = styledAttributes.getDrawable(0);
        styledAttributes.recycle();
    }

    @Override
    public void onDraw(Canvas c, RecyclerView parent, RecyclerView.State state) {
        int left = parent.getPaddingLeft();
        int right = parent.getWidth() - parent.getPaddingRight();

        int childCount = parent.getChildCount();
        for (int i = 1; i < childCount; i++) {
            View child = parent.getChildAt(i);

            RecyclerView.LayoutParams params = (RecyclerView.LayoutParams) child.getLayoutParams();

            int top = child.getBottom() + params.bottomMargin;
            int bottom = top + mDivider.getIntrinsicHeight();

            mDivider.setBounds(left, top, right, bottom);
            mDivider.draw(c);
        }
    }
}

Preview:

Section 15.4: Add menu Divider using default DividerItemDecoration

Just use the default DividerItemDecoration class:

NavigationView navigationView = (NavigationView) findViewById(R.id.navigation);
NavigationMenuView navMenuView = (NavigationMenuView) navigationView.getChildAt(0);
navMenuView.addItemDecoration(new DividerItemDecoration(context, DividerItemDecoration.VERTICAL));

Preview:

Chapter 16: RecyclerView

Parameter: Details
Adapter: A subclass of RecyclerView.Adapter responsible for providing views that represent items in a data set.
Position: The position of a data item within an Adapter.
Index: The index of an attached child view as used in a call to getChildAt(int). Contrast with Position.
Binding: The process of preparing a child view to display data corresponding to a position within the adapter.
Recycle (view): A view previously used to display data for a specific adapter position may be placed in a cache for later reuse to display the same type of data again later. This can drastically improve performance by skipping initial layout inflation or construction.
Scrap (view): A child view that has entered into a temporarily detached state during layout. Scrap views may be reused without becoming fully detached from the parent RecyclerView, either unmodified if no rebinding is required, or modified by the adapter if the view was considered dirty.
Dirty (view): A child view that must be rebound by the adapter before being displayed.

RecyclerView is a more advanced version of ListView with improved performance and additional features.
Section 16.1: Adding a RecyclerView

Add the dependency as described in the Remarks section, then add a RecyclerView to your layout:

<android.support.v7.widget.RecyclerView
    android:id="@+id/my_recycler_view"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"/>

Once you have added a RecyclerView widget to your layout, obtain a handle to the object, connect it to a layout manager and attach an adapter for the data to be displayed:

mRecyclerView = (RecyclerView) findViewById(R.id.my_recycler_view);

// set a layout manager (LinearLayoutManager in this example)
mLayoutManager = new LinearLayoutManager(getApplicationContext());
mRecyclerView.setLayoutManager(mLayoutManager);

// specify an adapter
mAdapter = new MyAdapter(myDataset);
mRecyclerView.setAdapter(mAdapter);

Or simply set up the layout manager from XML by adding these lines:

xmlns:app="http://schemas.android.com/apk/res-auto"
app:layoutManager="android.support.v7.widget.LinearLayoutManager"

If you know that changes in the content of the RecyclerView won't change the layout size of the RecyclerView, use the following code to improve the performance of the component. If the RecyclerView has a fixed size, it knows that the RecyclerView itself will not resize due to its children, so it doesn't call requestLayout() at all; it just handles the change itself, instead of invalidating its parent (the coordinator, another layout, or whatever it is). You can use this method even before setting the LayoutManager and Adapter:

mRecyclerView.setHasFixedSize(true);

RecyclerView provides these built-in layout managers, so you can create a list, a grid and a staggered grid using RecyclerView:

1. LinearLayoutManager shows items in a vertical or horizontal scrolling list.
2. GridLayoutManager shows items in a grid.
3. StaggeredGridLayoutManager shows items in a staggered grid.

Section 16.2: Smoother loading of items

If the items in your RecyclerView load data from the network (commonly images) or carry out other processing, that can take a significant amount of time and you may end up with items on-screen but not fully loaded. To avoid this you can extend the existing LinearLayoutManager to preload a number of items before they become visible on-screen:

package com.example;

import android.content.Context;
import android.support.v7.widget.LinearLayoutManager;
import android.support.v7.widget.OrientationHelper;
import android.support.v7.widget.RecyclerView;

/**
 * A LinearLayoutManager that preloads items off-screen.
 * <p>
 * Preloading is useful in situations where items might take some time to load
 * fully, commonly because they have maps, images or other items that require
 * network requests to complete before they can be displayed.
 * <p>
 * By default, this layout will load a single additional page's worth of items,
 * a page being a pixel measure equivalent to the on-screen size of the
 * recycler view. This can be altered using the relevant constructor, or
 * through the {@link #setPages(int)} method.
 */
public class PreLoadingLinearLayoutManager extends LinearLayoutManager {
    private int mPages = 1;
    private OrientationHelper mOrientationHelper;

    public PreLoadingLinearLayoutManager(final Context context) {
        super(context);
    }

    public PreLoadingLinearLayoutManager(final Context context, final int pages) {
        super(context);
        this.mPages = pages;
    }

    public PreLoadingLinearLayoutManager(final Context context, final int orientation, final boolean reverseLayout) {
        super(context, orientation, reverseLayout);
    }

    @Override
    public void setOrientation(final int orientation) {
        super.setOrientation(orientation);
        mOrientationHelper = null;
    }

    /**
     * Set the number of pages of layout that will be preloaded off-screen,
     * a page being a pixel measure equivalent to the on-screen size of the
     * recycler view.
     * @param pages the number of pages; can be {@code 0} to disable preloading
     */
    public void setPages(final int pages) {
        this.mPages = pages;
    }

    @Override
    protected int getExtraLayoutSpace(final RecyclerView.State state) {
        if (mOrientationHelper == null) {
            mOrientationHelper = OrientationHelper.createOrientationHelper(this, getOrientation());
        }
        return mOrientationHelper.getTotalSpace() * mPages;
    }
}
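Using it is the same as using a plain LinearLayoutManager. A minimal sketch, assuming this runs inside an Activity with the layout from Section 16.1:

RecyclerView recyclerView = (RecyclerView) findViewById(R.id.my_recycler_view);
// preload two extra pages' worth of items off-screen
recyclerView.setLayoutManager(new PreLoadingLinearLayoutManager(this, 2));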
Section 16.3: RecyclerView with DataBinding

Here is a generic ViewHolder class that you can use with any DataBinding layout. An instance of the particular ViewDataBinding class is created from the inflated View object using the DataBindingUtil utility class:

import android.databinding.DataBindingUtil;
import android.databinding.ViewDataBinding;
import android.support.v7.widget.RecyclerView;
import android.view.View;

public class BindingViewHolder<T extends ViewDataBinding> extends RecyclerView.ViewHolder {

    private final T binding;

    public BindingViewHolder(View itemView) {
        super(itemView);
        binding = DataBindingUtil.bind(itemView);
    }

    public T getBinding() {
        return binding;
    }
}

After creating this class you can use the <layout> tag in your layout file to enable data binding for that layout, like this:

file name: my_item.xml

<?xml version="1.0" encoding="utf-8"?>
<layout xmlns:android="http://schemas.android.com/apk/res/android">

    <data>
        <variable
            name="item"
            type="ItemModel" />
    </data>

    <LinearLayout
        android:layout_width="match_parent"
        android:layout_height="match_parent">

        <TextView
            android:layout_width="wrap_content"
            android:layout_height="match_parent"
            android:text="@{item.itemLabel}" />
    </LinearLayout>
</layout>

And here is your sample data model:

public class ItemModel {
    public String itemLabel;
}

By default, the Android Data Binding library generates a ViewDataBinding class based on the layout file name, converting it to Pascal case and suffixing "Binding" to it. For this example it would be MyItemBinding for the layout file my_item.xml. That binding class also has a setter method for each object defined as data in the layout file (here, a setItem() method taking an ItemModel, generated from the variable named "item").
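As a quick illustration, not part of the original example, of how the generated class is used on its own (assuming the generated name MyItemBinding, a ViewGroup parent and an ItemModel model in scope):

LayoutInflater inflater = LayoutInflater.from(parent.getContext());
MyItemBinding binding = DataBindingUtil.inflate(inflater, R.layout.my_item, parent, false);
binding.setItem(model); // setter generated from the <variable name="item"> declaration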
Now that we have all the pieces, we can implement our adapter like this:

class MyAdapter extends RecyclerView.Adapter<BindingViewHolder<MyItemBinding>> {
    ArrayList<ItemModel> items = new ArrayList<>();

    public MyAdapter(ArrayList<ItemModel> items) {
        this.items = items;
    }

    @Override
    public BindingViewHolder<MyItemBinding> onCreateViewHolder(ViewGroup parent, int viewType) {
        return new BindingViewHolder<>(LayoutInflater.from(parent.getContext())
                .inflate(R.layout.my_item, parent, false));
    }

    @Override
    public void onBindViewHolder(BindingViewHolder<MyItemBinding> holder, int position) {
        holder.getBinding().setItem(items.get(position));
        holder.getBinding().executePendingBindings();
    }

    @Override
    public int getItemCount() {
        return items.size();
    }
}

Section 16.4: Animate data change

RecyclerView will perform a relevant animation if any of the "notify" methods are used, except for notifyDataSetChanged; this includes notifyItemChanged, notifyItemInserted, notifyItemMoved, notifyItemRemoved, etc.

The adapter should extend this class instead of RecyclerView.Adapter:

import android.support.annotation.NonNull;
import android.support.v7.widget.RecyclerView;
import java.util.List;

public abstract class AnimatedRecyclerAdapter<T, VH extends RecyclerView.ViewHolder>
        extends RecyclerView.Adapter<VH> {

    protected List<T> models;

    protected AnimatedRecyclerAdapter(@NonNull List<T> models) {
        this.models = models;
    }

    //Set new models.
    public void setModels(@NonNull final List<T> models) {
        applyAndAnimateRemovals(models);
        applyAndAnimateAdditions(models);
        applyAndAnimateMovedItems(models);
    }

    //Remove an item at position and notify changes.
    private T removeItem(int position) {
        final T model = models.remove(position);
        notifyItemRemoved(position);
        return model;
    }

    //Add an item at position and notify changes.
    private void addItem(int position, T model) {
        models.add(position, model);
        notifyItemInserted(position);
    }

    //Move an item at fromPosition to toPosition and notify changes.
    private void moveItem(int fromPosition, int toPosition) {
        final T model = models.remove(fromPosition);
        models.add(toPosition, model);
        notifyItemMoved(fromPosition, toPosition);
    }

    //Remove items that no longer exist in the new models.
    private void applyAndAnimateRemovals(@NonNull final List<T> newTs) {
        for (int i = models.size() - 1; i >= 0; i--) {
            final T model = models.get(i);
            if (!newTs.contains(model)) {
                removeItem(i);
            }
        }
    }

    //Add items that do not exist in the old models.
    private void applyAndAnimateAdditions(@NonNull final List<T> newTs) {
        for (int i = 0, count = newTs.size(); i < count; i++) {
            final T model = newTs.get(i);
            if (!models.contains(model)) {
                addItem(i, model);
            }
        }
    }

    //Move items that have changed their position.
    private void applyAndAnimateMovedItems(@NonNull final List<T> newTs) {
        for (int toPosition = newTs.size() - 1; toPosition >= 0; toPosition--) {
            final T model = newTs.get(toPosition);
            final int fromPosition = models.indexOf(model);
            if (fromPosition >= 0 && fromPosition != toPosition) {
                moveItem(fromPosition, toPosition);
            }
        }
    }
}

You should NOT use the same List for setModels and the List kept in the adapter. You declare models as a global variable; DataModel is a dummy class only:

private List<DataModel> models;
private YourAdapter adapter;

Initialize models before passing it to the adapter. YourAdapter is the implementation of AnimatedRecyclerAdapter:
models = new ArrayList<>();
//Add models
models.add(new DataModel());

//Do NOT pass the models directly. Otherwise, when you modify the global models,
//you will also modify the models in the adapter.
//adapter = new YourAdapter(models); <- This is wrong.
adapter = new YourAdapter(new ArrayList<>(models));

Call this after you have updated your global models:

adapter.setModels(new ArrayList<>(models));

If you do not override equals, all the comparisons are done by reference.

Example using SortedList

Android introduced the SortedList class soon after RecyclerView was introduced. This class handles all 'notify' method calls to the RecyclerView.Adapter to ensure proper animation, and even allows batching multiple changes, so the animations don't jitter.

import android.support.v7.util.SortedList;
import android.support.v7.widget.RecyclerView;
import android.support.v7.widget.util.SortedListAdapterCallback;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import java.util.List;

public class MyAdapter extends RecyclerView.Adapter<MyAdapter.ViewHolder> {

    private SortedList<DataModel> mSortedList;

    class ViewHolder extends RecyclerView.ViewHolder {

        TextView text;
        CheckBox checkBox;

        ViewHolder(View itemView) {
            super(itemView);
            //Initiate your code here...
        }

        void setDataModel(DataModel model) {
            //Update your UI with the data model passed here...
            text.setText(model.getText());
            checkBox.setChecked(model.isChecked());
        }
    }

    public MyAdapter() {
        mSortedList = new SortedList<>(DataModel.class, new SortedListAdapterCallback<DataModel>(this) {
            @Override
            public int compare(DataModel o1, DataModel o2) {
                //This gets called to find the ordering between objects in the array.
                if (o1.someValue() < o2.someValue()) {
                    return -1;
                } else if (o1.someValue() > o2.someValue()) {
                    return 1;
                } else {
                    return 0;
                }
            }

            @Override
            public boolean areContentsTheSame(DataModel oldItem, DataModel newItem) {
                //This is to see if the content of this object has changed. These items are only
                //considered equal if areItemsTheSame() returned true.
                //If this returns false, onBindViewHolder() is called with the holder containing
                //the item, and the item's position.
                return oldItem.getText().equals(newItem.getText())
                        && oldItem.isChecked() == newItem.isChecked();
            }

            @Override
            public boolean areItemsTheSame(DataModel item1, DataModel item2) {
                //Checks to see if these two items are the same. If not, the new one is added to
                //the list; otherwise, check if the content has changed.
                return item1.equals(item2);
            }
        });
    }

    @Override
    public ViewHolder onCreateViewHolder(ViewGroup parent, int viewType) {
        View itemView = null; //Initiate your item view here.
        return new ViewHolder(itemView);
    }

    @Override
    public void onBindViewHolder(ViewHolder holder, int position) {
        //Just update the holder with the object in the sorted list from the given position.
        DataModel model = mSortedList.get(position);
        if (model != null) {
            holder.setDataModel(model);
        }
    }

    @Override
    public int getItemCount() {
        return mSortedList.size();
    }

    public void resetList(List<DataModel> models) {
        //If you are performing multiple changes, use the batching methods to ensure proper animation.
        mSortedList.beginBatchedUpdates();
        mSortedList.clear();
        mSortedList.addAll(models);
        mSortedList.endBatchedUpdates();
    }

    //The following methods each modify the data set and automatically handle calling the
    //appropriate 'notify' method on the adapter.
    public void addModel(DataModel model) {
        mSortedList.add(model);
    }

    public void addModels(List<DataModel> models) {
        mSortedList.addAll(models);
    }

    public void clear() {
        mSortedList.clear();
    }

    public void removeModel(DataModel model) {
        mSortedList.remove(model);
    }

    public void removeModelAt(int i) {
        mSortedList.removeItemAt(i);
    }
}
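A short usage sketch, not from the original example (loadDataModels() is a placeholder for however you obtain your data):

MyAdapter adapter = new MyAdapter();
recyclerView.setAdapter(adapter);
// Replace the whole data set; the SortedList callback dispatches the matching 'notify' calls.
adapter.resetList(loadDataModels());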
                try {
                    Field mFieldPopup = popup.getClass().getDeclaredField("mPopup");
                    mFieldPopup.setAccessible(true);
                    MenuPopupHelper mPopup = (MenuPopupHelper) mFieldPopup.get(popup);
                    mPopup.setForceShowIcon(true);
                } catch (Exception e) {
                }

                popup.show();
            }
        });
    }
}

Alternative way to show icons in the menu:

try {
    Field[] fields = popup.getClass().getDeclaredFields();
    for (Field field : fields) {
        if ("mPopup".equals(field.getName())) {
            field.setAccessible(true);
            Object menuPopupHelper = field.get(popup);
            Class<?> classPopupHelper = Class.forName(menuPopupHelper.getClass().getName());
            Method setForceIcons = classPopupHelper.getMethod("setForceShowIcon", boolean.class);
            setForceIcons.invoke(menuPopupHelper, true);
            break;
        }
    }
} catch (Exception e) {
}

Section 16.6: Using several ViewHolders with ItemViewType

Sometimes a RecyclerView needs several types of views in the list shown in the UI, and each view needs a different layout XML to be inflated. For this, you may use different ViewHolders in a single adapter by means of a special method in RecyclerView, getItemViewType(int position).

Below is an example using two ViewHolders:

1. A ViewHolder for displaying list entries
2. A ViewHolder for displaying multiple header views

@Override
public RecyclerView.ViewHolder onCreateViewHolder(ViewGroup parent, int viewType) {
    View itemView = LayoutInflater.from(context).inflate(viewType, parent, false);
    return ViewHolder.create(itemView, viewType);
}

@Override
public void onBindViewHolder(RecyclerView.ViewHolder holder, int position) {
    final Item model = this.items.get(position);
    ((ViewHolder) holder).bind(model);
}

@Override
public int getItemViewType(int position) {
    return inSearchState ? R.layout.item_header : R.layout.item_entry;
}

abstract class ViewHolder {
    abstract void bind(Item model);

    public static ViewHolder create(View v, int viewType) {
        return viewType == R.layout.item_header ? new HeaderViewHolder(v) : new EntryViewHolder(v);
    }
}

static class EntryViewHolder extends ViewHolder {
    private View v;

    public EntryViewHolder(View v) {
        this.v = v;
    }

    @Override
    public void bind(Item model) {
        // Bind item data to entry view.
    }
}

static class HeaderViewHolder extends ViewHolder {
    private View v;

    public HeaderViewHolder(View v) {
        this.v = v;
    }

    @Override
    public void bind(Item model) {
        // Bind item data to header view.
    }
}

Section 16.7: Filter items inside RecyclerView with a SearchView

Add a filter method to your RecyclerView.Adapter:

public void filter(String text) {
    if (text.isEmpty()) {
        items.clear();
        items.addAll(itemsCopy);
    } else {
        ArrayList<PhoneBookItem> result = new ArrayList<>();
        text = text.toLowerCase();
        for (PhoneBookItem item : itemsCopy) {
            //match by name or phone
            if (item.name.toLowerCase().contains(text) || item.phone.toLowerCase().contains(text)) {
                result.add(item);
            }
        }
        items.clear();
        items.addAll(result);
    }
    notifyDataSetChanged();
}

itemsCopy is initialized in the adapter's constructor, e.g. itemsCopy.addAll(items).
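For context, a minimal constructor sketch (the adapter name here is hypothetical; the fields match the snippet above):

public PhoneBookAdapter(List<PhoneBookItem> phoneBookItems) {
    this.items = new ArrayList<>(phoneBookItems);
    // keep an unfiltered copy so filter() can restore the full list
    this.itemsCopy = new ArrayList<>();
    this.itemsCopy.addAll(items);
}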
With the filter method in place, just call it from the SearchView's OnQueryTextListener:

searchView.setOnQueryTextListener(new SearchView.OnQueryTextListener() {
    @Override
    public boolean onQueryTextSubmit(String query) {
        adapter.filter(query);
        return true;
    }

    @Override
    public boolean onQueryTextChange(String newText) {
        adapter.filter(newText);
        return true;
    }
});

Section 16.8: Drag&Drop and Swipe with RecyclerView

You can implement the swipe-to-dismiss and drag-and-drop features with the RecyclerView without using third-party libraries. Just use the ItemTouchHelper class included in the RecyclerView support library. Instantiate the ItemTouchHelper with a SimpleCallback; depending on which functionality you support, override onMove(RecyclerView, ViewHolder, ViewHolder) and/or onSwiped(ViewHolder, int), and finally attach it to your RecyclerView.

ItemTouchHelper.SimpleCallback simpleItemTouchCallback = new ItemTouchHelper.SimpleCallback(0, ItemTouchHelper.LEFT | ItemTouchHelper.RIGHT) {

    @Override
    public void onSwiped(RecyclerView.ViewHolder viewHolder, int swipeDir) {
        // remove item from adapter
    }

    @Override
    public boolean onMove(RecyclerView recyclerView, RecyclerView.ViewHolder viewHolder, RecyclerView.ViewHolder target) {
        final int fromPos = viewHolder.getAdapterPosition();
        final int toPos = target.getAdapterPosition();
        // move item from `fromPos` to `toPos` in adapter.
        return true; // true if moved, false otherwise
    }
};

ItemTouchHelper itemTouchHelper = new ItemTouchHelper(simpleItemTouchCallback);
itemTouchHelper.attachToRecyclerView(recyclerView);

It's worth mentioning that the SimpleCallback constructor applies the same swiping strategy to all items in the RecyclerView. It is in any case possible to change the default swiping direction for specific items by simply overriding getSwipeDirs(RecyclerView, ViewHolder). Let's suppose, for example, that our RecyclerView includes a HeaderViewHolder to which we obviously don't want to apply swiping.
It will be enough to override getSwipeDirs as follows:

@Override
public int getSwipeDirs(RecyclerView recyclerView, RecyclerView.ViewHolder viewHolder) {
    if (viewHolder instanceof HeaderViewHolder) {
        // no swipe for header
        return 0;
    }
    // default swipe for all other items
    return super.getSwipeDirs(recyclerView, viewHolder);
}

Section 16.9: Show default view till items load or when data is not available

Adapter Class

private class MyAdapter extends RecyclerView.Adapter<RecyclerView.ViewHolder> {

    final int EMPTY_VIEW = 77777;
    List<CustomData> datalist = new ArrayList<>();

    MyAdapter() {
        super();
    }

    @Override
    public RecyclerView.ViewHolder onCreateViewHolder(ViewGroup parent, int viewType) {
        LayoutInflater layoutInflater = LayoutInflater.from(parent.getContext());
        if (viewType == EMPTY_VIEW) {
            return new EmptyView(layoutInflater.inflate(R.layout.nothing_yet, parent, false));
        } else {
            return new ItemView(layoutInflater.inflate(R.layout.my_item, parent, false));
        }
    }

    @SuppressLint("SetTextI18n")
    @Override
    public void onBindViewHolder(final RecyclerView.ViewHolder holder, int position) {
        if (getItemViewType(position) == EMPTY_VIEW) {
            EmptyView emptyView = (EmptyView) holder;
            emptyView.primaryText.setText("No data yet");
            emptyView.secondaryText.setText("You're doing good !");
            emptyView.primaryText.setCompoundDrawablesWithIntrinsicBounds(null, new IconicsDrawable(getActivity()).icon(FontAwesome.Icon.faw_ticket).sizeDp(48).color(Color.DKGRAY), null, null);
        } else {
            ItemView itemView = (ItemView) holder;
            // Bind data to itemView
        }
    }

    @Override
    public int getItemCount() {
        return datalist.size() > 0 ? datalist.size() : 1;
    }

    @Override
    public int getItemViewType(int position) {
        if (datalist.size() == 0) {
            return EMPTY_VIEW;
        }
        return super.getItemViewType(position);
    }
}

nothing_yet.xml

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:layout_gravity="center"
    android:orientation="vertical"
    android:paddingBottom="100dp"
    android:paddingTop="100dp">

    <TextView
        android:id="@+id/nothingPrimary"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_gravity="center"
        android:drawableTint="@android:color/secondary_text_light"
        android:drawableTop="@drawable/ic_folder_open_black_24dp"
        android:enabled="false"
        android:fontFamily="sans-serif-light"
        android:text="No Item's Yet"
        android:textAppearance="?android:attr/textAppearanceLarge"
        android:textColor="@android:color/secondary_text_light"
        android:textSize="40sp"
        tools:targetApi="m" />

    <TextView
        android:id="@+id/nothingSecondary"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_gravity="center_horizontal"
        android:enabled="false"
        android:fontFamily="sans-serif-condensed"
        android:text="You're doing good !"
        android:textAppearance="?android:attr/textAppearanceSmall"
        android:textColor="@android:color/tertiary_text_light" />
</LinearLayout>

I'm using FontAwesome with the Iconics library for the images. Add this to your app-level build.gradle file:

compile 'com.mikepenz:fontawesome-typeface:4.6.0.3@aar'
compile 'com.mikepenz:iconics-core:2.8.1@aar'

Section 16.10: Add header/footer to a RecyclerView

This is a sample adapter code.
public class SampleAdapter extends RecyclerView.Adapter<RecyclerView.ViewHolder> {

    private static final int FOOTER_VIEW = 1;

    // Define a view holder for the footer view
    public class FooterViewHolder extends ViewHolder {
        public FooterViewHolder(View itemView) {
            super(itemView);
            itemView.setOnClickListener(new View.OnClickListener() {
                @Override
                public void onClick(View v) {
                    // Do whatever you want on clicking the item
                }
            });
        }
    }

    // Now define the view holder for a normal list item
    public class NormalViewHolder extends ViewHolder {
        public NormalViewHolder(View itemView) {
            super(itemView);
            itemView.setOnClickListener(new View.OnClickListener() {
                @Override
                public void onClick(View v) {
                    // Do whatever you want on clicking the normal items
                }
            });
        }
    }

    // And now in onCreateViewHolder you have to pass the correct view
    // while populating the list item.
    @Override
    public RecyclerView.ViewHolder onCreateViewHolder(ViewGroup parent, int viewType) {
        View v;
        if (viewType == FOOTER_VIEW) {
            v = LayoutInflater.from(parent.getContext()).inflate(R.layout.list_item_footer, parent, false);
            FooterViewHolder vh = new FooterViewHolder(v);
            return vh;
        }

        v = LayoutInflater.from(parent.getContext()).inflate(R.layout.list_item_normal, parent, false);
        NormalViewHolder vh = new NormalViewHolder(v);
        return vh;
    }

    // Now bind the view holders in onBindViewHolder
    @Override
    public void onBindViewHolder(RecyclerView.ViewHolder holder, int position) {
        try {
            if (holder instanceof NormalViewHolder) {
                NormalViewHolder vh = (NormalViewHolder) holder;
                vh.bindView(position);
            } else if (holder instanceof FooterViewHolder) {
                FooterViewHolder vh = (FooterViewHolder) holder;
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    // Now the critical part. You have to return the exact item count of your list.
    // I have only one footer, so I return data.size() + 1.
    // If you have multiple headers and footers, you have to return the total count,
    // e.g. headers.size() + data.size() + footers.size()
    @Override
    public int getItemCount() {
        if (data == null) {
            return 0;
        }

        if (data.size() == 0) {
            // Return 1 here to show nothing
            return 1;
        }

        // Add an extra view to show the footer view
        return data.size() + 1;
    }

    // Now define your own getItemViewType.
    @Override
    public int getItemViewType(int position) {
        if (position == data.size()) {
            // This is where we'll add the footer.
            return FOOTER_VIEW;
        }
        return super.getItemViewType(position);
    }

    // So you're done with adding a footer and its action on onClick.
    // Now set the default ViewHolder for NormalViewHolder
    public class ViewHolder extends RecyclerView.ViewHolder {
        // Define elements of a row here
        public ViewHolder(View itemView) {
            super(itemView);
            // Find view by ID and initialize here
        }

        public void bindView(int position) {
            // bindView() method to implement actions
        }
    }
}

Here's a good read about the implementation of a RecyclerView with header and footer.
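If you need both a header and a footer, the same view-type bookkeeping generalizes. A hedged sketch (HEADER_VIEW is a hypothetical constant; data is the list used above; onBindViewHolder would then look items up at position - 1 to account for the header):

private static final int HEADER_VIEW = 2;

@Override
public int getItemViewType(int position) {
    if (position == 0) {
        return HEADER_VIEW; // first slot holds the header
    }
    if (position == data.size() + 1) {
        return FOOTER_VIEW; // last slot, after all data items
    }
    return super.getItemViewType(position);
}

@Override
public int getItemCount() {
    // one header + the data + one footer
    return data.size() + 2;
}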
Alternate method:

While the above answer will work, you can also use a RecyclerView inside a NestedScrollView and add a layout for the header using the following approach:

<android.support.v4.widget.NestedScrollView
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <RelativeLayout
        android:layout_width="match_parent"
        android:layout_height="match_parent">

        <include
            layout="@layout/drawer_view_header"
            android:id="@+id/navigation_header" />

        <android.support.v7.widget.RecyclerView
            android:layout_below="@id/navigation_header"
            android:id="@+id/followers_list"
            android:layout_width="match_parent"
            android:layout_height="wrap_content" />
    </RelativeLayout>
</android.support.v4.widget.NestedScrollView>

Or you may also use a LinearLayout with vertical orientation inside your NestedScrollView.

Note: This will only work with RecyclerView 23.2.0 and above:

compile 'com.android.support:recyclerview-v7:23.2.0'

Section 16.11: Endless Scrolling in RecyclerView

Here is a code snippet for implementing endless scrolling in a RecyclerView.

Step 1: First, declare an abstract method in your RecyclerView adapter, like below:

public abstract class ViewAllCategoryAdapter extends RecyclerView.Adapter<RecyclerView.ViewHolder> {
    public abstract void load();
}

Step 2: Now override the onBindViewHolder and getItemCount() methods of the ViewAllCategoryAdapter class and call the load() method, like below:

@Override
public void onBindViewHolder(RecyclerView.ViewHolder holder, final int position) {
    if ((position >= getItemCount() - 1)) {
        load();
    }
}

@Override
public int getItemCount() {
    return YOURLIST.size();
}

Step 3: All the backend logic is complete; now it's time to execute it. Simply override the load() method where you create the object of your adapter. This method is called automatically when the user reaches the end of the list.

adapter = new ViewAllCategoryAdapter(CONTEXT, YOURLIST) {
    @Override
    public void load() {
        /* do your stuff here */
        /* This method is called automatically when the user reaches the end of your list. */
    }
};
recycleCategory.setAdapter(adapter);

Now the load() method is called automatically while the user scrolls to the end of the list.

Section 16.12: Add divider lines to RecyclerView items

Just add these lines to the initialization:

RecyclerView mRecyclerView = (RecyclerView) view.findViewById(R.id.recyclerView);
mRecyclerView.setLayoutManager(new LinearLayoutManager(getActivity()));
mRecyclerView.addItemDecoration(new DividerItemDecoration(getActivity(), DividerItemDecoration.VERTICAL));

Add an adapter and call notifyDataSetChanged() as usual!

This is not an inbuilt feature of RecyclerView but was added in the support libraries, so don't forget to include these in your app-level build.gradle file:

compile "com.android.support:appcompat-v7:25.3.1"
compile "com.android.support:recyclerview-v7:25.3.1"

Multiple ItemDecorations can be added to a single RecyclerView.

Changing divider color:

It's pretty easy to set a color for an ItemDecoration.

Step 1: Create a divider.xml file in the drawable folder:

<?xml version="1.0" encoding="utf-8"?>
<shape xmlns:android="http://schemas.android.com/apk/res/android"
    android:shape="line">
    <size
        android:width="1px"
        android:height="1px" />
    <solid android:color="@color/divider_color" />
</shape>
Step 2: Set the drawable:

// Get the drawable object
Drawable mDivider = ContextCompat.getDrawable(m_jContext, R.drawable.divider);

// Create a DividerItemDecoration whose orientation is horizontal
DividerItemDecoration hItemDecoration = new DividerItemDecoration(m_jContext, DividerItemDecoration.HORIZONTAL);
// Set the drawable on it
hItemDecoration.setDrawable(mDivider);

// Create a DividerItemDecoration whose orientation is vertical
DividerItemDecoration vItemDecoration = new DividerItemDecoration(m_jContext, DividerItemDecoration.VERTICAL);
// Set the drawable on it
vItemDecoration.setDrawable(mDivider);

Chapter 17: RecyclerView Decorations

Parameter  | Details
-----------|--------
decoration | the item decoration to add to the RecyclerView
index      | the index in the list of decorations for this RecyclerView. This is the order in which getItemOffsets and onDraw are called. Later calls might overdraw previous ones.

Section 17.1: Add divider to RecyclerView

First of all you need to create a class which extends RecyclerView.ItemDecoration:

public class SimpleBlueDivider extends RecyclerView.ItemDecoration {
    private Drawable mDivider;

    public SimpleBlueDivider(Context context) {
        mDivider = context.getResources().getDrawable(R.drawable.divider_blue);
    }

    @Override
    public void onDrawOver(Canvas c, RecyclerView parent, RecyclerView.State state) {
        // divider padding: add some padding here if you want, or leave it out
        int left = parent.getPaddingLeft() + 80;
        int right = parent.getWidth() - parent.getPaddingRight() - 30;

        int childCount = parent.getChildCount();
        for (int i = 0; i < childCount; i++) {
            View child = parent.getChildAt(i);
            RecyclerView.LayoutParams params = (RecyclerView.LayoutParams) child.getLayoutParams();
            int top = child.getBottom() + params.bottomMargin;
            int bottom = top + mDivider.getIntrinsicHeight();
            mDivider.setBounds(left, top, right, bottom);
            mDivider.draw(c);
        }
    }
}

Add divider_blue.xml to your drawable folder:

<?xml version="1.0" encoding="utf-8"?>
<shape xmlns:android="http://schemas.android.com/apk/res/android"
    android:shape="rectangle">
    <size
        android:width="1dp"
        android:height="4dp" />
    <solid android:color="#AA123456" />
</shape>

Then use it like:

recyclerView.addItemDecoration(new SimpleBlueDivider(context));

This is just an example of how dividers work; if you want to follow the Material Design specs when adding dividers, please take a look at the dividers section of the spec.

Section 17.2: Drawing a Separator

This will draw a line at the bottom of every view but the last, to act as a separator between items.
public class SeparatorDecoration extends RecyclerView.ItemDecoration {

    private final Paint mPaint;
    private final int mAlpha;

    public SeparatorDecoration(@ColorInt int color, float width) {
        mPaint = new Paint();
        mPaint.setColor(color);
        mPaint.setStrokeWidth(width);
        mAlpha = mPaint.getAlpha();
    }

    @Override
    public void getItemOffsets(Rect outRect, View view, RecyclerView parent, RecyclerView.State state) {
        final RecyclerView.LayoutParams params = (RecyclerView.LayoutParams) view.getLayoutParams();

        // we retrieve the position in the list
        final int position = params.getViewAdapterPosition();

        // add space for the separator to the bottom of every view but the last one
        if (position < state.getItemCount() - 1) {
            outRect.set(0, 0, 0, (int) mPaint.getStrokeWidth()); // left, top, right, bottom
        } else {
            outRect.setEmpty(); // 0, 0, 0, 0
        }
    }

    @Override
    public void onDraw(Canvas c, RecyclerView parent, RecyclerView.State state) {
        // a line will draw half its size to top and bottom,
        // hence the offset to place it correctly
        final int offset = (int) (mPaint.getStrokeWidth() / 2);

        // this will iterate over every visible view
        for (int i = 0; i < parent.getChildCount(); i++) {
            final View view = parent.getChildAt(i);
            final RecyclerView.LayoutParams params = (RecyclerView.LayoutParams) view.getLayoutParams();

            // get the position
            final int position = params.getViewAdapterPosition();

            // and finally draw the separator below every view but the last one
            if (position < state.getItemCount() - 1) {
                // apply alpha to support animations
                mPaint.setAlpha((int) (view.getAlpha() * mAlpha));

                float positionY = view.getBottom() + offset + view.getTranslationY();
                // do the drawing
                c.drawLine(view.getLeft() + view.getTranslationX(),
                        positionY,
                        view.getRight() + view.getTranslationX(),
                        positionY,
                        mPaint);
            }
        }
    }
}

Section 17.3: How to add dividers using DividerItemDecoration

The DividerItemDecoration is a RecyclerView.ItemDecoration that can be used as a divider between items:

DividerItemDecoration mDividerItemDecoration = new DividerItemDecoration(context, mLayoutManager.getOrientation());
recyclerView.addItemDecoration(mDividerItemDecoration);

It supports both orientations using DividerItemDecoration.VERTICAL and DividerItemDecoration.HORIZONTAL.

Section 17.4: Per-item margins with ItemDecoration

You can use a RecyclerView.ItemDecoration to put extra margins around each item in a RecyclerView. This can in some cases clean up both your adapter implementation and your item view XML.

public class MyItemDecoration extends RecyclerView.ItemDecoration {

    private final int extraMargin;

    @Override
    public void getItemOffsets(Rect outRect, View view, RecyclerView parent, RecyclerView.State state) {
        int position = parent.getChildAdapterPosition(view);

        // It's easy to put extra margin on the last item...
        if (position + 1 == parent.getAdapter().getItemCount()) {
            outRect.bottom = extraMargin; // unit is px
        }

        // ...or you could give each item in the RecyclerView different
        // margins based on its position...
        if (position % 2 == 0) {
            outRect.right = extraMargin;
        } else {
            outRect.left = extraMargin;
        }

        // ...or based on some property of the item itself
        // (getItem() is assumed to be a method on your own adapter subclass)
        MyListItem item = parent.getAdapter().getItem(position);
        if (item.isFirstItemInSection()) {
            outRect.top = extraMargin;
        }
    }

    public MyItemDecoration(Context context) {
        extraMargin = context.getResources().getDimensionPixelOffset(R.dimen.extra_margin);
    }
}

To enable the decoration, simply add it to your RecyclerView:

// in your onCreate()
RecyclerView rv = (RecyclerView) findViewById(R.id.myList);
rv.addItemDecoration(new MyItemDecoration(context));

Section 17.5: ItemOffsetDecoration for GridLayoutManager in RecyclerView

The following example gives equal spacing to items in a GridLayout.

ItemOffsetDecoration.java

public class ItemOffsetDecoration extends RecyclerView.ItemDecoration {
    private int mItemOffset;

    public ItemOffsetDecoration(int itemOffset) {
        mItemOffset = itemOffset;
    }

    public ItemOffsetDecoration(@NonNull Context context, @DimenRes int itemOffsetId) {
        this(context.getResources().getDimensionPixelSize(itemOffsetId));
    }

    @Override
    public void getItemOffsets(Rect outRect, View view, RecyclerView parent, RecyclerView.State state) {
        super.getItemOffsets(outRect, view, parent, state);

        int position = parent.getChildLayoutPosition(view);
        GridLayoutManager manager = (GridLayoutManager) parent.getLayoutManager();

        if (position < manager.getSpanCount())
            outRect.top = mItemOffset;

        if (position % 2 != 0) {
            outRect.right = mItemOffset;
        }

        outRect.left = mItemOffset;
        outRect.bottom = mItemOffset;
    }
}

You can apply the ItemDecoration like this:

recyclerView = (RecyclerView) view.findViewById(R.id.recycler_view);
GridLayoutManager lLayout = new GridLayoutManager(getActivity(), 2);
ItemOffsetDecoration itemDecoration = new ItemOffsetDecoration(mActivity, R.dimen.item_offset);
recyclerView.addItemDecoration(itemDecoration);
recyclerView.setLayoutManager(lLayout);

And an example item offset:

<dimen name="item_offset">5dp</dimen>

Chapter 18: RecyclerView onClickListeners

Section 18.1: Kotlin and RxJava example

The first example reimplemented in Kotlin, using RxJava for cleaner interaction.

import android.view.LayoutInflater
import android.view.View
import android.view.ViewGroup
import android.support.v7.widget.RecyclerView
import rx.subjects.PublishSubject

public class SampleAdapter(private val items: Array<String>) : RecyclerView.Adapter<SampleAdapter.ViewHolder>() {

    // change to different subjects from rx.subjects to get different behavior
    // BehaviorSubject, for example, allows subscribers to receive the last event on subscribe
    // PublishSubject, on the other hand, sends events only after subscribing, which is desirable for clicks
    public val itemClickStream: PublishSubject<View> = PublishSubject.create()

    override fun getItemCount(): Int {
        return items.size
    }

    override fun onCreateViewHolder(parent: ViewGroup, viewType: Int): ViewHolder
    {
        val v = LayoutInflater.from(parent.context).inflate(R.layout.text_row_item, parent, false)
        return ViewHolder(v)
    }

    override fun onBindViewHolder(holder: ViewHolder, position: Int) {
        holder.bind(items[position])
    }

    public inner class ViewHolder(view: View) : RecyclerView.ViewHolder(view) {
        private val textView: TextView by lazy { view.findViewById(R.id.textView) as TextView }

        init {
            view.setOnClickListener { v -> itemClickStream.onNext(v) }
        }

        fun bind(text: String) {
            textView.text = text
        }
    }
}

Usage is quite simple then. It's possible to subscribe on a separate thread using RxJava facilities.

val adapter = SampleAdapter(arrayOf("Hello", "World"))
adapter.itemClickStream.subscribe { v ->
    if (v.id == R.id.textView) {
        // do something
    }
}

Section 18.2: RecyclerView Click listener

public class RecyclerTouchListener implements RecyclerView.OnItemTouchListener {

    private GestureDetector gestureDetector;
    private RecyclerTouchListener.ClickListener clickListener;

    public RecyclerTouchListener(Context context, final RecyclerView recyclerView, final RecyclerTouchListener.ClickListener clickListener) {
        this.clickListener = clickListener;
        gestureDetector = new GestureDetector(context, new GestureDetector.SimpleOnGestureListener() {
            @Override
            public boolean onSingleTapUp(MotionEvent e) {
                return true;
            }

            @Override
            public void onLongPress(MotionEvent e) {
                View child = recyclerView.findChildViewUnder(e.getX(), e.getY());
                if (child != null && clickListener != null) {
                    clickListener.onLongClick(child, recyclerView.getChildPosition(child));
                }
            }
        });
    }

    @Override
    public boolean onInterceptTouchEvent(RecyclerView rv, MotionEvent e) {
        View child = rv.findChildViewUnder(e.getX(), e.getY());
        if (child != null && clickListener != null && gestureDetector.onTouchEvent(e)) {
            clickListener.onClick(child, rv.getChildPosition(child));
        }
        return false;
    }

    @Override
    public void onTouchEvent(RecyclerView rv, MotionEvent e) {
    }

    @Override
    public void onRequestDisallowInterceptTouchEvent(boolean disallowIntercept) {
    }

    public interface ClickListener {
        void onLongClick(View child, int childPosition);
        void onClick(View child, int childPosition);
    }
}

In MainActivity:

RecyclerView recyclerView = (RecyclerView) findViewById(R.id.recyclerview);
recyclerView.addOnItemTouchListener(new RecyclerTouchListener(getActivity(), recyclerView, new RecyclerTouchListener.ClickListener() {
    @Override
    public void onLongClick(View child, int childPosition) {
    }

    @Override
    public void onClick(View child, int childPosition) {
    }
}));

Section 18.3: Another way to implement Item Click Listener

Another way to implement an item click listener is to use an interface with several methods, the number of which is equal to the number of clickable views, and to use overridden click listeners as you can see below. This method is more flexible, because you can set click listeners on different views and quite easily control the click logic separately for each.
public class CustomAdapter extends RecyclerView.Adapter<CustomAdapter.CustomHolder> {

    private ArrayList<Object> mObjects;
    private ClickInterface mClickInterface;

    public interface ClickInterface {
        void clickEventOne(Object obj);
        void clickEventTwo(Object obj1, Object obj2);
    }

    public void setClickInterface(ClickInterface clickInterface) {
        mClickInterface = clickInterface;
    }

    public CustomAdapter() {
        mObjects = new ArrayList<>();
    }

    public void addItems(ArrayList<Object> objects) {
        mObjects.clear();
        mObjects.addAll(objects);
        notifyDataSetChanged();
    }

    @Override
    public CustomHolder onCreateViewHolder(ViewGroup parent, int viewType) {
        View v = LayoutInflater.from(parent.getContext())
                .inflate(R.layout.list_item, parent, false);
        return new CustomHolder(v);
    }

    @Override
    public void onBindViewHolder(CustomHolder holder, int position) {
        //make all even positions not clickable
        holder.firstClickListener.setClickable(position % 2 != 0);
        holder.firstClickListener.setPosition(position);
        holder.secondClickListener.setPosition(position);
    }

    private class FirstClickListener implements View.OnClickListener {
        private int mPosition;
        private boolean mClickable;

        void setPosition(int position) {
            mPosition = position;
        }

        void setClickable(boolean clickable) {
            mClickable = clickable;
        }

        @Override
        public void onClick(View v) {
            if (mClickable) {
                mClickInterface.clickEventOne(mObjects.get(mPosition));
            }
        }
    }

    private class SecondClickListener implements View.OnClickListener {
        private int mPosition;

        void setPosition(int position) {
            mPosition = position;
        }

        @Override
        public void onClick(View v) {
            mClickInterface.clickEventTwo(mObjects.get(mPosition), v);
        }
    }

    @Override
    public int getItemCount() {
        return mObjects.size();
    }

    protected class CustomHolder extends RecyclerView.ViewHolder {
        FirstClickListener firstClickListener;
        SecondClickListener secondClickListener;
        View v1, v2;

        public CustomHolder(View itemView) {
            super(itemView);
            v1 = itemView.findViewById(R.id.v1);
            v2 = itemView.findViewById(R.id.v2);

            firstClickListener = new FirstClickListener();
            secondClickListener = new SecondClickListener();

            v1.setOnClickListener(firstClickListener);
            v2.setOnClickListener(secondClickListener);
        }
    }
}

And when you have an instance of the adapter, you can set your click listener, which listens to clicks on each of the views:

customAdapter.setClickInterface(new CustomAdapter.ClickInterface() {
    @Override
    public void clickEventOne(Object obj) {
        // Your implementation here
    }

    @Override
    public void clickEventTwo(Object obj1, Object obj2) {
        // Your implementation here
    }
});

Section 18.4: New Example

public class SampleAdapter extends RecyclerView.Adapter<SampleAdapter.ViewHolder> {

    private static final String TAG = "SampleAdapter";

    private String[] mDataSet;
    private OnRVItemClickListener mListener;

    /**
     * Provide a reference to the type of views that you are using (custom ViewHolder).
     * The class is an inner (non-static) class here so that it can reach mListener.
     */
    public class ViewHolder extends RecyclerView.ViewHolder {
        private final TextView textView;

        public ViewHolder(View v) {
            super(v);
            // Define click listener for the ViewHolder's View.
            v.setOnClickListener(new View.OnClickListener() {
                @Override
                public void onClick(View v) {
                    // handle click events here
                    Log.d(TAG, "Element " + getAdapterPosition() + " clicked.");
                    mListener.onRVItemClicked(getAdapterPosition(), v); //set callback
                }
            });
            textView = (TextView) v.findViewById(R.id.textView);
        }

        public TextView getTextView() {
            return textView;
        }
    }

    /**
     * Initialize the dataset of the Adapter.
     *
     * @param dataSet String[] containing the data to populate views to be used by RecyclerView.
     */
    public SampleAdapter(String[] dataSet) {
        mDataSet = dataSet;
    }

    // Create new views (invoked by the layout manager)
    @Override
    public ViewHolder onCreateViewHolder(ViewGroup viewGroup, int viewType) {
        // Create a new view.
        View v = LayoutInflater.from(viewGroup.getContext())
                .inflate(R.layout.text_row_item, viewGroup, false);
        return new ViewHolder(v);
    }

    // Replace the contents of a view (invoked by the layout manager)
    @Override
    public void onBindViewHolder(ViewHolder viewHolder, final int position) {
        // Get the element from your dataset at this position and replace the
        // contents of the view with that element
        viewHolder.getTextView().setText(mDataSet[position]);
    }

    // Return the size of your dataset (invoked by the layout manager)
    @Override
    public int getItemCount() {
        return mDataSet.length;
    }

    public void setOnRVClickListener(OnRVItemClickListener listener) {
        mListener = listener;
    }

    public interface OnRVItemClickListener {
        void onRVItemClicked(int position, View v);
    }
}

Section 18.5: Easy OnLongClick and OnClick Example

First of all, implement your view holder:

implements View.OnClickListener, View.OnLongClickListener

Then, register the listeners as follows:

itemView.setOnClickListener(this);
itemView.setOnLongClickListener(this);

Next, override the listeners as follows:

@Override
public void onClick(View v) {
    onclicklistner.onItemClick(getAdapterPosition(), v);
}

@Override
public boolean onLongClick(View v) {
    onclicklistner.onItemLongClick(getAdapterPosition(), v);
    return true;
}

And finally, add the following code:

public void setOnItemClickListener(onClickListner onclicklistner) {
    SampleAdapter.onclicklistner = onclicklistner;
}

public void setHeader(View v) {
    this.headerView = v;
}

public interface onClickListner {
    void onItemClick(int position, View v);
    void onItemLongClick(int position, View v);
}

Adapter demo:

package adaptor;

import android.annotation.SuppressLint;
import android.content.Context;
import android.support.v7.widget.RecyclerView;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import android.widget.TextView;

import com.wings.example.recycleview.MainActivity;
import com.wings.example.recycleview.R;

import java.util.ArrayList;

public class SampleAdapter extends RecyclerView.Adapter<RecyclerView.ViewHolder> {
    Context context;
    private ArrayList<String> arrayList;
    private static onClickListner onclicklistner;
    private static final int VIEW_HEADER = 0;
    private static final int VIEW_NORMAL = 1;
    private View headerView;

    public SampleAdapter(Context context) {
        this.context = context;
        arrayList = MainActivity.arrayList;
    }

    public class HeaderViewHolder extends RecyclerView.ViewHolder {
        public HeaderViewHolder(View itemView) {
            super(itemView);
        }
    }

    public class ItemViewHolder extends RecyclerView.ViewHolder implements View.OnClickListener, View.OnLongClickListener {
        TextView txt_pos;
        SampleAdapter sampleAdapter;

        public ItemViewHolder(View itemView, SampleAdapter sampleAdapter) {
            super(itemView);
            itemView.setOnClickListener(this);
            itemView.setOnLongClickListener(this);
            txt_pos = (TextView) itemView.findViewById(R.id.txt_pos);
            this.sampleAdapter = sampleAdapter;
        }

        @Override
        public void onClick(View v) {
            onclicklistner.onItemClick(getAdapterPosition(), v);
        }

        @Override
        public boolean onLongClick(View v) {
            onclicklistner.onItemLongClick(getAdapterPosition(), v);
            return true;
        }
    }

    public void setOnItemClickListener(onClickListner onclicklistner) {
        SampleAdapter.onclicklistner = onclicklistner;
    }

    public void setHeader(View v) {
        this.headerView = v;
    }

    public interface onClickListner {
        void onItemClick(int position, View v);
        void onItemLongClick(int position, View v);
    }

    @Override
    public int getItemCount() {
        return arrayList.size() + 1;
    }

    @Override
    public int getItemViewType(int position) {
        return position == 0 ? VIEW_HEADER : VIEW_NORMAL;
    }

    @SuppressLint("InflateParams")
    @Override
    public RecyclerView.ViewHolder onCreateViewHolder(ViewGroup viewGroup, int viewType) {
        if (viewType == VIEW_HEADER) {
            return new HeaderViewHolder(headerView);
        } else {
            View view = LayoutInflater.from(viewGroup.getContext()).inflate(R.layout.custom_recycler_row_sample_item, viewGroup, false);
            return new ItemViewHolder(view, this);
        }
    }

    @Override
    public void onBindViewHolder(RecyclerView.ViewHolder viewHolder, int position) {
        if (viewHolder.getItemViewType() == VIEW_HEADER) {
            return;
        } else {
            ItemViewHolder itemViewHolder = (ItemViewHolder) viewHolder;
            itemViewHolder.txt_pos.setText(arrayList.get(position - 1));
        }
    }
}

The example code above can be called by the following code:

sampleAdapter.setOnItemClickListener(new SampleAdapter.onClickListner() {
    @Override
    public void onItemClick(int position, View v) {
        position = position + 1; //As we are adding a header
        Log.e(TAG + "ON ITEM CLICK", position + "");
        Snackbar.make(v, "On item click " + position, Snackbar.LENGTH_LONG).show();
    }

    @Override
    public void onItemLongClick(int position, View v) {
        position = position + 1; //As we are adding a header
        Log.e(TAG + "ON ITEM LONG CLICK", position + "");
        Snackbar.make(v, "On item longclick " + position, Snackbar.LENGTH_LONG).show();
    }
});

Section 18.6: Item Click Listeners

To implement an item click listener and/or an item long click listener, you can create an interface in your adapter:

public class CustomAdapter extends RecyclerView.Adapter<CustomAdapter.ViewHolder> {

    public interface OnItemClickListener {
        void onItemSelected(int position, View view, CustomObject object);
    }

    public interface OnItemLongClickListener {
        boolean onItemSelected(int position, View view, CustomObject object);
    }

    public final class ViewHolder extends RecyclerView.ViewHolder {
        public ViewHolder(View itemView) {
            super(itemView);
            itemView.setOnClickListener(new View.OnClickListener() {
                @Override
                public void onClick(View view) {
                    if (mOnItemClickListener != null) {
                        // read the position when the click actually happens,
                        // not when the holder is constructed
                        int position = getAdapterPosition();
                        mOnItemClickListener.onItemSelected(position, view, mDataSet.get(position));
                    }
                }
            });
            itemView.setOnLongClickListener(new View.OnLongClickListener() {
                @Override
                public boolean onLongClick(View view) {
                    if (mOnItemLongClickListener != null) {
                        int position = getAdapterPosition();
                        return mOnItemLongClickListener.onItemSelected(position, view, mDataSet.get(position));
                    }
                    return false;
                }
            });
        }
    }

    private List<CustomObject> mDataSet;
    private OnItemClickListener mOnItemClickListener;
    private OnItemLongClickListener mOnItemLongClickListener;

    public CustomAdapter(List<CustomObject> dataSet) {
        mDataSet = dataSet;
    }

    @Override
    public CustomAdapter.ViewHolder onCreateViewHolder(ViewGroup parent, int viewType) {
        View view = LayoutInflater.from(parent.getContext())
                .inflate(R.layout.view_item_custom, parent, false);
        return new ViewHolder(view);
    }

    @Override
    public void onBindViewHolder(CustomAdapter.ViewHolder holder, int position) {
        // Bind views
    }

    @Override
    public int getItemCount() {
        return mDataSet.size();
    }

    public void setOnItemClickListener(OnItemClickListener listener) {
        mOnItemClickListener = listener;
    }

    public void setOnItemLongClickListener(OnItemLongClickListener listener) {
        mOnItemLongClickListener = listener;
    }
}

Then you can set your click listeners after you create an instance of the adapter:

customAdapter.setOnItemClickListener(new CustomAdapter.OnItemClickListener() {
    @Override
    public void onItemSelected(int position, View view, CustomObject object) {
        // Your implementation here
    }
});

customAdapter.setOnItemLongClickListener(new CustomAdapter.OnItemLongClickListener() {
    @Override
    public boolean onItemSelected(int position, View view, CustomObject object) {
        // Your implementation here
        return true;
    }
});

Chapter 19: RecyclerView and LayoutManagers

Section 19.1: Adding header view to recyclerview with gridlayout manager

To add a header to a recyclerview with a gridlayout, first the adapter needs to be told that the header view is the first position, rather than the standard cell used for the content. Next, the layout manager must be told that the first position should have a span equal to the span count of the entire list.

Take a regular RecyclerView.Adapter class and configure it as follows:

public class HeaderAdapter extends RecyclerView.Adapter<RecyclerView.ViewHolder> {
    private static final int ITEM_VIEW_TYPE_HEADER = 0;
    private static final int ITEM_VIEW_TYPE_ITEM = 1;

    private List<YourModel> mModelList;

    public HeaderAdapter(List<YourModel> modelList) {
        mModelList = modelList;
    }

    public boolean isHeader(int position) {
        return position == 0;
    }

    @Override
    public RecyclerView.ViewHolder onCreateViewHolder(ViewGroup parent, int viewType) {
        LayoutInflater inflater = LayoutInflater.from(parent.getContext());

        if (viewType == ITEM_VIEW_TYPE_HEADER) {
            View headerView = inflater.inflate(R.layout.header, parent, false);
            return new HeaderHolder(headerView);
        }

        View cellView = inflater.inflate(R.layout.gridcell, parent, false);
        return new ModelHolder(cellView);
    }

    @Override
    public int getItemViewType(int position) {
        return isHeader(position) ? ITEM_VIEW_TYPE_HEADER : ITEM_VIEW_TYPE_ITEM;
    }

    @Override
    public void onBindViewHolder(RecyclerView.ViewHolder h, int position) {
        if (isHeader(position)) {
            return;
        }

        final YourModel model = mModelList.get(position - 1); // Subtract 1 for header

        ModelHolder holder = (ModelHolder) h;
        // populate your holder with data from your model as usual
    }

    @Override
    public int getItemCount() {
        return mModelList.size() + 1; // add one for the header
    }
}

Then in the activity/fragment:

final HeaderAdapter adapter = new HeaderAdapter(mModelList);
final GridLayoutManager manager = new GridLayoutManager(this, 2); // context, span count
manager.setSpanSizeLookup(new GridLayoutManager.SpanSizeLookup() {
    @Override
    public int getSpanSize(int position) {
        return adapter.isHeader(position) ? manager.getSpanCount() : 1;
    }
});
mRecyclerView.setLayoutManager(manager);
mRecyclerView.setAdapter(adapter);

The same approach can be used to add a footer in addition to, or instead of, a header.

Source: Chiu-Ki Chan's Square Island blog

Section 19.2: GridLayoutManager with dynamic span count

When creating a recyclerview with a gridlayout layout manager you have to specify the span count in the constructor. Span count refers to the number of columns.
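For reference, the fixed-span setup this section improves on is a one-liner (a span count of 2 is chosen arbitrarily here):

// always two columns, regardless of screen width or orientation
recyclerView.setLayoutManager(new GridLayoutManager(context, 2));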
This is fairly clunky and doesn't take into account larger screen sizes or screen orientation. One approach is to create multiple layouts for the various screen sizes. Another, more dynamic approach can be seen below.

First we create a custom RecyclerView class as follows:

public class AutofitRecyclerView extends RecyclerView {
    private GridLayoutManager manager;
    private int columnWidth = -1;

    public AutofitRecyclerView(Context context) {
        super(context);
        init(context, null);
    }

    public AutofitRecyclerView(Context context, AttributeSet attrs) {
        super(context, attrs);
        init(context, attrs);
    }

    public AutofitRecyclerView(Context context, AttributeSet attrs, int defStyle) {
        super(context, attrs, defStyle);
        init(context, attrs);
    }

    private void init(Context context, AttributeSet attrs) {
        if (attrs != null) {
            int[] attrsArray = {
                android.R.attr.columnWidth
            };
            TypedArray array = context.obtainStyledAttributes(attrs, attrsArray);
            columnWidth = array.getDimensionPixelSize(0, -1);
            array.recycle();
        }

        manager = new GridLayoutManager(getContext(), 1);
        setLayoutManager(manager);
    }

    @Override
    protected void onMeasure(int widthSpec, int heightSpec) {
        super.onMeasure(widthSpec, heightSpec);
        if (columnWidth > 0) {
            int spanCount = Math.max(1, getMeasuredWidth() / columnWidth);
            manager.setSpanCount(spanCount);
        }
    }
}

This class determines how many columns can fit into the recyclerview. To use it you will need to put it into your layout.xml as follows:

<?xml version="1.0" encoding="utf-8"?>
<com.path.to.your.class.autofitRecyclerView.AutofitRecyclerView
    xmlns:android="http://schemas.android.com/apk/res/android"
    android:id="@+id/auto_fit_recycler_view"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:columnWidth="200dp"
    android:clipToPadding="false" />

Notice that we use the columnWidth attribute. The recyclerview will need it to determine how many columns will fit into the available space.

In your activity/fragment you just get a reference to the recyclerview and set an adapter on it (and any item decorations or animations that you want to add). DO NOT SET A LAYOUT MANAGER.

RecyclerView recyclerView = (RecyclerView) findViewById(R.id.auto_fit_recycler_view);
recyclerView.setAdapter(new MyAdapter());

(where MyAdapter is your adapter class)

You now have a recyclerview that will adjust the span count (i.e. columns) to fit the screen size. As a final addition you might want to center the columns in the recyclerview (by default they are aligned to layout_start). You can do that by modifying the AutofitRecyclerView class a little. Start by creating an inner class in the recyclerview. This will be a class that extends GridLayoutManager.
It will add enough padding to the left and right in order to center the rows:

public class AutofitRecyclerView extends RecyclerView {
    // etc see above

    private class CenteredGridLayoutManager extends GridLayoutManager {
        public CenteredGridLayoutManager(Context context, AttributeSet attrs, int defStyleAttr, int defStyleRes) {
            super(context, attrs, defStyleAttr, defStyleRes);
        }

        public CenteredGridLayoutManager(Context context, int spanCount) {
            super(context, spanCount);
        }

        public CenteredGridLayoutManager(Context context, int spanCount, int orientation, boolean reverseLayout) {
            super(context, spanCount, orientation, reverseLayout);
        }

        @Override
        public int getPaddingLeft() {
            final int totalItemWidth = columnWidth * getSpanCount();
            if (totalItemWidth >= AutofitRecyclerView.this.getMeasuredWidth()) {
                return super.getPaddingLeft(); // do nothing
            } else {
                return Math.round((AutofitRecyclerView.this.getMeasuredWidth() / (1f + getSpanCount())) - (totalItemWidth / (1f + getSpanCount())));
            }
        }

        @Override
        public int getPaddingRight() {
            return getPaddingLeft();
        }
    }
}

Then when you set the LayoutManager in the AutofitRecyclerView use the CenteredGridLayoutManager as follows:

private void init(Context context, AttributeSet attrs) {
    if (attrs != null) {
        int[] attrsArray = {
            android.R.attr.columnWidth
        };
        TypedArray array = context.obtainStyledAttributes(attrs, attrsArray);
        columnWidth = array.getDimensionPixelSize(0, -1);
        array.recycle();
    }

    manager = new CenteredGridLayoutManager(getContext(), 1);
    setLayoutManager(manager);
}

And that's it! You have a dynamic span count, center-aligned, GridLayoutManager-based recyclerview.

Sources: Chiu-Ki Chan's Square Island blog, StackOverflow

Section 19.3: Simple list with LinearLayoutManager

This example adds a list of places with image and name by using an ArrayList of custom Place objects as the dataset.

Activity layout

The layout of the activity / fragment (or wherever the RecyclerView is used) only has to contain the RecyclerView. There is no ScrollView or a specific layout needed.

<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent">
<?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="match_parent" android:layout_height="wrap_content" android:gravity="center_vertical" android:orientation="horizontal" android:padding="8dp"> <ImageView android:id="@+id/image" android:layout_width="36dp" android:layout_height="36dp" android:layout_marginEnd="8dp" android:layout_marginRight="8dp" /> <TextView android:id="@+id/name" android:layout_width="wrap_content" android:layout_height="wrap_content" /> GoalKicker.com Android Notes for Professionals 145 </LinearLayout> Create a RecyclerView adapter and ViewHolder Next, you have to inherit the RecyclerView.Adapter and the RecyclerView.ViewHolder. A usual class structure would be: public class PlaceListAdapter extends RecyclerView.Adapter<PlaceListAdapter.ViewHolder> { // ... public class ViewHolder extends RecyclerView.ViewHolder { // ... } } First, we implement the ViewHolder. It only inherits the default constructor and saves the needed views into some elds: public class ViewHolder extends RecyclerView.ViewHolder { private ImageView imageView; private TextView nameView; public ViewHolder(View itemView) { super(itemView); imageView = (ImageView) itemView.findViewById(R.id.image); nameView = (TextView) itemView.findViewById(R.id.name); } } The adapter's constructor sets the used dataset: public class PlaceListAdapter extends RecyclerView.Adapter<PlaceListAdapter.ViewHolder> { private List<Place> mPlaces; public PlaceListAdapter(List<Place> contacts) { mPlaces = contacts; } // ... } To use our custom list item layout, we override the method onCreateViewHolder(...). In this example, the layout le is called place_list_item.xml. public class PlaceListAdapter extends RecyclerView.Adapter<PlaceListAdapter.ViewHolder> { // ... @Override public ViewHolder onCreateViewHolder(ViewGroup parent, int viewType) { View view = LayoutInflater.from(parent.getContext()).inflate( R.layout.place_list_item, parent, false ); return new ViewHolder(view); } // ... GoalKicker.com Android Notes for Professionals 146 } In the onBindViewHolder(...), we actually set the views' contents. We get the used model by nding it in the List at the given position and then set image and name on the ViewHolder's views. public class PlaceListAdapter extends RecyclerView.Adapter<PlaceListAdapter.ViewHolder> { // ... @Override public void onBindViewHolder(PlaceListAdapter.ViewHolder viewHolder, int position) { Place place = mPlaces.get(position); viewHolder.nameView.setText(place.getName()); viewHolder.imageView.setImageBitmap(place.getImage()); } // ... } We also need to implement getItemCount(), which simply return the List's size. public class PlaceListAdapter extends RecyclerView.Adapter<PlaceListAdapter.ViewHolder> { // ... @Override public int getItemCount() { return mPlaces.size(); } // ... } (Generate random data) For this example, we'll generate some random places. @Override protected void onCreate(Bundle savedInstanceState) { // ... List<Place> places = randomPlaces(5); // ... } private List<Place> randomPlaces(int amount) { List<Place> places = new ArrayList<>(); for (int i = 0; i < amount; i++) { places.add(new Place( BitmapFactory.decodeResource(getResources(), Math.random() > 0.5 ? 
R.drawable.ic_account_grey600_36dp : R.drawable.ic_android_grey600_36dp ), "Place #" + (int) (Math.random() * 1000) )); } return places; } Connect the RecyclerView with the PlaceListAdapter and the dataset GoalKicker.com Android Notes for Professionals 147 Connecting a RecyclerView with an adapter is very easy. You have to set the LinearLayoutManager as layout manager to achieve the list layout. @Override protected void onCreate(Bundle savedInstanceState) { // ... RecyclerView recyclerView = (RecyclerView) findViewById(R.id.my_recycler_view); recyclerView.setAdapter(new PlaceListAdapter(places)); recyclerView.setLayoutManager(new LinearLayoutManager(this)); } Done! Section 19.4: StaggeredGridLayoutManager 1. Create your RecyclerView in your layout xml le: <android.support.v7.widget.RecyclerView android:id="@+id/recycleView" android:layout_width="match_parent" android:layout_height="match_parent" /> 2. Create your Model class for holding your data: public class PintrestItem { String url; public PintrestItem(String url,String name){ this.url=url; this.name=name; } public String getUrl() { return url; } public String getName(){ return name; } String name; } 3. Create a layout le to hold RecyclerView items: <ImageView android:layout_width="match_parent" android:layout_height="wrap_content" android:adjustViewBounds="true" android:scaleType="centerCrop" android:id="@+id/imageView"/> <TextView android:layout_width="match_parent" android:layout_height="wrap_content" android:gravity="center" android:id="@+id/name" android:layout_gravity="center" android:textColor="@android:color/white"/> 4. Create the adapter class for the RecyclerView: GoalKicker.com Android Notes for Professionals 148 public class PintrestAdapter extends RecyclerView.Adapter<PintrestAdapter.PintrestViewHolder>{ private ArrayList<PintrestItem>images; Picasso picasso; Context context; public PintrestAdapter(ArrayList<PintrestItem>images,Context context){ this.images=images; picasso=Picasso.with(context); this.context=context; } @Override public PintrestViewHolder onCreateViewHolder(ViewGroup parent, int viewType) { View view= LayoutInflater.from(parent.getContext()).inflate(R.layout.pintrest_layout_item,parent,false); return new PintrestViewHolder(view); } @Override public void onBindViewHolder(PintrestViewHolder holder, int position) { picasso.load(images.get(position).getUrl()).into(holder.imageView); holder.tv.setText(images.get(position).getName()); } @Override public int getItemCount() { return images.size(); } public class PintrestViewHolder extends RecyclerView.ViewHolder{ ImageView imageView; TextView tv; public PintrestViewHolder(View itemView) { super(itemView); imageView=(ImageView)itemView.findViewById(R.id.imageView); tv=(TextView)itemView.findViewById(R.id.name); } } } 5. 
Instantiate the RecyclerView in your activity or fragment: RecyclerView recyclerView = (RecyclerView)findViewById(R.id.recyclerView); //Create the instance of StaggeredGridLayoutManager with 2 rows i.e the span count and provide the orientation StaggeredGridLayoutManager layoutManager=new new StaggeredGridLayoutManager(2, StaggeredGridLayoutManager.VERTICAL); recyclerView.setLayoutManager(layoutManager); // Create Dummy Data and Add to your List<PintrestItem> List<PintrestItem>items=new ArrayList<PintrestItem> items.add(new PintrestItem("url of image you want to show","imagename")); items.add(new PintrestItem("url of image you want to show","imagename")); items.add(new PintrestItem("url of image you want to show","imagename")); recyclerView.setAdapter(new PintrestAdapter(items,getContext() ); Don't forgot to add the Picasso dependency in your build.gradle le: GoalKicker.com Android Notes for Professionals 149 compile 'com.squareup.picasso:picasso:2.5.2' GoalKicker.com Android Notes for Professionals 150 Chapter 20: Pagination in RecyclerView Pagination is a common issue with for a lot of mobile apps that need to deal with lists of data. Most of the mobile apps are now starting to take up the "endless page" model, where scrolling automatically loads in new content. CWAC Endless Adapter makes it really easy to use this pattern in Android applications Section 20.1: MainActivity.java import android.os.Bundle; import android.os.Handler; import android.support.v7.app.AppCompatActivity; import android.support.v7.widget.LinearLayoutManager; import android.support.v7.widget.RecyclerView; import android.support.v7.widget.Toolbar; import android.util.Log; import android.widget.TextView; import com.android.volley.Request; import com.android.volley.Response; import com.android.volley.VolleyError; import com.android.volley.VolleyLog; import org.json.JSONArray; import org.json.JSONException; import org.json.JSONObject; import java.util.ArrayList; import java.util.HashMap; import java.util.List; import java.util.Map; public class MainActivity extends AppCompatActivity { private static final String TAG = "MainActivity"; private Toolbar toolbar; private TextView tvEmptyView; private RecyclerView mRecyclerView; private DataAdapter mAdapter; private LinearLayoutManager mLayoutManager; private int mStart=0,mEnd=20; private List<Student> studentList; private List<Student> mTempCheck; public static int pageNumber; public int total_size=0; protected Handler handler; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); pageNumber = 1; toolbar = (Toolbar) findViewById(R.id.toolbar); tvEmptyView = (TextView) findViewById(R.id.empty_view); mRecyclerView = (RecyclerView) findViewById(R.id.my_recycler_view); studentList = new ArrayList<>(); mTempCheck=new ArrayList<>(); GoalKicker.com Android Notes for Professionals 151 handler = new Handler(); if (toolbar != null) { setSupportActionBar(toolbar); getSupportActionBar().setTitle("Android Students"); } mRecyclerView.setHasFixedSize(true); mLayoutManager = new LinearLayoutManager(this); mRecyclerView.setLayoutManager(mLayoutManager); mAdapter = new DataAdapter(studentList, mRecyclerView); mRecyclerView.setAdapter(mAdapter); GetGroupData("" + mStart, "" + mEnd); mAdapter.setOnLoadMoreListener(new OnLoadMoreListener() { @Override public void onLoadMore() { if( mTempCheck.size()> 0) { studentList.add(null); mAdapter.notifyItemInserted(studentList.size() - 1); int start = pageNumber * 20; 
start = start + 1; ++ pageNumber; mTempCheck.clear(); GetData("" + start,""+ mEnd); } } }); } public void GetData(final String LimitStart, final String LimitEnd) { Map<String, String> params = new HashMap<>(); params.put("LimitStart", LimitStart); params.put("Limit", LimitEnd); Custom_Volly_Request jsonObjReq = new Custom_Volly_Request(Request.Method.POST, "Your php file link", params, new Response.Listener<JSONObject>() { @Override public void onResponse(JSONObject response) { Log.d("ResponseSuccess",response.toString()); // handle the data from the servoce } }, new Response.ErrorListener() { @Override public void onErrorResponse(VolleyError error) { VolleyLog.d("ResponseErrorVolly: " + error.getMessage()); }}); } // load initial data private void loadData(int start,int end,boolean notifyadapter) { for (int i = start; i <= end; i++) { studentList.add(new Student("Student " + i, "androidstudent" + i + <EMAIL>")); if(notifyadapter) mAdapter.notifyItemInserted(studentList.size()); } } } OnLoadMoreListener.java GoalKicker.com Android Notes for Professionals 152 public interface OnLoadMoreListener { void onLoadMore(); } DataAdapter.java import android.support.v7.widget.LinearLayoutManager; import android.support.v7.widget.RecyclerView; import android.view.LayoutInflater; import android.view.View; import android.view.View.OnClickListener; import android.view.ViewGroup; import android.widget.ProgressBar; import android.widget.TextView; import android.widget.Toast; import java.util.List; public class DataAdapter extends RecyclerView.Adapter { private final int VIEW_ITEM = 1; private final int VIEW_PROG = 0; private List<Student> studentList; // The minimum amount of items to have below your current scroll position // before loading more. private int visibleThreshold = 5; private int lastVisibleItem, totalItemCount; private boolean loading; private OnLoadMoreListener onLoadMoreListener; public DataAdapter(List<Student> students, RecyclerView recyclerView) { studentList = students; if (recyclerView.getLayoutManager() instanceof LinearLayoutManager) { final LinearLayoutManager linearLayoutManager = (LinearLayoutManager) recyclerView.getLayoutManager(); recyclerView.addOnScrollListener(new RecyclerView.OnScrollListener() { @Override public void onScrolled(RecyclerView recyclerView, int dx, int dy) { super.onScrolled(recyclerView, dx, dy); totalItemCount = linearLayoutManager.getItemCount(); lastVisibleItem = linearLayoutManager.findLastVisibleItemPosition(); if (! loading && totalItemCount <= (lastVisibleItem + visibleThreshold)) { if (onLoadMoreListener != null) { onLoadMoreListener.onLoadMore(); } loading = true; } } }); } } @Override public int getItemViewType(int position) { return studentList.get(position) != null ? 
    @Override
    public RecyclerView.ViewHolder onCreateViewHolder(ViewGroup parent, int viewType) {
        RecyclerView.ViewHolder vh;
        if (viewType == VIEW_ITEM) {
            View v = LayoutInflater.from(parent.getContext()).inflate(R.layout.list_row, parent, false);
            vh = new StudentViewHolder(v);
        } else {
            View v = LayoutInflater.from(parent.getContext()).inflate(R.layout.progress_item, parent, false);
            vh = new ProgressViewHolder(v);
        }
        return vh;
    }

    @Override
    public void onBindViewHolder(RecyclerView.ViewHolder holder, int position) {
        if (holder instanceof StudentViewHolder) {
            Student singleStudent = studentList.get(position);
            ((StudentViewHolder) holder).tvName.setText(singleStudent.getName());
            ((StudentViewHolder) holder).tvEmailId.setText(singleStudent.getEmailId());
            ((StudentViewHolder) holder).student = singleStudent;
        } else {
            ((ProgressViewHolder) holder).progressBar.setIndeterminate(true);
        }
    }

    public void setLoaded(boolean state) {
        loading = state;
    }

    @Override
    public int getItemCount() {
        return studentList.size();
    }

    public void setOnLoadMoreListener(OnLoadMoreListener onLoadMoreListener) {
        this.onLoadMoreListener = onLoadMoreListener;
    }

    public static class StudentViewHolder extends RecyclerView.ViewHolder {
        public TextView tvName;
        public TextView tvEmailId;
        public Student student;

        public StudentViewHolder(View v) {
            super(v);
            tvName = (TextView) v.findViewById(R.id.tvName);
            tvEmailId = (TextView) v.findViewById(R.id.tvEmailId);
        }
    }

    public static class ProgressViewHolder extends RecyclerView.ViewHolder {
        public ProgressBar progressBar;

        public ProgressViewHolder(View v) {
            super(v);
            progressBar = (ProgressBar) v.findViewById(R.id.progressBar1);
        }
    }
}

Chapter 21: ImageView

Parameter | Description
resId | your image file name in the res folder (usually in the drawable folder)

ImageView (android.widget.ImageView) is a View for displaying and manipulating image resources, such as Drawables and Bitmaps. Some effects, discussed in this topic, can be applied to the image. The image source can be set in an XML file (layout folder) or programmatically in Java code.

Section 21.1: Set tint

Set a tinting color for the image. By default, the tint will blend using SRC_ATOP mode.

Set the tint using an XML attribute:

android:tint="#009c38"

Note: Must be a color value, in the form of "#rgb", "#argb", "#rrggbb", or "#aarrggbb".

Set the tint programmatically:

imgExample.setColorFilter(Color.argb(255, 0, 156, 38));

and you can clear this color filter:

imgExample.clearColorFilter();

Section 21.2: Set alpha

"alpha" is used to specify the opacity of an image.

Set alpha using an XML attribute:

android:alpha="0.5"

Note: takes a float value from 0 (transparent) to 1 (fully visible)

Set alpha programmatically:

imgExample.setAlpha(0.5f);

Section 21.3: Set Scale Type

Controls how the image should be resized or moved to match the size of the ImageView.

XML attribute:

android:scaleType="..."

I will illustrate the different scale types with a square ImageView that has a black background, in which we want to display a rectangular drawable with a white background.

<ImageView
    android:id="@+id/imgExample"
    android:layout_width="200dp"
    android:layout_height="200dp"
    android:background="#000"
    android:src="@drawable/android2"
    android:scaleType="..."/>

scaleType must be one of the following values:

1. center: Center the image in the view, but perform no scaling.
2. centerCrop: Scale the image uniformly (maintain the image's aspect ratio) so both dimensions (width and height) of the image will be equal to or larger than the corresponding dimension of the view (minus padding). The image is then centered in the view.
3. centerInside: Scale the image uniformly (maintain the image's aspect ratio) so that both dimensions (width and height) of the image will be equal to or less than the corresponding dimension of the view (minus padding). The image is then centered in the view.
4. matrix: Scale using the image matrix when drawing.
5. fitXY: Scale the image using FILL.
6. fitStart: Scale the image using START.
7. fitCenter: Scale the image using CENTER.
8. fitEnd: Scale the image using END.
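The scale type can also be set from code; a minimal sketch, assuming imgExample has been obtained via findViewById() as in the sections above:

imgExample.setScaleType(ImageView.ScaleType.CENTER_CROP);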
Section 21.4: ImageView ScaleType - Center

The image contained in the ImageView may not fit the exact size given to the container. In that case, the framework allows you to resize the image in a number of ways.

Center

<ImageView
    android:layout_width="20dp"
    android:layout_height="20dp"
    android:src="@mipmap/ic_launcher"
    android:id="@+id/imageView"
    android:scaleType="center"
    android:background="@android:color/holo_orange_light"/>

This will not resize the image, and it will center it inside the container (orange = container). In case the ImageView is smaller than the image, the image will not be resized, and you will only be able to see a part of it.

Section 21.5: ImageView ScaleType - CenterCrop

Scale the image uniformly (maintain the image's aspect ratio) so that both dimensions (width and height) of the image will be equal to or larger than the corresponding dimension of the view (minus padding). Official Docs

When the image matches the proportions of the container, it simply fills it. When the image is wider than the container, it will expand to the larger dimension (in this case, the height) and adjust the width of the image without changing its proportions, causing it to crop.

Section 21.6: ImageView ScaleType - CenterInside

Scale the image uniformly (maintain the image's aspect ratio) so that both dimensions (width and height) of the image will be equal to or less than the corresponding dimension of the view (minus padding). Official Docs

It will center the image and resize it to the smaller size; if both container dimensions are bigger, it will act the same as center. But if one of the dimensions is smaller, it will fit to that dimension.

Section 21.7: ImageView ScaleType - FitStart and FitEnd

Scale the image using START. Scale the image using END. Official Docs

FitStart

This will fit to the smallest size of the container, and it will align it to the start.
<ImageView
    android:layout_width="200dp"
    android:layout_height="200dp"
    android:src="@mipmap/ic_launcher"
    android:id="@+id/imageView"
    android:scaleType="fitStart"
    android:layout_gravity="center"
    android:background="@android:color/holo_orange_light"/>

FitEnd

This will fit to the smallest size of the container, and it will align it to the end.

<ImageView
    android:layout_width="200dp"
    android:layout_height="100dp"
    android:src="@mipmap/ic_launcher"
    android:id="@+id/imageView"
    android:scaleType="fitEnd"
    android:layout_gravity="center"
    android:background="@android:color/holo_orange_light"/>

Section 21.8: ImageView ScaleType - FitCenter

Scale the image using CENTER. Official Docs

This expands the image to try to match the container, aligns it to the center, and fits it to the smaller dimension. Examples: bigger height (fit to width); same width and height.

Section 21.9: Set Image Resource

<ImageView
    android:id="@+id/imgExample"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    ... />

Set a drawable as the content of the ImageView using an XML attribute:

android:src="@drawable/android2"

Set a drawable programmatically:

ImageView imgExample = (ImageView) findViewById(R.id.imgExample);
imgExample.setImageResource(R.drawable.android2);

Section 21.10: ImageView ScaleType - FitXy

Scale the image using FILL. Official Docs

<ImageView
    android:layout_width="100dp"
    android:layout_height="200dp"
    android:src="@mipmap/ic_launcher"
    android:id="@+id/imageView"
    android:scaleType="fitXY"
    android:layout_gravity="center"
    android:background="@android:color/holo_orange_light"/>

Section 21.11: MLRoundedImageView.java

Copy and paste the following class into your package:

import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.Bitmap.Config;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.graphics.PorterDuff.Mode;
import android.graphics.PorterDuffXfermode;
import android.graphics.Rect;
import android.graphics.drawable.BitmapDrawable;
import android.graphics.drawable.Drawable;
import android.util.AttributeSet;
import android.widget.ImageView;

public class MLRoundedImageView extends ImageView {

    public MLRoundedImageView(Context context) {
        super(context);
    }

    public MLRoundedImageView(Context context, AttributeSet attrs) {
        super(context, attrs);
    }

    public MLRoundedImageView(Context context, AttributeSet attrs, int defStyle) {
        super(context, attrs, defStyle);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        Drawable drawable = getDrawable();
        if (drawable == null) {
            return;
        }
        if (getWidth() == 0 || getHeight() == 0) {
            return;
        }
        Bitmap b = ((BitmapDrawable) drawable).getBitmap();
        Bitmap bitmap = b.copy(Bitmap.Config.ARGB_8888, true);
        int w = getWidth(), h = getHeight();
        Bitmap roundBitmap = getCroppedBitmap(bitmap, w);
        canvas.drawBitmap(roundBitmap, 0, 0, null);
    }

    public static Bitmap getCroppedBitmap(Bitmap bmp, int radius) {
        Bitmap sbmp;
        if (bmp.getWidth() != radius || bmp.getHeight() != radius) {
            float smallest = Math.min(bmp.getWidth(), bmp.getHeight());
            float factor = smallest / radius;
            sbmp = Bitmap.createScaledBitmap(bmp, (int) (bmp.getWidth() / factor), (int) (bmp.getHeight() / factor), false);
        } else {
            sbmp = bmp;
        }
        Bitmap output = Bitmap.createBitmap(radius, radius, Config.ARGB_8888);
        Canvas canvas = new Canvas(output);
        final int color = 0xffa19774;
        final Paint paint = new Paint();
        final Rect rect = new Rect(0, 0, radius, radius);
        paint.setAntiAlias(true);
        paint.setFilterBitmap(true);
        paint.setDither(true);
        canvas.drawARGB(0, 0, 0, 0);
        paint.setColor(Color.parseColor("#BAB399"));
        canvas.drawCircle(radius / 2 + 0.7f, radius / 2 + 0.7f, radius / 2 + 0.1f, paint);
        paint.setXfermode(new PorterDuffXfermode(Mode.SRC_IN));
        canvas.drawBitmap(sbmp, rect, rect, paint);
        return output;
    }
}

Use this class in XML with its fully qualified package name instead of ImageView:

<com.androidbuts.example.MLRoundedImageView
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:src="@mipmap/ic_launcher" />

Chapter 22: VideoView

Section 22.1: Play video from URL using VideoView

videoView.setVideoURI(Uri.parse("http://example.com/examplevideo.mp4"));
videoView.requestFocus();
videoView.setOnCompletionListener(new MediaPlayer.OnCompletionListener() {
    @Override
    public void onCompletion(MediaPlayer mediaPlayer) {
    }
});
videoView.setOnPreparedListener(new MediaPlayer.OnPreparedListener() {
    @Override
    public void onPrepared(MediaPlayer mediaPlayer) {
        videoView.start();
        mediaPlayer.setOnVideoSizeChangedListener(new MediaPlayer.OnVideoSizeChangedListener() {
            @Override
            public void onVideoSizeChanged(MediaPlayer mp, int width, int height) {
                MediaController mediaController = new MediaController(ActivityName.this);
                videoView.setMediaController(mediaController);
                mediaController.setAnchorView(videoView);
            }
        });
    }
});
videoView.setOnErrorListener(new MediaPlayer.OnErrorListener() {
    @Override
    public boolean onError(MediaPlayer mediaPlayer, int i, int i1) {
        return false;
    }
});

Section 22.2: VideoView Create

Find the VideoView in your Activity and add a video to it.

VideoView videoView = (VideoView) findViewById(R.id.videoView);
videoView.setVideoPath(pathToVideo);

Start playing the video.

videoView.start();

Define the VideoView in your XML layout file.

<VideoView
    android:id="@+id/videoView"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:layout_gravity="center" />

Chapter 23: Optimized VideoView

Playing a video using a VideoView (which extends SurfaceView) inside a row of a ListView seems to work at first, until the user tries to scroll the list. As soon as the list starts to scroll, the video turns black (sometimes it displays white). It keeps playing in the background, but you can't see it anymore, because the rest of the video is rendered as a black box. With the custom optimized VideoView below, videos will keep playing while the ListView scrolls, just as in Instagram, Facebook, or Twitter.

Section 23.1: Optimized VideoView in ListView

This is the custom VideoView that you need to have in your package.
Custom VideoView layout:

<your.packagename.VideoView
    android:id="@+id/video_view"
    android:layout_width="300dp"
    android:layout_height="300dp" />

Code for the custom optimized VideoView:

package your.package.com.whateveritis;

import android.content.Context;
import android.content.Intent;
import android.graphics.SurfaceTexture;
import android.media.AudioManager;
import android.media.MediaPlayer;
import android.media.MediaPlayer.OnCompletionListener;
import android.media.MediaPlayer.OnErrorListener;
import android.media.MediaPlayer.OnInfoListener;
import android.net.Uri;
import android.util.AttributeSet;
import android.util.Log;
import android.view.KeyEvent;
import android.view.MotionEvent;
import android.view.Surface;
import android.view.TextureView;
import android.view.View;
import android.widget.MediaController;
import android.widget.MediaController.MediaPlayerControl;
import java.io.IOException;

/**
 * VideoView is used to play video, just like
 * {@link android.widget.VideoView VideoView}. We define a custom view, because
 * we could not use {@link android.widget.VideoView VideoView} in a ListView. <br/>
 * VideoViews inside ScrollViews do not scroll properly. Even if you use the
 * workaround to set the background color, the MediaController does not scroll
 * along with the VideoView. Also, the scrolling video looks horrendous with the
 * workaround, lots of flickering.
 *
 * @author leo
 */
public class VideoView extends TextureView implements MediaPlayerControl {

    private static final String TAG = "tag";

    // all possible internal states
    private static final int STATE_ERROR = -1;
    private static final int STATE_IDLE = 0;
    private static final int STATE_PREPARING = 1;
    private static final int STATE_PREPARED = 2;
    private static final int STATE_PLAYING = 3;
    private static final int STATE_PAUSED = 4;
    private static final int STATE_PLAYBACK_COMPLETED = 5;

    // currentState is a VideoView object's current state.
    // targetState is the state that a method caller intends to reach.
    // For instance, regardless of the VideoView object's current state,
    // calling pause() intends to bring the object to a target state
    // of STATE_PAUSED.
    private int mCurrentState = STATE_IDLE;
    private int mTargetState = STATE_IDLE;

    // Stuff we need for playing and showing a video
    private MediaPlayer mMediaPlayer;
    private int mVideoWidth;
    private int mVideoHeight;
    private int mSurfaceWidth;
    private int mSurfaceHeight;
    private SurfaceTexture mSurfaceTexture;
    private Surface mSurface;
    private MediaController mMediaController;
    private MediaPlayer.OnCompletionListener mOnCompletionListener;
    private MediaPlayer.OnPreparedListener mOnPreparedListener;
    private MediaPlayer.OnErrorListener mOnErrorListener;
    private MediaPlayer.OnInfoListener mOnInfoListener;
    private int mSeekWhenPrepared; // recording the seek position while preparing
    private int mCurrentBufferPercentage;
    private int mAudioSession;
    private Uri mUri;
    private Context mContext;

    public VideoView(final Context context) {
        super(context);
        mContext = context;
        initVideoView();
    }

    public VideoView(final Context context, final AttributeSet attrs) {
        super(context, attrs);
        mContext = context;
        initVideoView();
    }

    public VideoView(Context context, AttributeSet attrs, int defStyle) {
        super(context, attrs, defStyle);
        mContext = context;
        initVideoView();
    }

    public void initVideoView() {
        mVideoHeight = 0;
        mVideoWidth = 0;
        setFocusable(false);
        setSurfaceTextureListener(mSurfaceTextureListener);
    }

    public int resolveAdjustedSize(int desiredSize, int measureSpec) {
        int result = desiredSize;
        int specMode = MeasureSpec.getMode(measureSpec);
        int specSize = MeasureSpec.getSize(measureSpec);
        switch (specMode) {
            case MeasureSpec.UNSPECIFIED:
                /*
                 * Parent says we can be as big as we want. Just don't be larger
                 * than max size imposed on ourselves.
                 */
                result = desiredSize;
                break;
            case MeasureSpec.AT_MOST:
                /*
                 * Parent says we can be as big as we want, up to specSize. Don't be
                 * larger than specSize, and don't be larger than the max size
                 * imposed on ourselves.
                 */
                result = Math.min(desiredSize, specSize);
                break;
            case MeasureSpec.EXACTLY:
                // No choice. Do what we are told.
                result = specSize;
                break;
        }
        return result;
    }

    public void setVideoPath(String path) {
        Log.d(TAG, "Setting video path to: " + path);
        setVideoURI(Uri.parse(path));
    }

    public void setVideoURI(Uri _videoURI) {
        mUri = _videoURI;
        mSeekWhenPrepared = 0;
        requestLayout();
        invalidate();
        openVideo();
    }

    public Uri getUri() {
        return mUri;
    }

    public void setSurfaceTexture(SurfaceTexture _surfaceTexture) {
        mSurfaceTexture = _surfaceTexture;
    }

    public void openVideo() {
        if ((mUri == null) || (mSurfaceTexture == null)) {
            Log.d(TAG, "Cannot open video, uri or surface texture is null.");
            return;
        }
        // Tell the music playback service to pause
        // TODO: these constants need to be published somewhere in the framework.
        Intent i = new Intent("com.android.music.musicservicecommand");
        i.putExtra("command", "pause");
        mContext.sendBroadcast(i);
        release(false);
        try {
            mSurface = new Surface(mSurfaceTexture);
            mMediaPlayer = new MediaPlayer();
            if (mAudioSession != 0) {
                mMediaPlayer.setAudioSessionId(mAudioSession);
            } else {
                mAudioSession = mMediaPlayer.getAudioSessionId();
            }
            mMediaPlayer.setOnBufferingUpdateListener(mBufferingUpdateListener);
            mMediaPlayer.setOnCompletionListener(mCompleteListener);
            mMediaPlayer.setOnPreparedListener(mPreparedListener);
            mMediaPlayer.setOnErrorListener(mErrorListener);
            mMediaPlayer.setOnInfoListener(mOnInfoListener);
            mMediaPlayer.setOnVideoSizeChangedListener(mVideoSizeChangedListener);
            mMediaPlayer.setSurface(mSurface);
            mCurrentBufferPercentage = 0;
            mMediaPlayer.setDataSource(mContext, mUri);
            mMediaPlayer.setAudioStreamType(AudioManager.STREAM_MUSIC);
            mMediaPlayer.setScreenOnWhilePlaying(true);
            mMediaPlayer.prepareAsync();
            mCurrentState = STATE_PREPARING;
        } catch (IllegalStateException e) {
            mCurrentState = STATE_ERROR;
            mTargetState = STATE_ERROR;
            String msg = (e.getMessage() == null) ? "" : e.getMessage();
            Log.i("", msg); // TODO auto-generated catch block
        } catch (IOException e) {
            mCurrentState = STATE_ERROR;
            mTargetState = STATE_ERROR;
            String msg = (e.getMessage() == null) ? "" : e.getMessage();
            Log.i("", msg); // TODO auto-generated catch block
        }
    }

    public void stopPlayback() {
        if (mMediaPlayer != null) {
            mMediaPlayer.stop();
            mMediaPlayer.release();
            mMediaPlayer = null;
            if (null != mMediaControllListener) {
                mMediaControllListener.onStop();
            }
        }
    }

    public void setMediaController(MediaController controller) {
        if (mMediaController != null) {
            mMediaController.hide();
        }
        mMediaController = controller;
        attachMediaController();
    }

    private void attachMediaController() {
        if (mMediaPlayer != null && mMediaController != null) {
            mMediaController.setMediaPlayer(this);
            View anchorView = this.getParent() instanceof View ? (View) this.getParent() : this;
            mMediaController.setAnchorView(anchorView);
            mMediaController.setEnabled(isInPlaybackState());
        }
    }

    private void release(boolean cleartargetstate) {
        Log.d(TAG, "Releasing media player.");
        if (mMediaPlayer != null) {
            mMediaPlayer.reset();
            mMediaPlayer.release();
            mMediaPlayer = null;
            mCurrentState = STATE_IDLE;
            if (cleartargetstate) {
                mTargetState = STATE_IDLE;
            }
        } else {
            Log.d(TAG, "Media player was null, did not release.");
        }
    }

    @Override
    protected void onMeasure(final int widthMeasureSpec, final int heightMeasureSpec) {
        // Will resize the view if the video dimensions have been found.
        // Video dimensions are found after onPrepared has been called by MediaPlayer.
        int width = getDefaultSize(mVideoWidth, widthMeasureSpec);
        int height = getDefaultSize(mVideoHeight, heightMeasureSpec);
        if ((mVideoWidth > 0) && (mVideoHeight > 0)) {
            if ((mVideoWidth * height) > (width * mVideoHeight)) {
                Log.d(TAG, "Video too tall, change size.");
                height = (width * mVideoHeight) / mVideoWidth;
            } else if ((mVideoWidth * height) < (width * mVideoHeight)) {
                Log.d(TAG, "Video too wide, change size.");
                width = (height * mVideoWidth) / mVideoHeight;
            } else {
                Log.d(TAG, "Aspect ratio is correct.");
            }
        }
        setMeasuredDimension(width, height);
    }

    @Override
    public boolean onTouchEvent(MotionEvent ev) {
        if (isInPlaybackState() && mMediaController != null) {
            toggleMediaControlsVisiblity();
        }
        return false;
    }

    @Override
    public boolean onTrackballEvent(MotionEvent ev) {
        if (isInPlaybackState() && mMediaController != null) {
            toggleMediaControlsVisiblity();
        }
        return false;
    }

    @Override
    public boolean onKeyDown(int keyCode, KeyEvent event) {
        boolean isKeyCodeSupported = keyCode != KeyEvent.KEYCODE_BACK
                && keyCode != KeyEvent.KEYCODE_VOLUME_UP
                && keyCode != KeyEvent.KEYCODE_VOLUME_DOWN
                && keyCode != KeyEvent.KEYCODE_VOLUME_MUTE
                && keyCode != KeyEvent.KEYCODE_MENU
                && keyCode != KeyEvent.KEYCODE_CALL
                && keyCode != KeyEvent.KEYCODE_ENDCALL;
        if (isInPlaybackState() && isKeyCodeSupported && mMediaController != null) {
            if (keyCode == KeyEvent.KEYCODE_HEADSETHOOK || keyCode == KeyEvent.KEYCODE_MEDIA_PLAY_PAUSE) {
                if (mMediaPlayer.isPlaying()) {
                    pause();
                    mMediaController.show();
                } else {
                    start();
                    mMediaController.hide();
                }
                return true;
            } else if (keyCode == KeyEvent.KEYCODE_MEDIA_PLAY) {
                if (!mMediaPlayer.isPlaying()) {
                    start();
                    mMediaController.hide();
                }
                return true;
            } else if (keyCode == KeyEvent.KEYCODE_MEDIA_STOP || keyCode == KeyEvent.KEYCODE_MEDIA_PAUSE) {
                if (mMediaPlayer.isPlaying()) {
                    pause();
                    mMediaController.show();
                }
                return true;
            } else {
                toggleMediaControlsVisiblity();
            }
        }
        return super.onKeyDown(keyCode, event);
    }

    private void toggleMediaControlsVisiblity() {
        if (mMediaController.isShowing()) {
            mMediaController.hide();
        } else {
            mMediaController.show();
        }
    }

    public void start() {
        // This can potentially be called at several points; it will go through
        // when all conditions are ready:
        // 1. When setting the video URI
        // 2. When the surface becomes available
        // 3. From the activity
        if (isInPlaybackState()) {
            mMediaPlayer.start();
            mCurrentState = STATE_PLAYING;
            if (null != mMediaControllListener) {
                mMediaControllListener.onStart();
            }
        } else {
            Log.d(TAG, "Could not start. Current state " + mCurrentState);
        }
        mTargetState = STATE_PLAYING;
    }
Current state " + mCurrentState); } GoalKicker.com Android Notes for Professionals 186 mTargetState = STATE_PLAYING; } public void pause() { if (isInPlaybackState()) { if (mMediaPlayer.isPlaying()) { mMediaPlayer.pause(); mCurrentState = STATE_PAUSED; if (null != mMediaControllListener) { mMediaControllListener.onPause(); } } } mTargetState = STATE_PAUSED; } public void suspend() { release(false); } public void resume() { openVideo(); } @Override public int getDuration() { if (isInPlaybackState()) { return mMediaPlayer.getDuration(); } return -1; } @Override public int getCurrentPosition() { if (isInPlaybackState()) { return mMediaPlayer.getCurrentPosition(); } return 0; } @Override public void seekTo(int msec) { if (isInPlaybackState()) { mMediaPlayer.seekTo(msec); mSeekWhenPrepared = 0; } else { mSeekWhenPrepared = msec; } } @Override public boolean isPlaying() { return isInPlaybackState() && mMediaPlayer.isPlaying(); } @Override public int getBufferPercentage() { if (mMediaPlayer != null) { return mCurrentBufferPercentage; } return 0; GoalKicker.com Android Notes for Professionals 187 } private boolean isInPlaybackState() { return ((mMediaPlayer != null) && (mCurrentState != STATE_ERROR) && (mCurrentState != STATE_IDLE) && (mCurrentState != STATE_PREPARING)); } @Override public boolean canPause() { return false; } @Override public boolean canSeekBackward() { return false; } @Override public boolean canSeekForward() { return false; } @Override public int getAudioSessionId() { if (mAudioSession == 0) { MediaPlayer foo = new MediaPlayer(); mAudioSession = foo.getAudioSessionId(); foo.release(); } return mAudioSession; } // Listeners private MediaPlayer.OnBufferingUpdateListener mBufferingUpdateListener = new MediaPlayer.OnBufferingUpdateListener() { @Override public void onBufferingUpdate(final MediaPlayer mp, final int percent) { mCurrentBufferPercentage = percent; } }; private MediaPlayer.OnCompletionListener mCompleteListener = new MediaPlayer.OnCompletionListener() { @Override public void onCompletion(final MediaPlayer mp) { mCurrentState = STATE_PLAYBACK_COMPLETED; mTargetState = STATE_PLAYBACK_COMPLETED; mSurface.release(); if (mMediaController != null) { mMediaController.hide(); } if (mOnCompletionListener != null) { mOnCompletionListener.onCompletion(mp); } if (mMediaControllListener != null) { mMediaControllListener.onComplete(); } } }; GoalKicker.com Android Notes for Professionals 188 private MediaPlayer.OnPreparedListener mPreparedListener = new MediaPlayer.OnPreparedListener() { @Override public void onPrepared(final MediaPlayer mp) { mCurrentState = STATE_PREPARED; mMediaController = new MediaController(getContext()); if (mOnPreparedListener != null) { mOnPreparedListener.onPrepared(mMediaPlayer); } if (mMediaController != null) { mMediaController.setEnabled(true); //mMediaController.setAnchorView(getRootView()); } mVideoWidth = mp.getVideoWidth(); mVideoHeight = mp.getVideoHeight(); int seekToPosition = mSeekWhenPrepared; // mSeekWhenPrepared may be // changed after seekTo() // call if (seekToPosition != 0) { seekTo(seekToPosition); } requestLayout(); invalidate(); if ((mVideoWidth != 0) && (mVideoHeight != 0)) { if (mTargetState == STATE_PLAYING) { mMediaPlayer.start(); if (null != mMediaControllListener) { mMediaControllListener.onStart(); } } } else { if (mTargetState == STATE_PLAYING) { mMediaPlayer.start(); if (null != mMediaControllListener) { mMediaControllListener.onStart(); } } } } }; private MediaPlayer.OnVideoSizeChangedListener mVideoSizeChangedListener = new 
    private MediaPlayer.OnVideoSizeChangedListener mVideoSizeChangedListener = new MediaPlayer.OnVideoSizeChangedListener() {
        @Override
        public void onVideoSizeChanged(final MediaPlayer mp, final int width, final int height) {
            mVideoWidth = mp.getVideoWidth();
            mVideoHeight = mp.getVideoHeight();
            if (mVideoWidth != 0 && mVideoHeight != 0) {
                requestLayout();
            }
        }
    };

    private MediaPlayer.OnErrorListener mErrorListener = new MediaPlayer.OnErrorListener() {
        @Override
        public boolean onError(final MediaPlayer mp, final int what, final int extra) {
            Log.d(TAG, "Error: " + what + "," + extra);
            mCurrentState = STATE_ERROR;
            mTargetState = STATE_ERROR;
            if (mMediaController != null) {
                mMediaController.hide();
            }
            /* If an error handler has been supplied, use it and finish. */
            if (mOnErrorListener != null) {
                if (mOnErrorListener.onError(mMediaPlayer, what, extra)) {
                    return true;
                }
            }
            /*
             * Otherwise, pop up an error dialog so the user knows that
             * something bad has happened. Only try and pop up the dialog if
             * we're attached to a window. When we're going away and no longer
             * have a window, don't bother showing the user an error.
             */
            if (getWindowToken() != null) {
                // new AlertDialog.Builder(mContext).setMessage("Error: " + what + "," + extra).setPositiveButton("OK", new DialogInterface.OnClickListener() {
                //     public void onClick(DialogInterface dialog, int whichButton) {
                //         /*
                //          * If we get here, there is no onError listener, so at
                //          * least inform them that the video is over.
                //          */
                //         if (mOnCompletionListener != null) {
                //             mOnCompletionListener.onCompletion(mMediaPlayer);
                //         }
                //     }
                // }).setCancelable(false).show();
            }
            return true;
        }
    };

    SurfaceTextureListener mSurfaceTextureListener = new SurfaceTextureListener() {
        @Override
        public void onSurfaceTextureAvailable(final SurfaceTexture surface, final int width, final int height) {
            Log.d(TAG, "onSurfaceTextureAvailable.");
            mSurfaceTexture = surface;
            openVideo();
        }

        @Override
        public void onSurfaceTextureSizeChanged(final SurfaceTexture surface, final int width, final int height) {
            Log.d(TAG, "onSurfaceTextureSizeChanged: " + width + '/' + height);
            mSurfaceWidth = width;
            mSurfaceHeight = height;
            boolean isValidState = (mTargetState == STATE_PLAYING);
            boolean hasValidSize = (mVideoWidth == width && mVideoHeight == height);
            if (mMediaPlayer != null && isValidState && hasValidSize) {
                if (mSeekWhenPrepared != 0) {
                    seekTo(mSeekWhenPrepared);
                }
                start();
            }
        }

        @Override
        public boolean onSurfaceTextureDestroyed(final SurfaceTexture surface) {
            mSurface = null;
            if (mMediaController != null) mMediaController.hide();
            release(true);
            return true;
        }

        @Override
        public void onSurfaceTextureUpdated(final SurfaceTexture surface) {
        }
    };

    /**
     * Register a callback to be invoked when the media file is loaded and ready
     * to go.
     *
     * @param l The callback that will be run
     */
    public void setOnPreparedListener(MediaPlayer.OnPreparedListener l) {
        mOnPreparedListener = l;
    }

    /**
     * Register a callback to be invoked when the end of a media file has been
     * reached during playback.
     *
     * @param l The callback that will be run
     */
    public void setOnCompletionListener(OnCompletionListener l) {
        mOnCompletionListener = l;
    }

    /**
     * Register a callback to be invoked when an error occurs during playback or
     * setup. If no listener is specified, or if the listener returned false,
     * VideoView will inform the user of any errors.
     *
     * @param l The callback that will be run
     */
    public void setOnErrorListener(OnErrorListener l) {
        mOnErrorListener = l;
    }

    /**
     * Register a callback to be invoked when an informational event occurs
     * during playback or setup.
     *
     * @param l The callback that will be run
     */
    public void setOnInfoListener(OnInfoListener l) {
        mOnInfoListener = l;
    }

    public static interface MediaControllListener {
        public void onStart();
        public void onPause();
        public void onStop();
        public void onComplete();
    }

    MediaControllListener mMediaControllListener;

    public void setMediaControllListener(MediaControllListener mediaControllListener) {
        mMediaControllListener = mediaControllListener;
    }

    @Override
    public void setVisibility(int visibility) {
        System.out.println("setVisibility: " + visibility);
        super.setVisibility(visibility);
    }
}

This is based on a GitHub repository. Although it had some issues, since it was written 3 years ago, I managed to fix them on my own, as written above.
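A minimal sketch of wiring this view up inside a ListView adapter's getView(); the row layout (containing the custom VideoView above), the VideoItem model, and its getVideoUrl() getter are assumptions for illustration:

// Inside your adapter's getView(), after the row view has been inflated/recycled
VideoView videoView = (VideoView) convertView.findViewById(R.id.video_view); // the custom class above
VideoItem item = getItem(position); // hypothetical model object
videoView.setVideoPath(item.getVideoUrl()); // triggers openVideo() once the surface is available
videoView.setMediaControllListener(new VideoView.MediaControllListener() {
    @Override public void onStart() { /* e.g. hide a thumbnail overlay */ }
    @Override public void onPause() { }
    @Override public void onStop() { }
    @Override public void onComplete() { /* e.g. show a replay button */ }
});
videoView.start(); // sets the target state; playback begins once prepared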
Chapter 24: WebView

WebView is a view that displays web pages inside your application, allowing you to load your own URLs.

Section 24.1: Troubleshooting WebView by printing console messages or by remote debugging

Printing webview console messages to logcat

To handle console messages from the web page you can override onConsoleMessage in WebChromeClient:

final class ChromeClient extends WebChromeClient {
    @Override
    public boolean onConsoleMessage(ConsoleMessage msg) {
        Log.d(
            "WebView",
            String.format("%s %s:%d", msg.message(), msg.lineNumber(), msg.sourceId())
        );
        return true;
    }
}

And set it in your activity or fragment:

webView.setWebChromeClient(new ChromeClient());

So this sample page:

<html>
<head>
    <script type="text/javascript">
        console.log('test message');
    </script>
</head>
<body>
</body>
</html>

will write the log 'test message' to logcat:

WebView: test message sample.html:4

console.info(), console.warn() and console.error() are also supported by chrome-client.

Remote debugging android devices with Chrome

You can remotely debug WebView-based applications from your desktop Chrome.

Enable USB debugging on your Android device

On your Android device, open up Settings, find the Developer options section, and enable USB debugging.

Connect and discover your Android device

Open the following page in Chrome: chrome://inspect/#devices

From the Inspect Devices dialog, select your device and press inspect. A new instance of Chrome's DevTools opens up on your development machine.

More detailed guidelines and a description of DevTools can be found on developers.google.com
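Note that WebView contents only show up in chrome://inspect if debugging has also been enabled inside the app itself (available since API 19); a minimal sketch:

// Typically done once, e.g. in onCreate(); guarded for pre-KitKat devices.
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.KITKAT) {
    WebView.setWebContentsDebuggingEnabled(true);
}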
Section 24.2: Communication from Javascript to Java (Android)

Android Activity

package com.example.myapp;

import android.os.Bundle;
import android.app.Activity;
import android.webkit.WebView;

public class WebViewActivity extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        WebView webView = new WebView(this);
        setContentView(webView);

        /*
         * Note the label Android, this is used in the Javascript side of things.
         * You can of course change this.
         */
        webView.addJavascriptInterface(new JavascriptHandler(), "Android");

        webView.loadUrl("http://example.com");
    }
}

Java Javascript Handler

import android.webkit.JavascriptInterface;

public class JavascriptHandler {

    /**
     * Key point here is the annotation @JavascriptInterface
     */
    @JavascriptInterface
    public void jsCallback() {
        // Do something
    }

    @JavascriptInterface
    public void jsCallbackTwo(String dummyData) {
        // Do something
    }
}

Web Page, Javascript call

<script>
...
Android.jsCallback();
...
Android.jsCallbackTwo('hello test');
...
</script>

Extra Tip

For passing in a complex data structure, a possible solution is to use JSON:

Android.jsCallbackTwo('{ "fake-var" : "fake-value", "fake-array" : [0,1,2] }');

On the Android side, use your favorite JSON parser, e.g. JSONObject.
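A minimal sketch of how jsCallbackTwo() could parse such a payload (the field names match the fake example above; JSONObject and JSONArray come from the org.json package in the Android framework):

@JavascriptInterface
public void jsCallbackTwo(String dummyData) {
    try {
        JSONObject data = new JSONObject(dummyData);
        String value = data.getString("fake-var");         // "fake-value"
        JSONArray array = data.getJSONArray("fake-array"); // [0,1,2]
        // use the extracted values...
    } catch (JSONException e) {
        // the page sent malformed JSON
    }
}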
Section 24.3: Communication from Java to Javascript

Basic Example

package com.example.myapp;

import android.os.Bundle;
import android.app.Activity;
import android.webkit.WebView;

public class WebViewActivity extends Activity {

    private WebView webView;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        webView = new WebView(this);
        webView.getSettings().setJavaScriptEnabled(true);
        setContentView(webView);

        webView.loadUrl("http://example.com");

        /*
         * Invoke Javascript function
         */
        webView.loadUrl("javascript:testJsFunction('Hello World!')");
    }

    /**
     * Invoking a Javascript function
     */
    public void doSomething() {
        this.webView.loadUrl("javascript:testAnotherFunction('Hello World Again!')");
    }
}

Section 24.4: Open dialer example

If the web page contains a phone number, you can make a call using your phone's dialer. This code checks for URLs that start with tel:, then creates an intent to open the dialer so that a call can be made to the clicked phone number:

public boolean shouldOverrideUrlLoading(WebView view, String url) {
    if (url.startsWith("tel:")) {
        Intent intent = new Intent(Intent.ACTION_DIAL, Uri.parse(url));
        startActivity(intent);
    } else if (url.startsWith("http:") || url.startsWith("https:")) {
        view.loadUrl(url);
    }
    return true;
}

Section 24.5: Open Local File / Create dynamic content in Webview

Layout.xml

<WebView
    android:id="@+id/WebViewToDisplay"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent"
    android:layout_gravity="center"
    android:fadeScrollbars="false" />

Load data into WebViewToDisplay

WebView webViewDisplay;
StringBuffer LoadWEb1;

webViewDisplay = (WebView) findViewById(R.id.WebViewToDisplay);
LoadWEb1 = new StringBuffer();
LoadWEb1.append("<html><body><h1>My First Heading</h1><p>My first paragraph.</p>");
//Sample code to read parameters at run time
String strName = "Test Paragraph";
LoadWEb1.append("<br/><p>" + strName + "</p>");
String result = LoadWEb1.append("</body></html>").toString();
WebSettings webSettings = webViewDisplay.getSettings();
webSettings.setJavaScriptEnabled(true);
webViewDisplay.getSettings().setBuiltInZoomControls(true);
if (android.os.Build.VERSION.SDK_INT >= 11) {
    webViewDisplay.setLayerType(View.LAYER_TYPE_SOFTWARE, null);
    webViewDisplay.getSettings().setDisplayZoomControls(false);
}
webViewDisplay.loadDataWithBaseURL(null, result, "text/html", "utf-8", null);
//To load a local file directly from the assets folder use the code below
//webViewDisplay.loadUrl("file:///android_asset/aboutapp.html");

Section 24.6: JavaScript alert dialogs in WebView - How to make them work

By default, WebView does not implement JavaScript alert dialogs, i.e. alert() will do nothing. In order to make them work, you first need to enable JavaScript (obviously...) and then set a WebChromeClient to handle requests for alert dialogs from the page:

webView.setWebChromeClient(new WebChromeClient() {
    //Other methods for your WebChromeClient here, if needed..
    @Override
    public boolean onJsAlert(WebView view, String url, String message, JsResult result) {
        return super.onJsAlert(view, url, message, result);
    }
});

Here, we override onJsAlert and then call through to the super implementation, which gives us a standard Android dialog. You can also use the message and URL yourself, for example if you want to create a custom styled dialog or if you want to log them.

Chapter 25: SearchView

Section 25.1: Setting Theme for SearchView

To apply a theme to a SearchView provided as app:actionViewClass in menu.xml, we need to understand that it depends completely on the style applied to the underlying Toolbar. To theme the Toolbar, apply the following steps.

Create a style in styles.xml:

<style name="ActionBarThemeOverlay">
    <item name="android:textColorPrimary">@color/prim_color</item>
    <item name="colorControlNormal">@color/normal_color</item>
    <item name="colorControlHighlight">@color/high_color</item>
    <item name="android:textColorHint">@color/hint_color</item>
</style>

Apply the style to the Toolbar:

<android.support.v7.widget.Toolbar
    android:id="@+id/toolbar"
    app:theme="@style/ActionBarThemeOverlay"
    app:popupTheme="@style/ActionBarThemeOverlay"
    android:layout_width="match_parent"
    android:layout_height="?attr/actionBarSize"
    android:background="@color/colorPrimary"
    android:title="@string/title"
    tools:targetApi="m" />

This gives the desired color to all the views corresponding to the Toolbar (back button, menu icons and SearchView).
Section 25.2: SearchView in Toolbar with Fragment

menu.xml - (res -> menu)

<menu xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    tools:context=".HomeActivity">

    <item
        android:id="@+id/action_search"
        android:icon="@android:drawable/ic_menu_search"
        android:title="Search"
        app:actionViewClass="android.support.v7.widget.SearchView"
        app:showAsAction="always" />
</menu>

MainFragment.java

public class MainFragment extends Fragment {

    private SearchView searchView = null;
    private SearchView.OnQueryTextListener queryTextListener;

    @Nullable
    @Override
    public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) {
        return inflater.inflate(R.layout.fragment_main, container, false);
    }

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setHasOptionsMenu(true);
    }

    @Override
    public void onCreateOptionsMenu(Menu menu, MenuInflater inflater) {
        inflater.inflate(R.menu.menu, menu);
        MenuItem searchItem = menu.findItem(R.id.action_search);
        SearchManager searchManager = (SearchManager) getActivity().getSystemService(Context.SEARCH_SERVICE);

        if (searchItem != null) {
            searchView = (SearchView) searchItem.getActionView();
        }
        if (searchView != null) {
            searchView.setSearchableInfo(searchManager.getSearchableInfo(getActivity().getComponentName()));

            queryTextListener = new SearchView.OnQueryTextListener() {
                @Override
                public boolean onQueryTextChange(String newText) {
                    Log.i("onQueryTextChange", newText);
                    return true;
                }

                @Override
                public boolean onQueryTextSubmit(String query) {
                    Log.i("onQueryTextSubmit", query);
                    return true;
                }
            };
            searchView.setOnQueryTextListener(queryTextListener);
        }
        super.onCreateOptionsMenu(menu, inflater);
    }

    @Override
    public boolean onOptionsItemSelected(MenuItem item) {
        switch (item.getItemId()) {
            case R.id.action_search:
                // Not implemented here
                return false;
            default:
                break;
        }
        searchView.setOnQueryTextListener(queryTextListener);
        return super.onOptionsItemSelected(item);
    }
}

Section 25.3: Appcompat SearchView with RxBindings watcher

build.gradle:

dependencies {
    compile 'com.android.support:appcompat-v7:23.3.0'
    compile 'com.jakewharton.rxbinding:rxbinding-appcompat-v7:0.4.0'
}

menu/menu.xml:

<menu xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto">
    <item
        android:id="@+id/action_search"
        android:title="Search"
        android:icon="@android:drawable/ic_menu_search"
        app:actionViewClass="android.support.v7.widget.SearchView"
        app:showAsAction="always"/>
</menu>

MainActivity.java:

@Override
public boolean onCreateOptionsMenu(Menu menu) {
    MenuInflater inflater = getMenuInflater();
    inflater.inflate(R.menu.menu, menu);
    MenuItem searchMenuItem = menu.findItem(R.id.action_search);
    setupSearchView(searchMenuItem);
    return true;
}

private void setupSearchView(MenuItem searchMenuItem) {
    SearchView searchView = (SearchView) searchMenuItem.getActionView();
    searchView.setQueryHint(getString(R.string.search_hint)); // your hint here
    SearchAdapter searchAdapter = new SearchAdapter(this);
    searchView.setSuggestionsAdapter(searchAdapter);

    // optional: set the letters count after which the search will begin to 1
    // the default is 2
    try {
        int autoCompleteTextViewID = getResources().getIdentifier("android:id/search_src_text", null, null);
        AutoCompleteTextView searchAutoCompleteTextView = (AutoCompleteTextView) searchView.findViewById(autoCompleteTextViewID);
        searchAutoCompleteTextView.setThreshold(1);
    } catch (Exception e) {
        Logs.e(TAG, "failed to set search view letters threshold");
    }

    searchView.setOnSearchClickListener(v -> {
        // optional actions to search view expand
    });
    searchView.setOnCloseListener(() -> {
        // optional actions to search view close
        return false;
    });

    RxSearchView.queryTextChanges(searchView)
            .doOnEach(notification -> {
                CharSequence query = (CharSequence) notification.getValue();
                searchAdapter.filter(query);
            })
            .debounce(300, TimeUnit.MILLISECONDS) // to skip intermediate letters
            .flatMap(query -> MyWebService.search(query)) // make a search request
            .retry(3)
            .subscribe(results -> {
                searchAdapter.populateAdapter(results);
            });

    //optional: collapse the searchView on close
    searchView.setOnQueryTextFocusChangeListener((view, queryTextFocused) -> {
        if (!queryTextFocused) {
            collapseSearchView();
        }
    });
}

SearchAdapter.java

public class SearchAdapter extends CursorAdapter {

    private List<SearchResult> items = Collections.emptyList();

    public SearchAdapter(Activity activity) {
        super(activity, null, CursorAdapter.FLAG_REGISTER_CONTENT_OBSERVER);
    }

    public void populateAdapter(List<SearchResult> items) {
        this.items = items;
        final MatrixCursor c = new MatrixCursor(new String[]{BaseColumns._ID});
        for (int i = 0; i < items.size(); i++) {
            c.addRow(new Object[]{i});
        }
        changeCursor(c);
        notifyDataSetChanged();
    }

    public void filter(CharSequence query) {
        final MatrixCursor c = new MatrixCursor(new String[]{BaseColumns._ID});
        for (int i = 0; i < items.size(); i++) {
            SearchResult result = items.get(i);
            if (result.getText().startsWith(query.toString())) {
                c.addRow(new Object[]{i});
            }
        }
        changeCursor(c);
        notifyDataSetChanged();
    }

    @Override
    public void bindView(View view, Context context, Cursor cursor) {
        ViewHolder holder = (ViewHolder) view.getTag();
        int position = cursor.getPosition();
        if (position < items.size()) {
            SearchResult result = items.get(position);
            // bind your view here
        }
    }

    @Override
    public View newView(Context context, Cursor cursor, ViewGroup parent) {
        LayoutInflater inflater = (LayoutInflater) context
                .getSystemService(Context.LAYOUT_INFLATER_SERVICE);
        View v = inflater.inflate(R.layout.search_list_item, parent, false);
        ViewHolder holder = new ViewHolder(v);
        v.setTag(holder);
        return v;
    }

    private static class ViewHolder {
        public final TextView text;

        public ViewHolder(View v) {
            this.text = (TextView) v.findViewById(R.id.text);
        }
    }
}

Chapter 26: BottomNavigationView

The Bottom Navigation View has been in the material design guidelines for some time, but it hasn't been easy for us to implement it in our apps. Some applications have built their own solutions, whilst others have relied on third-party open-source libraries to get the job done. Now that the design support library adds this bottom navigation bar, let's take a dive into how we can use it!

Section 26.1: Basic implementation

To add the BottomNavigationView follow these steps:

1. Add the dependency in your build.gradle:

compile 'com.android.support:design:25.1.0'
2. Add the BottomNavigationView in your layout:

<android.support.design.widget.BottomNavigationView
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:id="@+id/bottom_navigation"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    app:menu="@menu/bottom_navigation_menu"/>

3. Create the menu to populate the view:

<!-- res/menu/bottom_navigation_menu.xml -->
<?xml version="1.0" encoding="utf-8"?>
<menu xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto">
    <item
        android:id="@+id/my_action1"
        android:enabled="true"
        android:icon="@drawable/my_drawable"
        android:title="@string/text"
        app:showAsAction="ifRoom" />
    ....
</menu>

4. Attach a listener for the click events:

//Get the view
BottomNavigationView bottomNavigationView = (BottomNavigationView) findViewById(R.id.bottom_navigation);
//Attach the listener
bottomNavigationView.setOnNavigationItemSelectedListener(
        new BottomNavigationView.OnNavigationItemSelectedListener() {
            @Override
            public boolean onNavigationItemSelected(@NonNull MenuItem item) {
                switch (item.getItemId()) {
                    case R.id.my_action1:
                        //Do something...
                        break;
                    //...
                }
                return true; //returning false disables the Navigation bar animations
            }
        });

Check out the demo code at BottomNavigation-Demo

Section 26.2: Customization of BottomNavigationView

Note: I am assuming that you know how to use BottomNavigationView.

In this example I will explain how to add a selector to the BottomNavigationView, so you can provide UI states for the icons and texts.

Create the drawable bottom_navigation_view_selector.xml as:

<?xml version="1.0" encoding="utf-8"?>
<selector xmlns:android="http://schemas.android.com/apk/res/android">
    <item android:color="@color/bottom_nv_menu_selected" android:state_checked="true" />
    <item android:color="@color/bottom_nv_menu_default" />
</selector>

And use the attributes below on the BottomNavigationView in your layout file:

app:itemIconTint="@drawable/bottom_navigation_view_selector"
app:itemTextColor="@drawable/bottom_navigation_view_selector"

In the example above, I have used the same selector bottom_navigation_view_selector for both app:itemIconTint and app:itemTextColor to keep the text and icon colors the same. But if your design has different colors for the text and the icon, you can define two different selectors and use them.

Section 26.3: Handling Enabled / Disabled states

Create a selector for the enabled/disabled menu items.

nav_item_color_state.xml

<?xml version="1.0" encoding="utf-8"?>
<selector xmlns:android="http://schemas.android.com/apk/res/android">
    <item android:color="@color/white" android:state_enabled="true" />
    <item android:color="@color/colorPrimaryDark" android:state_enabled="false" />
</selector>

design.xml

<android.support.design.widget.BottomNavigationView
    android:id="@+id/bottom_navigation"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:layout_alignParentBottom="true"
    app:itemBackground="@color/colorPrimary"
    app:itemIconTint="@drawable/nav_item_color_state"
    app:itemTextColor="@drawable/nav_item_color_state"
    app:menu="@menu/bottom_navigation_main" />
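The selector above only controls how a disabled item is tinted; actually disabling an item is done through the view's menu. A minimal sketch, reusing the my_action1 id from Section 26.1:

BottomNavigationView navView = (BottomNavigationView) findViewById(R.id.bottom_navigation);
// setEnabled(false) moves the item into the state_enabled="false" branch of the selector
navView.getMenu().findItem(R.id.my_action1).setEnabled(false);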
Section 26.4: Allowing more than 3 menus

This example is strictly a workaround, since currently there is no way to disable a behaviour known as ShiftMode. Create a function as such:

public static void disableMenuShiftMode(BottomNavigationView view) {
    BottomNavigationMenuView menuView = (BottomNavigationMenuView) view.getChildAt(0);
    try {
        Field shiftingMode = menuView.getClass().getDeclaredField("mShiftingMode");
        shiftingMode.setAccessible(true);
        shiftingMode.setBoolean(menuView, false);
        shiftingMode.setAccessible(false);
        for (int i = 0; i < menuView.getChildCount(); i++) {
            BottomNavigationItemView item = (BottomNavigationItemView) menuView.getChildAt(i);
            //noinspection RestrictedApi
            item.setShiftingMode(false);
            // set once again checked value, so view will be updated
            //noinspection RestrictedApi
            item.setChecked(item.getItemData().isChecked());
        }
    } catch (NoSuchFieldException e) {
        Log.e("BNVHelper", "Unable to get shift mode field", e);
    } catch (IllegalAccessException e) {
        Log.e("BNVHelper", "Unable to change value of shift mode", e);
    }
}

This disables the shifting behaviour of the menu when the item count exceeds 3.

USAGE

BottomNavigationView navView = (BottomNavigationView) findViewById(R.id.bottom_navigation_bar);
disableMenuShiftMode(navView);

Proguard Issue: Add the following line to your ProGuard configuration file as well; otherwise, this won't work.

-keepclassmembers class android.support.design.internal.BottomNavigationMenuView {
    boolean mShiftingMode;
}

Alternatively, you can create a class and access this method from there. See the original reply here.

NOTE: This is a reflection-based HOTFIX; please update this once Google's support library is updated with a direct function call.

Chapter 27: Canvas drawing using SurfaceView

Section 27.1: SurfaceView with drawing thread

This example describes how to create a SurfaceView with a dedicated drawing thread. This implementation also handles edge cases such as manufacturer-specific issues, as well as starting/stopping the thread to save CPU time.
import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.util.AttributeSet;
import android.util.Log;
import android.view.MotionEvent;
import android.view.SurfaceHolder;
import android.view.SurfaceView;
import android.view.View;

/**
 * Defines a custom SurfaceView class which handles the drawing thread
 **/
public class BaseSurface extends SurfaceView implements SurfaceHolder.Callback, View.OnTouchListener, Runnable {

    /**
     * Holds the surface frame
     */
    private SurfaceHolder holder;

    /**
     * Draw thread
     */
    private Thread drawThread;

    /**
     * True when the surface is ready to draw
     */
    private boolean surfaceReady = false;

    /**
     * Drawing thread flag
     */
    private boolean drawingActive = false;

    /**
     * Paint for drawing the sample rectangle
     */
    private Paint samplePaint = new Paint();

    /**
     * Time per frame for 60 FPS
     */
    private static final int MAX_FRAME_TIME = (int) (1000.0 / 60.0);

    private static final String LOGTAG = "surface";

    public BaseSurface(Context context, AttributeSet attrs) {
        super(context, attrs);
        SurfaceHolder holder = getHolder();
        holder.addCallback(this);
        setOnTouchListener(this);
        // red
        samplePaint.setColor(0xffff0000);
        // smooth edges
        samplePaint.setAntiAlias(true);
    }

    @Override
    public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
        if (width == 0 || height == 0) {
            return;
        }
        // resize your UI
    }

    @Override
    public void surfaceCreated(SurfaceHolder holder) {
        this.holder = holder;
        if (drawThread != null) {
            Log.d(LOGTAG, "draw thread still active..");
            drawingActive = false;
            try {
                drawThread.join();
            } catch (InterruptedException e) {
                // do nothing
            }
        }
        surfaceReady = true;
        startDrawThread();
        Log.d(LOGTAG, "Created");
    }

    @Override
    public void surfaceDestroyed(SurfaceHolder holder) {
        // Surface is not used anymore - stop the drawing thread
        stopDrawThread();
        // and release the surface
        holder.getSurface().release();
        this.holder = null;
        surfaceReady = false;
        Log.d(LOGTAG, "Destroyed");
    }

    @Override
    public boolean onTouch(View v, MotionEvent event) {
        // Handle touch events
        return true;
    }

    /**
     * Stops the drawing thread
     */
    public void stopDrawThread() {
        if (drawThread == null) {
            Log.d(LOGTAG, "DrawThread is null");
            return;
        }
        drawingActive = false;
        while (true) {
            try {
                Log.d(LOGTAG, "Request last frame");
                drawThread.join(5000);
                break;
            } catch (Exception e) {
                Log.e(LOGTAG, "Could not join with draw thread");
            }
        }
        drawThread = null;
    }

    /**
     * Creates a new draw thread and starts it.
     */
    public void startDrawThread() {
        if (surfaceReady && drawThread == null) {
            drawThread = new Thread(this, "Draw thread");
            drawingActive = true;
            drawThread.start();
        }
    }

    @Override
    public void run() {
        Log.d(LOGTAG, "Draw thread started");
        long frameStartTime;
        long frameTime;

        /*
         * In order to work reliably on the Nexus 7, we place a ~500ms delay at the start of the drawing thread
         * (AOSP - Issue 58385)
         */
        if (android.os.Build.BRAND.equalsIgnoreCase("google")
                && android.os.Build.MANUFACTURER.equalsIgnoreCase("asus")
                && android.os.Build.MODEL.equalsIgnoreCase("Nexus 7")) {
            Log.w(LOGTAG, "Sleep 500ms (Device: Asus Nexus 7)");
            try {
                Thread.sleep(500);
            } catch (InterruptedException ignored) {
            }
        }

        try {
            while (drawingActive) {
                if (holder == null) {
                    return;
                }
                frameStartTime = System.nanoTime();
                Canvas canvas = holder.lockCanvas();
                if (canvas != null) {
                    // clear the screen using black
                    canvas.drawARGB(255, 0, 0, 0);
                    try {
                        // Your drawing here
                        canvas.drawRect(0, 0, getWidth() / 2, getHeight() / 2, samplePaint);
                    } finally {
                        holder.unlockCanvasAndPost(canvas);
                    }
                }
                // calculate the time required to draw the frame in ms
                frameTime = (System.nanoTime() - frameStartTime) / 1000000;
                if (frameTime < MAX_FRAME_TIME) { // faster than the max fps - limit the FPS
                    try {
                        Thread.sleep(MAX_FRAME_TIME - frameTime);
                    } catch (InterruptedException e) {
                        // ignore
                    }
                }
            }
        } catch (Exception e) {
            Log.w(LOGTAG, "Exception while locking/unlocking");
        }
        Log.d(LOGTAG, "Draw thread finished");
    }
}

This layout only contains the custom SurfaceView and maximizes it to the screen size.

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context="sample.devcore.org.surfaceviewsample.MainActivity">

    <sample.devcore.org.surfaceviewsample.BaseSurface
        android:id="@+id/baseSurface"
        android:layout_width="match_parent"
        android:layout_height="match_parent"/>
</LinearLayout>

The activity which uses the SurfaceView is responsible for starting and stopping the drawing thread. This approach saves battery, as the drawing is stopped as soon as the activity goes into the background.

import android.app.Activity;
import android.os.Bundle;

public class MainActivity extends Activity {

    /**
     * Surface object
     */
    private BaseSurface surface;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        surface = (BaseSurface) findViewById(R.id.baseSurface);
    }

    @Override
    protected void onResume() {
        super.onResume();
        // start the drawing
        surface.startDrawThread();
    }

    @Override
    protected void onPause() {
        // stop the drawing to save cpu time
        surface.stopDrawThread();
        super.onPause();
    }
}

Chapter 28: Creating Custom Views

Section 28.1: Creating Custom Views

If you need a completely customized view, you'll need to subclass View (the superclass of all Android views) and provide your custom sizing (onMeasure(...)) and drawing (onDraw(...)) methods:

1. Create your custom view skeleton: this is basically the same for every custom view.
Here we create the skeleton for a custom view that can draw a smiley, called SmileyView:

public class SmileyView extends View {
    private Paint mCirclePaint;
    private Paint mEyeAndMouthPaint;

    private float mCenterX;
    private float mCenterY;
    private float mRadius;
    private RectF mArcBounds = new RectF();

    public SmileyView(Context context) {
        this(context, null, 0);
    }

    public SmileyView(Context context, AttributeSet attrs) {
        this(context, attrs, 0);
    }

    public SmileyView(Context context, AttributeSet attrs, int defStyleAttr) {
        super(context, attrs, defStyleAttr);
        initPaints();
    }

    private void initPaints() {/* ... */}

    @Override
    protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec) {/* ... */}

    @Override
    protected void onDraw(Canvas canvas) {/* ... */}
}

2. Initialize your paints: the Paint objects are the brushes of your virtual canvas, defining how your geometric objects are rendered (e.g. color, fill and stroke style, etc.). Here we create two Paints, one yellow filled paint for the circle and one black stroke paint for the eyes and the mouth:

private void initPaints() {
    mCirclePaint = new Paint(Paint.ANTI_ALIAS_FLAG);
    mCirclePaint.setStyle(Paint.Style.FILL);
    mCirclePaint.setColor(Color.YELLOW);

    mEyeAndMouthPaint = new Paint(Paint.ANTI_ALIAS_FLAG);
    mEyeAndMouthPaint.setStyle(Paint.Style.STROKE);
    mEyeAndMouthPaint.setStrokeWidth(16 * getResources().getDisplayMetrics().density);
    mEyeAndMouthPaint.setStrokeCap(Paint.Cap.ROUND);
    mEyeAndMouthPaint.setColor(Color.BLACK);
}

3. Implement your own onMeasure(...) method: this is required so that the parent layouts (e.g. FrameLayout) can properly align your custom view. It provides a set of measureSpecs that you can use to determine your view's height and width. Here we create a square by making sure that the height and width are the same:

@Override
protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec) {
    int w = MeasureSpec.getSize(widthMeasureSpec);
    int h = MeasureSpec.getSize(heightMeasureSpec);

    int size = Math.min(w, h);
    setMeasuredDimension(size, size);
}

Note that onMeasure(...) must contain at least one call to setMeasuredDimension(..) or else your custom view will crash with an IllegalStateException.

4. Implement your own onSizeChanged(...) method: this allows you to catch the current height and width of your custom view to properly adjust your rendering code. Here we just calculate our center and our radius:

@Override
protected void onSizeChanged(int w, int h, int oldw, int oldh) {
    mCenterX = w / 2f;
    mCenterY = h / 2f;
    mRadius = Math.min(w, h) / 2f;
}

5. Implement your own onDraw(...) method: this is where you implement the actual rendering of your view. It provides a Canvas object that you can draw on (see the official Canvas documentation for all drawing methods available).

@Override
protected void onDraw(Canvas canvas) {
    // draw face
    canvas.drawCircle(mCenterX, mCenterY, mRadius, mCirclePaint);

    // draw eyes
    float eyeRadius = mRadius / 5f;
    float eyeOffsetX = mRadius / 3f;
    float eyeOffsetY = mRadius / 3f;
    canvas.drawCircle(mCenterX - eyeOffsetX, mCenterY - eyeOffsetY, eyeRadius, mEyeAndMouthPaint);
    canvas.drawCircle(mCenterX + eyeOffsetX, mCenterY - eyeOffsetY, eyeRadius, mEyeAndMouthPaint);

    // draw mouth
    float mouthInset = mRadius / 3f;
    mArcBounds.set(mouthInset, mouthInset, mRadius * 2 - mouthInset, mRadius * 2 - mouthInset);
    canvas.drawArc(mArcBounds, 45f, 90f, false, mEyeAndMouthPaint);
}
6. Add your custom view to a layout: the custom view can now be included in any layout files that you have. Here we just wrap it inside a FrameLayout:

<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <com.example.app.SmileyView
        android:layout_width="match_parent"
        android:layout_height="match_parent" />
</FrameLayout>

Note that it is recommended to build your project after the view code is finished. Without building it you won't be able to see the view on a preview screen in Android Studio.

After putting everything together, you should be greeted with the following screen after launching the activity containing the above layout:

Section 28.2: Adding attributes to views

Custom views can also take custom attributes which can be used in Android layout resource files. To add attributes to your custom view you need to do the following:

1. Define the name and type of your attributes: this is done inside res/values/attrs.xml (create it if necessary). The following file defines a color attribute for our smiley's face color and an enum attribute for the smiley's expression:

<resources>
    <declare-styleable name="SmileyView">
        <attr name="smileyColor" format="color" />
        <attr name="smileyExpression" format="enum">
            <enum name="happy" value="0"/>
            <enum name="sad" value="1"/>
        </attr>
    </declare-styleable>
    <!-- attributes for other views -->
</resources>

2. Use your attributes inside your layout: this can be done inside any layout files that use your custom view. The following layout file creates a screen with a happy yellow smiley:

<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:layout_height="match_parent"
    android:layout_width="match_parent">

    <com.example.app.SmileyView
        android:layout_height="56dp"
        android:layout_width="56dp"
        app:smileyColor="#ffff00"
        app:smileyExpression="happy" />
</FrameLayout>

Tip: Custom attributes do not work with the tools: prefix in Android Studio 2.1 and older (and possibly in future versions). In this example, replacing app:smileyColor with tools:smileyColor would result in smileyColor neither being set during runtime nor at design time.

3. Read your attributes: this is done inside your custom view source code. The following snippet of SmileyView demonstrates how the attributes can be extracted:

public class SmileyView extends View {
    // ...

    public SmileyView(Context context) {
        this(context, null);
    }

    public SmileyView(Context context, AttributeSet attrs) {
        this(context, attrs, 0);
    }

    public SmileyView(Context context, AttributeSet attrs, int defStyleAttr) {
        super(context, attrs, defStyleAttr);

        TypedArray a = context.obtainStyledAttributes(attrs, R.styleable.SmileyView, defStyleAttr, 0);
        mFaceColor = a.getColor(R.styleable.SmileyView_smileyColor, Color.TRANSPARENT);
        mFaceExpression = a.getInteger(R.styleable.SmileyView_smileyExpression, Expression.HAPPY);
        // Important: always recycle the TypedArray
        a.recycle();

        initPaints();
        // ...
    }
}

4. (Optional) Add default style: this is done by adding a style with the default values and loading it inside your custom view.
The following default smiley style represents a happy yellow one:

<!-- styles.xml -->
<style name="DefaultSmileyStyle">
    <item name="smileyColor">#ffff00</item>
    <item name="smileyExpression">happy</item>
</style>

Which gets applied in our SmileyView by adding it as the last parameter of the call to obtainStyledAttributes (see code in step 3):

TypedArray a = context.obtainStyledAttributes(attrs, R.styleable.SmileyView, defStyleAttr, R.style.DefaultSmileyStyle);

Note that any attribute values set in the inflated layout file (see code in step 2) will override the corresponding values of the default style.

5. (Optional) Provide styles inside themes: this is done by adding a new style reference attribute which can be used inside your themes and providing a style for that attribute. Here we simply name our reference attribute smileyStyle:

<!-- attrs.xml -->
<attr name="smileyStyle" format="reference" />

Which we then provide a style for in our app theme (here we just reuse the default style from step 4):

<!-- themes.xml -->
<style name="AppTheme" parent="AppBaseTheme">
    <item name="smileyStyle">@style/DefaultSmileyStyle</item>
</style>

Section 28.3: CustomView performance tips

Do not allocate new objects in onDraw:

@Override
protected void onDraw(Canvas canvas) {
    super.onDraw(canvas);
    Paint paint = new Paint(); // Do not allocate here
}

Instead of drawing drawables in canvas...

drawable.setBounds(boundsRect);
drawable.draw(canvas);

...use a Bitmap for faster drawing:

canvas.drawBitmap(bitmap, srcRect, boundsRect, paint);

Do not redraw the entire view to update just a small part of it. Instead redraw the specific part of the view:

invalidate(boundToBeRefreshed);

If your view is doing some continuous animation, for instance a watch face showing each and every second, at least stop the animation in onStop() of the activity and start it back in onStart() of the activity.

Do not do any calculations inside the onDraw method of a view; you should instead finish them before calling invalidate(). By using this technique you can avoid frame dropping in your view.

Rotations

The basic operations of a view are translate, rotate, etc. Almost every developer has faced this problem when they use bitmaps or gradients in their custom view. If the view is going to show a rotated view and the bitmap has to be rotated in that custom view, many of us will think that it will be expensive. Many think that rotating a bitmap is very expensive because in order to do that, you need to translate the bitmap's pixel matrix. But the truth is that it is not that tough! Instead of rotating the bitmap, just rotate the canvas itself!

// Save the canvas state
int save = canvas.save();
// Rotate the canvas by providing the angle and the center point as pivot
canvas.rotate(angle, pivotX, pivotY);
// Draw whatever you want
// Basically whatever you draw here will be drawn as per the angle you rotated the canvas
canvas.drawBitmap(...);
// Now restore your canvas to its original state
canvas.restoreToCount(save);
// Unless the canvas is restored to its original state, further draws will also be rotated.

Section 28.4: Creating a compound view

A compound view is a custom ViewGroup that's treated as a single view by the surrounding program code. Such a ViewGroup can be really useful in DDD-like design, because it can correspond to an aggregate, in this example, a Contact. It can be reused everywhere that contact is displayed.
This means that the surrounding controller code, an Activity, Fragment or Adapter, can simply pass the data object to the view without picking it apart into a number of different UI widgets. This facilitates code reuse and makes for a better design according to SOLID principles.

The layout XML

This is usually where you start. You have an existing bit of XML that you find yourself reusing, perhaps as an <include/>. Extract it into a separate XML file and wrap the root tag in a <merge> element:

<?xml version="1.0" encoding="utf-8"?>
<merge xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <ImageView
        android:id="@+id/photo"
        android:layout_width="48dp"
        android:layout_height="48dp"
        android:layout_alignParentRight="true" />

    <TextView
        android:id="@+id/name"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_toLeftOf="@id/photo" />

    <TextView
        android:id="@+id/phone_number"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_below="@id/name"
        android:layout_toLeftOf="@id/photo" />
</merge>

This XML file keeps working in the Layout Editor in Android Studio perfectly fine. You can treat it like any other layout.

The compound ViewGroup

Once you have the XML file, create the custom view group.

import android.annotation.TargetApi;
import android.content.Context;
import android.os.Build;
import android.util.AttributeSet;
import android.view.LayoutInflater;
import android.view.View;
import android.widget.RelativeLayout;
import android.widget.ImageView;
import android.widget.TextView;

import myapp.R;

/**
 * A compound view to show contacts.
 *
 * This class can be put into an XML layout or instantiated programmatically, it
 * will work correctly either way.
 */
public class ContactView extends RelativeLayout {
    // This class extends RelativeLayout because that comes with an automatic
    // (MATCH_PARENT, MATCH_PARENT) layout for its child item. You can extend
    // the raw android.view.ViewGroup class if you want more control. See the
    // note in the layout XML why you wouldn't want to extend a complex view
    // such as RelativeLayout.

    // 1. Implement superclass constructors.
    public ContactView(Context context) {
        super(context);
        init(context, null);
    }

    // two extra constructors left out to keep the example shorter

    @TargetApi(Build.VERSION_CODES.LOLLIPOP)
    public ContactView(Context context, AttributeSet attrs, int defStyleAttr, int defStyleRes) {
        super(context, attrs, defStyleAttr, defStyleRes);
        init(context, attrs);
    }

    // 2. Initialize the view by inflating an XML using `this` as parent
    private TextView mName;
    private TextView mPhoneNumber;
    private ImageView mPhoto;

    private void init(Context context, AttributeSet attrs) {
        LayoutInflater.from(context).inflate(R.layout.contact_view, this, true);
        mName = (TextView) findViewById(R.id.name);
        mPhoneNumber = (TextView) findViewById(R.id.phone_number);
        mPhoto = (ImageView) findViewById(R.id.photo);
    }

    // 3. Define a setter that's expressed in your domain model. This is what the example is
    // all about. All controller code can just invoke this setter instead of fiddling with
    // lots of strings, visibility options, colors, animations, etc. If you don't use a
    // custom view, this code will usually end up in a static helper method (bad) or copies
    // of this code will be copy-pasted all over the place (worse).
    public void setContact(Contact contact) {
        mName.setText(contact.getName());
        mPhoneNumber.setText(contact.getPhoneNumber());
        if (contact.hasPhoto()) {
            mPhoto.setVisibility(View.VISIBLE);
            mPhoto.setImageBitmap(contact.getPhoto());
        } else {
            mPhoto.setVisibility(View.GONE);
        }
    }
}

The init(Context, AttributeSet) method is where you would read any custom XML attributes as explained in Adding Attributes to Views.

With these pieces in place, you can use it in your app.

Usage in XML

Here's an example fragment_contact_info.xml that illustrates how you'd put a single ContactView on top of a list of messages:

<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical">

    <!-- The compound view becomes like any other view XML element -->
    <myapp.ContactView
        android:id="@+id/contact"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"/>

    <android.support.v7.widget.RecyclerView
        android:id="@+id/message_list"
        android:layout_width="match_parent"
        android:layout_height="0dp"
        android:layout_weight="1"/>
</LinearLayout>

Usage in Code

Here's an example RecyclerView.Adapter that shows a list of contacts. This example illustrates just how much cleaner the controller code gets when it's completely free of View manipulation.

package myapp;

import android.content.Context;
import android.support.v7.widget.RecyclerView;
import android.view.ViewGroup;

public class ContactsAdapter extends RecyclerView.Adapter<ContactsViewHolder> {
    private final Context context;

    public ContactsAdapter(final Context context) {
        this.context = context;
    }

    @Override
    public ContactsViewHolder onCreateViewHolder(ViewGroup parent, int viewType) {
        ContactView v = new ContactView(context); // <--- this
        return new ContactsViewHolder(v);
    }

    @Override
    public void onBindViewHolder(ContactsViewHolder holder, int position) {
        // getItem(...) is assumed to be defined elsewhere in the adapter and to
        // return the Contact at this position in the data set.
        Contact contact = this.getItem(position);
        holder.setContact(contact); // <--- this
    }

    static class ContactsViewHolder extends RecyclerView.ViewHolder {
        public ContactsViewHolder(ContactView itemView) {
            super(itemView);
        }

        public void setContact(Contact contact) {
            ((ContactView) itemView).setContact(contact); // <--- this
        }
    }
}

Section 28.5: Compound view for SVG/VectorDrawable as drawableRight

The main motivation for developing this compound view is that devices below Android 5.0 do not support SVG drawables inside a TextView/EditText. A further advantage is that we can set the height and width of the drawableRight inside the EditText. The view has been separated out of its original project into its own module.

Module name: custom_edit_drawable (short prefix: c_e_d)

The "c_e_d_" prefix is used so that app module resources do not override the library's resources by mistake. Example: the "abc" prefix is used by Google in the support library.
build.gradle:

dependencies {
    compile 'com.android.support:appcompat-v7:25.3.1'
}

Use AppCompat version >= 23.

Layout file: c_e_d_compound_view.xml

<?xml version="1.0" encoding="utf-8"?>
<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="wrap_content">

    <EditText
        android:id="@+id/edt_search"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:inputType="text"
        android:maxLines="1"
        android:paddingEnd="40dp"
        android:paddingLeft="5dp"
        android:paddingRight="40dp"
        android:paddingStart="5dp" />

    <!-- make sure you are not using ImageView instead of this -->
    <android.support.v7.widget.AppCompatImageView
        android:id="@+id/drawbleRight_search"
        android:layout_width="30dp"
        android:layout_height="30dp"
        android:layout_gravity="right|center_vertical"
        android:layout_marginLeft="8dp"
        android:layout_marginRight="8dp" />
</FrameLayout>

Custom attributes: attrs.xml

<?xml version="1.0" encoding="utf-8"?>
<resources>
    <declare-styleable name="EditTextWithDrawable">
        <attr name="c_e_d_drawableRightSVG" format="reference" />
        <attr name="c_e_d_hint" format="string" />
        <attr name="c_e_d_textSize" format="dimension" />
        <attr name="c_e_d_textColor" format="color" />
    </declare-styleable>
</resources>

Code: EditTextWithDrawable.java

public class EditTextWithDrawable extends FrameLayout {
    public AppCompatImageView mDrawableRight;
    public EditText mEditText;

    public EditTextWithDrawable(Context context) {
        super(context);
        init(null);
    }

    public EditTextWithDrawable(Context context, AttributeSet attrs) {
        super(context, attrs);
        init(attrs);
    }

    public EditTextWithDrawable(Context context, AttributeSet attrs, int defStyleAttr) {
        super(context, attrs, defStyleAttr);
        init(attrs);
    }

    @TargetApi(Build.VERSION_CODES.LOLLIPOP)
    public EditTextWithDrawable(Context context, AttributeSet attrs, int defStyleAttr, int defStyleRes) {
        super(context, attrs, defStyleAttr, defStyleRes);
        init(attrs);
    }

    private void init(AttributeSet attrs) {
        if (attrs != null && !isInEditMode()) {
            LayoutInflater inflater = (LayoutInflater) getContext()
                    .getSystemService(Context.LAYOUT_INFLATER_SERVICE);
            inflater.inflate(R.layout.c_e_d_compound_view, this, true);
            mDrawableRight = (AppCompatImageView) ((FrameLayout) getChildAt(0)).getChildAt(1);
            mEditText = (EditText) ((FrameLayout) getChildAt(0)).getChildAt(0);

            TypedArray attributeArray = getContext().obtainStyledAttributes(
                    attrs, R.styleable.EditTextWithDrawable);

            int drawableRes = attributeArray.getResourceId(
                    R.styleable.EditTextWithDrawable_c_e_d_drawableRightSVG, -1);
            if (drawableRes != -1) {
                mDrawableRight.setImageResource(drawableRes);
            }

            mEditText.setHint(attributeArray.getString(
                    R.styleable.EditTextWithDrawable_c_e_d_hint));
            mEditText.setTextColor(attributeArray.getColor(
                    R.styleable.EditTextWithDrawable_c_e_d_textColor, Color.BLACK));
            int textSize = attributeArray.getDimensionPixelSize(
                    R.styleable.EditTextWithDrawable_c_e_d_textSize, 15);
            mEditText.setTextSize(TypedValue.COMPLEX_UNIT_PX, textSize);

            android.view.ViewGroup.LayoutParams layoutParams = mDrawableRight.getLayoutParams();
            layoutParams.width = (textSize * 3) / 2;
            layoutParams.height = (textSize * 3) / 2;
            mDrawableRight.setLayoutParams(layoutParams);

            attributeArray.recycle();
        }
    }
}

Example: How to use the above view

Layout: activity_main.xml

<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:orientation="vertical">

    <com.customeditdrawable.AppEditTextWithDrawable
        android:id="@+id/edt_search_emp"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        app:c_e_d_drawableRightSVG="@drawable/ic_svg_search"
        app:c_e_d_hint="@string/hint_search_here"
        app:c_e_d_textColor="@color/text_color_dark_on_light_bg"
        app:c_e_d_textSize="@dimen/text_size_small" />
</LinearLayout>

Activity: MainActivity.java

public class MainActivity extends AppCompatActivity {
    EditTextWithDrawable mEditTextWithDrawable;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        mEditTextWithDrawable = (EditTextWithDrawable) findViewById(R.id.edt_search_emp);
    }
}

Section 28.6: Responding to Touch Events

Many custom views need to accept user interaction in the form of touch events. You can get access to touch events by overriding onTouchEvent. There are a number of actions you can filter out. The main ones are:

ACTION_DOWN: This is triggered once when your finger first touches the view.
ACTION_MOVE: This is called every time your finger moves a little across the view. It gets called many times.
ACTION_UP: This is the last action to be called as you lift your finger off the screen.

You can add the following method to your view and then observe the log output when you touch and move your finger around your view.

@Override
public boolean onTouchEvent(MotionEvent event) {
    int x = (int) event.getX();
    int y = (int) event.getY();

    int action = event.getAction();
    switch (action) {
        case MotionEvent.ACTION_DOWN:
            Log.i("CustomView", "onTouchEvent: ACTION_DOWN: x = " + x + ", y = " + y);
            break;
        case MotionEvent.ACTION_MOVE:
            Log.i("CustomView", "onTouchEvent: ACTION_MOVE: x = " + x + ", y = " + y);
            break;
        case MotionEvent.ACTION_UP:
            Log.i("CustomView", "onTouchEvent: ACTION_UP: x = " + x + ", y = " + y);
            break;
    }
    return true;
}

Further reading: Android official documentation: Responding to Touch Events

Chapter 29: Getting Calculated View Dimensions

Section 29.1: Calculating initial View dimensions in an Activity

package com.example;

import android.app.Activity;
import android.os.Bundle;
import android.support.annotation.Nullable;
import android.util.Log;
import android.view.View;
import android.view.ViewTreeObserver;

public class ExampleActivity extends Activity {

    @Override
    protected void onCreate(@Nullable final Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_example);

        final View viewToMeasure = findViewById(R.id.view_to_measure);

        // viewToMeasure dimensions are not known at this point.
        // viewToMeasure.getWidth() and viewToMeasure.getHeight() both return 0,
        // regardless of on-screen size.

        viewToMeasure.getViewTreeObserver().addOnPreDrawListener(new ViewTreeObserver.OnPreDrawListener() {
            @Override
            public boolean onPreDraw() {
                // viewToMeasure is now measured and laid out, and displayed dimensions are known.
                logComputedViewDimensions(viewToMeasure.getWidth(), viewToMeasure.getHeight());

                // Remove this listener, as we have now successfully calculated the desired dimensions.
                viewToMeasure.getViewTreeObserver().removeOnPreDrawListener(this);

                // Always return true to continue drawing.
                return true;
            }
        });
    }

    private void logComputedViewDimensions(final int width, final int height) {
        Log.d("example", "viewToMeasure has width " + width);
        Log.d("example", "viewToMeasure has height " + height);
    }
}
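An alternative worth noting (not part of the original example, just a common companion technique): posting a Runnable to the view runs it after the view has been attached and, in practice, after the first measure and layout pass, so the real dimensions are available there too. A minimal sketch:

final View viewToMeasure = findViewById(R.id.view_to_measure);

// The Runnable is executed once the view is attached and processed on the UI
// thread's message queue, at which point getWidth()/getHeight() return the
// actual on-screen dimensions.
viewToMeasure.post(new Runnable() {
    @Override
    public void run() {
        logComputedViewDimensions(viewToMeasure.getWidth(), viewToMeasure.getHeight());
    }
});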
Chapter 30: Adding a FuseView to an Android Project

Export a Fuse.View from fusetools and use it inside an existing android project. Our goal is to export the entire hikr sample app and use it inside an Activity. The final work can be found at @lucamtudor/hikr-fuse-view.

Section 30.1: hikr app, just another android.view.View

Prerequisites

you should have fuse installed (https://www.fusetools.com/downloads)
you should have done the introduction tutorial
in terminal: fuse install android
in terminal: uno install Fuse.Views

Step 1

git clone https://github.com/fusetools/hikr

Step 2: Add package reference to Fuse.Views

Find the hikr.unoproj file inside the project root folder and add "Fuse.Views" to the "Packages" array.

{
    "RootNamespace":"",
    "Packages": [
        "Fuse",
        "FuseJS",
        "Fuse.Views"
    ],
    "Includes": [
        "*",
        "Modules/*.js:Bundle"
    ]
}

Step 3: Make a HikrApp component to hold the entire app

3.1 In the project root folder make a new file called HikrApp.ux and paste the contents of MainView.ux.

HikrApp.ux

<App Background="#022328">
    <iOS.StatusBarConfig Style="Light" />
    <Android.StatusBarConfig Color="#022328" />
    <Router ux:Name="router" />
    <ClientPanel>
        <Navigator DefaultPath="splash">
            <SplashPage ux:Template="splash" router="router" />
            <HomePage ux:Template="home" router="router" />
            <EditHikePage ux:Template="editHike" router="router" />
        </Navigator>
    </ClientPanel>
</App>

3.2 In HikrApp.ux:

replace the <App> tags with <Page>
add ux:Class="HikrApp" to the opening <Page>
remove <ClientPanel>, we don't have to worry anymore about the status bar or the bottom nav buttons

HikrApp.ux

<Page ux:Class="HikrApp" Background="#022328">
    <iOS.StatusBarConfig Style="Light" />
    <Android.StatusBarConfig Color="#022328" />
    <Router ux:Name="router" />
    <Navigator DefaultPath="splash">
        <SplashPage ux:Template="splash" router="router" />
        <HomePage ux:Template="home" router="router" />
        <EditHikePage ux:Template="editHike" router="router" />
    </Navigator>
</Page>

3.3 Use the newly created HikrApp component inside MainView.ux

Replace the content of the MainView.ux file with:

<App>
    <HikrApp/>
</App>

Our app is back to its normal behavior, but we have now extracted it into a separate component called HikrApp.

Step 4

Inside MainView.ux, replace the <App> tags with <ExportedViews> and add ux:Template="HikrAppView" to <HikrApp />

<ExportedViews>
    <HikrApp ux:Template="HikrAppView" />
</ExportedViews>

Remember the template HikrAppView, because we'll need it to get a reference to our view from Java.

Note. From the fuse docs: ExportedViews will behave as App when doing normal fuse preview and uno build

Not true. You will get this error when previewing from Fuse Studio:

Error: Couldn't find an App tag in any of the included UX files. Have you forgot to include the UX file that contains the app tag?
Step 5

Wrap SplashPage.ux's <DockPanel> in a <GraphicsView>

<Page ux:Class="SplashPage">
    <Router ux:Dependency="router" />
    <JavaScript File="SplashPage.js" />

    <GraphicsView>
        <DockPanel ClipToBounds="true">
            <Video Layer="Background" File="../Assets/nature.mp4" IsLooping="true" AutoPlay="true" StretchMode="UniformToFill" Opacity="0.5">
                <Blur Radius="4.75" />
            </Video>

            <hikr.Text Dock="Bottom" Margin="10" Opacity=".5" TextAlignment="Center" FontSize="12">original video by <NAME></hikr.Text>

            <Grid RowCount="2">
                <StackPanel Alignment="VerticalCenter">
                    <hikr.Text Alignment="HorizontalCenter" FontSize="70">hikr</hikr.Text>
                    <hikr.Text Alignment="HorizontalCenter" Opacity=".5">get out there</hikr.Text>
                </StackPanel>
                <hikr.Button Text="Get Started" FontSize="18" Margin="50,0" Alignment="VerticalCenter" Clicked="{goToHomePage}" />
            </Grid>
        </DockPanel>
    </GraphicsView>
</Page>

Step 6

Export the fuse project as an aar library:

in terminal, in root project folder: uno clean
in terminal, in root project folder: uno build -t=android -DLIBRARY

Step 7

Prepare your android project:

copy the aar from .../rootHikeProject/build/Android/Debug/app/build/outputs/aar/app-debug.aar to .../androidRootProject/app/libs

add flatDir { dirs 'libs' } to the root build.gradle file:

// Top-level build file where you can add configuration options common to all sub-projects/modules.
buildscript {
    ...
}
...
allprojects {
    repositories {
        jcenter()
        flatDir {
            dirs 'libs'
        }
    }
}
...

add compile(name: 'app-debug', ext: 'aar') to dependencies in app/build.gradle:

apply plugin: 'com.android.application'

android {
    compileSdkVersion 25
    buildToolsVersion "25.0.2"
    defaultConfig {
        applicationId "com.shiftstudio.fuseviewtest"
        minSdkVersion 16
        targetSdkVersion 25
        versionCode 1
        versionName "1.0"
        testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner"
    }
    buildTypes {
        release {
            minifyEnabled false
            proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
        }
    }
}

dependencies {
    compile(name: 'app-debug', ext: 'aar')
    compile fileTree(dir: 'libs', include: ['*.jar'])
    androidTestCompile('com.android.support.test.espresso:espresso-core:2.2.2', {
        exclude group: 'com.android.support', module: 'support-annotations'
    })
    compile 'com.android.support:appcompat-v7:25.3.1'
    testCompile 'junit:junit:4.12'
}

add the following properties to the activity inside AndroidManifest.xml:

android:launchMode="singleTask"
android:taskAffinity=""
android:configChanges="orientation|keyboardHidden|screenSize|smallestScreenSize"

Your AndroidManifest.xml will look like this:

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.shiftstudio.fuseviewtest">

    <application
        android:allowBackup="true"
        android:icon="@mipmap/ic_launcher"
        android:label="@string/app_name"
        android:supportsRtl="true"
        android:theme="@style/AppTheme">
        <activity
            android:name=".MainActivity"
            android:launchMode="singleTask"
            android:taskAffinity=""
            android:configChanges="orientation|keyboardHidden|screenSize|smallestScreenSize">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>
    </application>
</manifest>
Step 8: Show the Fuse.View HikrAppView in your Activity

Note that your Activity needs to inherit FuseViewsActivity.

public class MainActivity extends FuseViewsActivity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        final ViewHandle fuseHandle = ExportedViews.instantiate("HikrAppView");

        final FrameLayout root = (FrameLayout) findViewById(R.id.fuse_root);
        final View fuseApp = fuseHandle.getView();
        root.addView(fuseApp);
    }
}

activity_main.xml

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:id="@+id/activity_main"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical"
    android:paddingBottom="@dimen/activity_vertical_margin"
    android:paddingLeft="@dimen/activity_horizontal_margin"
    android:paddingRight="@dimen/activity_horizontal_margin"
    android:paddingTop="@dimen/activity_vertical_margin"
    tools:context="com.shiftstudio.fuseviewtest.MainActivity">

    <TextView
        android:layout_width="wrap_content"
        android:layout_gravity="center_horizontal"
        android:textSize="24sp"
        android:textStyle="bold"
        android:layout_height="wrap_content"
        android:text="Hello World, from Kotlin" />

    <FrameLayout
        android:id="@+id/fuse_root"
        android:layout_width="match_parent"
        android:layout_height="match_parent">

        <TextView
            android:layout_width="wrap_content"
            android:text="THIS IS FROM NATIVE.\nBEHIND FUSE VIEW"
            android:layout_gravity="center"
            android:textStyle="bold"
            android:textSize="30sp"
            android:background="@color/colorAccent"
            android:textAlignment="center"
            android:layout_height="wrap_content" />
    </FrameLayout>
</LinearLayout>

Note

When you press the back button on android, the app crashes. You can follow the issue on the fuse forum.

A/libc: Fatal signal 11 (SIGSEGV), code 1, fault addr 0xdeadcab1 in tid 18026 (io.fuseviewtest)
[ 05-25 11:52:33.658 16567:16567 W/ ] debuggerd: handling request: pid=18026 uid=10236 gid=10236 tid=18026

And the final result is something like this. You can also find a short clip on github.

Chapter 31: Supporting Screens With Different Resolutions, Sizes

Section 31.1: Using configuration qualifiers

Android supports several configuration qualifiers that allow you to control how the system selects your alternative resources based on the characteristics of the current device screen. A configuration qualifier is a string that you can append to a resource directory in your Android project and specifies the configuration for which the resources inside are designed.

To use a configuration qualifier:

1. Create a new directory in your project's res/ directory and name it using the format: <resources_name>-<qualifier>. <resources_name> is the standard resource name (such as drawable or layout).
2. <qualifier> is a configuration qualifier, specifying the screen configuration for which these resources are to be used (such as hdpi or xlarge).

For example, the following application resource directories provide different layout designs for different screen sizes and different drawables. Use the mipmap/ folders for launcher icons.
res/layout/my_layout.xml                // layout for normal screen size ("default")
res/layout-large/my_layout.xml          // layout for large screen size
res/layout-xlarge/my_layout.xml         // layout for extra-large screen size
res/layout-xlarge-land/my_layout.xml    // layout for extra-large in landscape orientation

res/drawable-mdpi/graphic.png           // bitmap for medium-density
res/drawable-hdpi/graphic.png           // bitmap for high-density
res/drawable-xhdpi/graphic.png          // bitmap for extra-high-density
res/drawable-xxhdpi/graphic.png         // bitmap for extra-extra-high-density

res/mipmap-mdpi/my_icon.png             // launcher icon for medium-density
res/mipmap-hdpi/my_icon.png             // launcher icon for high-density
res/mipmap-xhdpi/my_icon.png            // launcher icon for extra-high-density
res/mipmap-xxhdpi/my_icon.png           // launcher icon for extra-extra-high-density
res/mipmap-xxxhdpi/my_icon.png          // launcher icon for extra-extra-extra-high-density

Section 31.2: Converting dp and sp to pixels

When you need to set a pixel value for something like Paint.setTextSize but still want it to be scaled based on the device, you can convert dp and sp values.

DisplayMetrics metrics = Resources.getSystem().getDisplayMetrics();
float pixels = TypedValue.applyDimension(TypedValue.COMPLEX_UNIT_SP, 12f, metrics);

DisplayMetrics metrics = Resources.getSystem().getDisplayMetrics();
float pixels = TypedValue.applyDimension(TypedValue.COMPLEX_UNIT_DIP, 12f, metrics);

Alternatively, you can convert a dimension resource to pixels if you have a context to load the resource from.

<?xml version="1.0" encoding="utf-8"?>
<resources>
    <dimen name="size_in_sp">12sp</dimen>
    <dimen name="size_in_dp">12dp</dimen>
</resources>

// Get the exact dimension specified by the resource
float pixels = context.getResources().getDimension(R.dimen.size_in_sp);
float pixels = context.getResources().getDimension(R.dimen.size_in_dp);

// Get the dimension specified by the resource for use as a size.
// The value is rounded down to the nearest integer but is at least 1px.
int pixels = context.getResources().getDimensionPixelSize(R.dimen.size_in_sp);
int pixels = context.getResources().getDimensionPixelSize(R.dimen.size_in_dp);

// Get the dimension specified by the resource for use as an offset.
// The value is rounded down to the nearest integer and can be 0px.
int pixels = context.getResources().getDimensionPixelOffset(R.dimen.size_in_sp);
int pixels = context.getResources().getDimensionPixelOffset(R.dimen.size_in_dp);
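Going the other way, from raw pixels back to dp or sp, is not covered above, but follows directly from the same DisplayMetrics fields: divide by density for dp and by scaledDensity for sp. A small sketch (the helper names are ours, not from the original text):

// Convert a raw pixel value back to dp by dividing by the screen density.
public static float pxToDp(float px) {
    DisplayMetrics metrics = Resources.getSystem().getDisplayMetrics();
    return px / metrics.density;
}

// For sp, divide by scaledDensity instead, which additionally accounts for
// the user's font size preference.
public static float pxToSp(float px) {
    DisplayMetrics metrics = Resources.getSystem().getDisplayMetrics();
    return px / metrics.scaledDensity;
}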
Section 31.3: Text size and different android screen sizes

Sometimes, it's better to have only three options:

style="@android:style/TextAppearance.Small"
style="@android:style/TextAppearance.Medium"
style="@android:style/TextAppearance.Large"

Use small and large to differentiate from the normal screen size.

<TextView
    android:id="@+id/TextViewTopBarTitle"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    style="@android:style/TextAppearance.Small"/>

For normal, you don't have to specify anything.

<TextView
    android:id="@+id/TextViewTopBarTitle"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"/>

Using this, you can avoid testing and specifying dimensions for different screen sizes.

Chapter 32: ViewFlipper

A ViewFlipper is a ViewAnimator that switches between two or more views that have been added to it. Only one child is shown at a time. If requested, the ViewFlipper can automatically flip between each child at a regular interval.

Section 32.1: ViewFlipper with image sliding

XML file:

<ViewFlipper
    android:id="@+id/viewflip"
    android:layout_width="match_parent"
    android:layout_height="250dp"
    android:layout_weight="1" />

Java code:

public class BlankFragment extends Fragment {
    ViewFlipper viewFlipper;
    FragmentManager fragmentManager;

    int gallery_grid_Images[] = {R.drawable.image1, R.drawable.image2, R.drawable.image3,
            R.drawable.image1, R.drawable.image2, R.drawable.image3, R.drawable.image1,
            R.drawable.image2, R.drawable.image3, R.drawable.image1};

    @Override
    public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) {
        View rootView = inflater.inflate(R.layout.fragment_blank, container, false);
        viewFlipper = (ViewFlipper) rootView.findViewById(R.id.viewflip);
        for (int i = 0; i < gallery_grid_Images.length; i++) {
            // This will create dynamic image views and add them to the ViewFlipper.
            setFlipperImage(gallery_grid_Images[i]);
        }
        return rootView;
    }

    private void setFlipperImage(int res) {
        Log.i("Set Flipper Called", res + "");
        ImageView image = new ImageView(getContext());
        image.setBackgroundResource(res);
        viewFlipper.addView(image);
        viewFlipper.setFlipInterval(1000);
        viewFlipper.setAutoStart(true);
    }
}

Chapter 33: Design Patterns

Design patterns are formalized best practices that the programmer can use to solve common problems when designing an application or system. Design patterns can speed up the development process by providing tested, proven development paradigms. Reusing design patterns helps to prevent subtle issues that can cause major problems, and it also improves code readability for coders and architects who are familiar with the patterns.

Section 33.1: Observer pattern

The observer pattern is a common pattern, which is widely used in many contexts. A real example can be taken from YouTube: When you like a channel and want to get all news and watch new videos from this channel, you have to subscribe to that channel. Then, whenever this channel publishes any news, you (and all other subscribers) will receive a notification.

An observer will have two components. One is a broadcaster (channel) and the other is a receiver (you or any other subscriber). The broadcaster will handle all receiver instances that subscribed to it. When the broadcaster fires a new event, it will announce this to all receiver instances. When the receiver receives an event, it will have to react to that event, for example, by turning on YouTube and playing the new video.

Implementing the observer pattern

1. The broadcaster has to provide methods that permit receivers to subscribe and unsubscribe to it. When the broadcaster fires an event, subscribers need to be notified that an event has occurred:

class Channel {
    private List<Subscriber> subscribers;

    public void subscribe(Subscriber sub) {
        // Add new subscriber.
    }

    public void unsubscribe(Subscriber sub) {
        // Remove subscriber.
    }

    public void newEvent() {
        // Notification event for all subscribers.
    }
}

2. The receiver needs to implement a method that handles the event from the broadcaster:

interface Subscriber {
    void doSubscribe(Channel channel);
    void doUnsubscribe(Channel channel);
    void handleEvent(); // Process the new event.
}
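The two skeletons above leave the method bodies empty. A minimal sketch of how they could be filled in and wired together follows; the ConcreteSubscriber class name and the println output are ours, added only for illustration:

import java.util.ArrayList;
import java.util.List;

class Channel {
    private final List<Subscriber> subscribers = new ArrayList<>();

    public void subscribe(Subscriber sub) {
        subscribers.add(sub);
    }

    public void unsubscribe(Subscriber sub) {
        subscribers.remove(sub);
    }

    public void newEvent() {
        // Announce the event to every registered subscriber.
        for (Subscriber sub : subscribers) {
            sub.handleEvent();
        }
    }
}

class ConcreteSubscriber implements Subscriber {
    @Override
    public void doSubscribe(Channel channel) {
        channel.subscribe(this);
    }

    @Override
    public void doUnsubscribe(Channel channel) {
        channel.unsubscribe(this);
    }

    @Override
    public void handleEvent() {
        System.out.println("New video published, start watching...");
    }
}

// Usage:
// Channel channel = new Channel();
// Subscriber you = new ConcreteSubscriber();
// you.doSubscribe(channel);
// channel.newEvent(); // calls you.handleEvent()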
Section 33.2: Singleton Class Example

Java Singleton Pattern

To implement the Singleton pattern, we have different approaches, but all of them share the following common concepts:

Private constructor to restrict instantiation of the class from other classes.
Private static variable of the same class that is the only instance of the class.
Public static method that returns the instance of the class; this is the global access point for the outer world to get the instance of the singleton class.

/**
 * Singleton class.
 */
public final class Singleton {

    /**
     * Private constructor so nobody can instantiate the class.
     */
    private Singleton() {}

    /**
     * Static to class instance of the class.
     */
    private static final Singleton INSTANCE = new Singleton();

    /**
     * To be called by user to obtain instance of the class.
     *
     * @return instance of the singleton.
     */
    public static Singleton getInstance() {
        return INSTANCE;
    }
}

Chapter 34: Activity

Parameter    Details
Intent       Can be used with startActivity to launch an Activity
Bundle       A mapping from String keys to various Parcelable values
Context      Interface to global information about an application environment

An Activity represents a single screen with a user interface (UI). An Android app may have more than one Activity; for example, an email app can have one activity to list all the emails, another activity to show email contents, and yet another activity to compose a new email. All the activities in an app work together to create a perfect user experience.

Section 34.1: Activity launchMode

Launch mode defines the behaviour of a new or existing activity in the task.

There are four possible launch modes:

standard
singleTop
singleTask
singleInstance

It should be defined in the android manifest in the <activity/> element as the android:launchMode attribute.

<activity
    android:launchMode=["standard" | "singleTop" | "singleTask" | "singleInstance"] />

Standard: Default value. If this mode is set, a new activity will always be created for each new intent, so it's possible to get many activities of the same type. The new activity will be placed on the top of the task. There is some difference between android versions: if the activity is started from another application, on Android <= 4.4 it will be placed on the same task as the starter application, but on >= 5.0 a new task will be created.

SingleTop: This mode is almost the same as standard. Many instances of a singleTop activity can be created. The difference is, if an instance of the activity already exists on the top of the current stack, onNewIntent() will be called instead of creating a new instance.

SingleTask: An activity with this launch mode can have only one instance in the system. A new task for the activity will be created if it doesn't exist. Otherwise, the task with the activity will be moved to the front and onNewIntent will be called.

SingleInstance: This mode is similar to singleTask. The difference is that a task that holds a singleInstance activity can have only this activity and nothing more. When a singleInstance activity creates another activity, a new task will be created to place that activity.
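To illustrate the singleTop behaviour described above: when the activity is already on top, the new Intent is delivered to onNewIntent() rather than to a fresh instance. A minimal sketch of handling it (the SearchActivity name is ours, not from the original text):

public class SearchActivity extends AppCompatActivity {

    @Override
    protected void onNewIntent(Intent intent) {
        super.onNewIntent(intent);
        // With launchMode="singleTop", the existing instance receives the new
        // intent here instead of being recreated. getIntent() still returns
        // the original intent unless we update it ourselves:
        setIntent(intent);
        // ... refresh the UI from the new intent's extras ...
    }
}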
Section 34.2: Exclude an activity from back-stack history

Let there be Activity B that can be opened, and can further start more Activities. But the user should not encounter it when navigating back in the task's activities.

The simplest solution is to set the attribute noHistory to true for that <activity> tag in AndroidManifest.xml:

<activity
    android:name=".B"
    android:noHistory="true">

This same behavior is also possible from code if B calls finish() before starting the next activity:

finish();
startActivity(new Intent(context, C.class));

Typical usage of the noHistory flag is with "Splash Screen" or Login Activities.

Section 34.3: Android Activity LifeCycle Explained

Assume an application with a MainActivity which can call the Next Activity using a button click.

public class MainActivity extends AppCompatActivity {

    private final String LOG_TAG = MainActivity.class.getSimpleName();

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        Log.d(LOG_TAG, "calling onCreate from MainActivity");
    }

    @Override
    protected void onStart() {
        super.onStart();
        Log.d(LOG_TAG, "calling onStart from MainActivity");
    }

    @Override
    protected void onResume() {
        super.onResume();
        Log.d(LOG_TAG, "calling onResume from MainActivity");
    }

    @Override
    protected void onPause() {
        super.onPause();
        Log.d(LOG_TAG, "calling onPause from MainActivity");
    }

    @Override
    protected void onStop() {
        super.onStop();
        Log.d(LOG_TAG, "calling onStop from MainActivity");
    }

    @Override
    protected void onDestroy() {
        super.onDestroy();
        Log.d(LOG_TAG, "calling onDestroy from MainActivity");
    }

    @Override
    protected void onRestart() {
        super.onRestart();
        Log.d(LOG_TAG, "calling onRestart from MainActivity");
    }

    public void toNextActivity() {
        Log.d(LOG_TAG, "calling Next Activity");
        Intent intent = new Intent(this, NextActivity.class);
        startActivity(intent);
    }
}

and

public class NextActivity extends AppCompatActivity {

    private final String LOG_TAG = NextActivity.class.getSimpleName();

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_next);
        Log.d(LOG_TAG, "calling onCreate from Next Activity");
    }

    @Override
    protected void onStart() {
        super.onStart();
        Log.d(LOG_TAG, "calling onStart from Next Activity");
    }
And again when it wakes up 08:11:55.922 D/MainActivity: calling onRestart from MainActivity 08:11:55.962 D/MainActivity: calling onStart from MainActivity 08:11:55.962 D/MainActivity: calling onResume from MainActivity are called Case1: When Next Activity is called from Main Activity D/MainActivity: calling Next Activity D/MainActivity: calling onPause from MainActivity D/NextActivity: calling onCreate from Next Activity D/NextActivity: calling onStart from Next Activity D/NextActivity: calling onResume from Next Activity D/MainActivity: calling onStop from MainActivity When Returning back to the Main Activity from Next Activity using back button D/NextActivity: calling onPause from Next Activity D/MainActivity: calling onRestart from MainActivity D/MainActivity: calling onStart from MainActivity D/MainActivity: calling onResume from MainActivity GoalKicker.com Android Notes for Professionals 240 D/NextActivity: calling onStop from Next Activity D/NextActivity: calling onDestroy from Next Activity Case2: When Activity is partially obscured (When overview button is pressed) or When app goes to background and another app completely obscures it D/MainActivity: calling onPause from MainActivity D/MainActivity: calling onStop from MainActivity and when the app is back in the foreground ready to accept User inputs, D/MainActivity: calling onRestart from MainActivity D/MainActivity: calling onStart from MainActivity D/MainActivity: calling onResume from MainActivity are called Case3: When an activity is called to fulll implicit intent and user has make a selection. For eg., when share button is pressed and user has to select an app from the list of applications shown D/MainActivity: calling onPause from MainActivity The activity is visible but not active now. When the selection is done and app is active D/MainActivity: calling onResume from MainActivity is called Case4: When the app is killed in the background(to free resources for another foreground app), onPause(for prehoneycomb device) or onStop(for since honeycomb device) will be the last to be called before the app is terminated. onCreate and onDestroy will be called utmost once each time the application is run. But the onPause, onStop, onRestart, onStart, onResume maybe called many times during the lifecycle. 
Section 34.4: End Application with exclude from Recents

First define an ExitActivity in the AndroidManifest.xml:

<activity
    android:name="com.your_example_app.activities.ExitActivity"
    android:autoRemoveFromRecents="true"
    android:theme="@android:style/Theme.NoDisplay" />

Afterwards the ExitActivity class:

/**
 * Activity to exit the application without staying in the stack of last opened applications
 */
public class ExitActivity extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        if (Utils.hasLollipop()) {
            finishAndRemoveTask();
        } else if (Utils.hasJellyBean()) {
            finishAffinity();
        } else {
            finish();
        }
    }

    /**
     * Exit Application and Exclude from Recents
     *
     * @param context Context to use
     */
    public static void exitApplication(Context context) {
        Intent intent = new Intent(context, ExitActivity.class);
        intent.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK
                | Intent.FLAG_ACTIVITY_CLEAR_TASK
                | Intent.FLAG_ACTIVITY_NO_ANIMATION
                | Intent.FLAG_ACTIVITY_EXCLUDE_FROM_RECENTS);
        context.startActivity(intent);
    }
}
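The Utils.hasLollipop() and Utils.hasJellyBean() helpers used above are not shown in this example. They are presumably simple SDK-level checks; a minimal sketch of what they might look like:

public final class Utils {

    private Utils() {}

    // True on Android 5.0 (API 21) and above, where finishAndRemoveTask() exists.
    public static boolean hasLollipop() {
        return Build.VERSION.SDK_INT >= Build.VERSION_CODES.LOLLIPOP;
    }

    // True on Android 4.1 (API 16) and above, where finishAffinity() exists.
    public static boolean hasJellyBean() {
        return Build.VERSION.SDK_INT >= Build.VERSION_CODES.JELLY_BEAN;
    }
}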
Section 34.5: Presenting UI with setContentView

The Activity class takes care of creating a window for you in which you can place your UI with setContentView.

There are three setContentView methods:

setContentView(int layoutResID) - Set the activity content from a layout resource.
setContentView(View view) - Set the activity content to an explicit view.
setContentView(View view, ViewGroup.LayoutParams params) - Set the activity content to an explicit view with provided params.

When setContentView is called, this view is placed directly into the activity's view hierarchy. It can itself be a complex view hierarchy.

Examples

Set content from a resource file:

Add a resource file (main.xml in this example) with a view hierarchy:

<?xml version="1.0" encoding="utf-8"?>
<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent" >

    <TextView
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Hello" />
</FrameLayout>

Set it as content in the activity:

public final class MainActivity extends Activity {
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // The resource will be inflated,
        // adding all top-level views to the activity.
        setContentView(R.layout.main);
    }
}

Set content to an explicit view:

public final class MainActivity extends Activity {
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Creating view with container
        final FrameLayout root = new FrameLayout(this);
        final TextView text = new TextView(this);
        text.setText("Hello");
        root.addView(text);
        // Set container as content view
        setContentView(root);
    }
}

Section 34.6: Up Navigation for Activities

Up navigation is done in android by adding android:parentActivityName="" in Manifest.xml to the activity tag. Basically, with this tag you tell the system about the parent activity of an activity.

How is it done?

<uses-permission android:name="android.permission.INTERNET" />

<application
    android:name=".SkillSchoolApplication"
    android:allowBackup="true"
    android:icon="@mipmap/ic_launcher"
    android:label="@string/app_name"
    android:supportsRtl="true"
    android:theme="@style/AppTheme">
    <activity
        android:name=".ui.activities.SplashActivity"
        android:theme="@style/SplashTheme">
        <intent-filter>
            <action android:name="android.intent.action.MAIN" />
            <category android:name="android.intent.category.LAUNCHER" />
        </intent-filter>
    </activity>
    <activity android:name=".ui.activities.MainActivity" />
    <activity
        android:name=".ui.activities.HomeActivity"
        android:parentActivityName=".ui.activities.MainActivity" />
    <!-- HERE I JUST TOLD THE SYSTEM THAT MainActivity IS THE PARENT OF HomeActivity -->
</application>

Now when I click on the arrow inside the toolbar of HomeActivity, it will take me back to the parent activity.

Java code

Here I will write the appropriate Java code required for this functionality.

public class HomeActivity extends AppCompatActivity {
    @BindView(R.id.toolbar)
    Toolbar toolbar;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_home);
        ButterKnife.bind(this);
        // Since I am using a custom toolbar, I am setting a reference of that toolbar
        // to the ActionBar. If you are not using a custom one, you can simply skip
        // this and move to the next line.
        setSupportActionBar(toolbar);
        getSupportActionBar().setDisplayHomeAsUpEnabled(true); // this will show the back arrow in the toolbar.
    }
}

If you run this code you will see that when you press the back arrow it takes you back to MainActivity. For further understanding of Up Navigation I would recommend reading the docs.

You can further customize this behaviour upon your needs by overriding:

@Override
public boolean onOptionsItemSelected(MenuItem item) {
    switch (item.getItemId()) {
        // Respond to the action bar's Up/Home button
        case android.R.id.home:
            NavUtils.navigateUpFromSameTask(this); // Here you will write your logic for handling up navigation
            return true;
    }
    return super.onOptionsItemSelected(item);
}

Simple Hack

This is a simple hack which is mostly used to navigate to the parent activity if the parent is in the back stack. It works by calling onBackPressed() if the id is equal to android.R.id.home:

@Override
public boolean onOptionsItemSelected(MenuItem item) {
    int id = item.getItemId();
    switch (id) {
        case android.R.id.home:
            onBackPressed();
            return true;
    }
    return super.onOptionsItemSelected(item);
}

Section 34.7: Clear your current Activity stack and launch a new Activity

If you want to clear your current Activity stack and launch a new Activity (for example, logging out of the app and launching a log in Activity), there appear to be two approaches.

1. Target (API >= 16)

Calling finishAffinity() from an Activity, as shown in the sketch after this list.

2. Target (11 <= API < 16)

Intent intent = new Intent(this, LoginActivity.class);
intent.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK | Intent.FLAG_ACTIVITY_CLEAR_TASK | Intent.FLAG_ACTIVITY_CLEAR_TOP);
startActivity(intent);
finish();
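For approach 1, the call site could look like the following sketch. LoginActivity is the example target reused from approach 2; starting it first and then calling finishAffinity() keeps the new screen visible while this activity and all activities below it in the task are finished.

// API >= 16: finish this activity and all activities immediately below it
// in the current task that share its affinity.
Intent intent = new Intent(this, LoginActivity.class);
startActivity(intent);
finishAffinity();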
Chapter 35: Activity Recognition

Activity recognition is the detection of a user's physical activity in order to perform certain actions on the device, such as taking points when a drive is detected, turning wifi off when the phone is still, or putting the ring volume to max when the user is walking.

Section 35.1: Google Play ActivityRecognitionAPI

This is just a simple example of how to use the GooglePlay Service's ActivityRecognitionApi. Although this is a great library, it does not work on devices that do not have Google Play Services installed.

Docs for the ActivityRecognition API

Manifest

<!-- This is needed to use Activity Recognition! -->
<uses-permission android:name="com.google.android.gms.permission.ACTIVITY_RECOGNITION" />

<application
    android:allowBackup="true"
    android:icon="@mipmap/ic_launcher"
    android:label="@string/app_name"
    android:roundIcon="@mipmap/ic_launcher_round"
    android:supportsRtl="true"
    android:theme="@style/AppTheme">
    <activity android:name=".MainActivity">
        <intent-filter>
            <action android:name="android.intent.action.MAIN" />
            <category android:name="android.intent.category.LAUNCHER" />
        </intent-filter>
    </activity>

    <receiver android:name=".ActivityReceiver" />
</application>

MainActivity.java

public class MainActivity extends AppCompatActivity implements GoogleApiClient.ConnectionCallbacks, GoogleApiClient.OnConnectionFailedListener {

    private GoogleApiClient apiClient;
    private LocalBroadcastManager localBroadcastManager;
    private BroadcastReceiver localActivityReceiver;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        apiClient = new GoogleApiClient.Builder(this)
                .addApi(ActivityRecognition.API)
                .addConnectionCallbacks(this)
                .addOnConnectionFailedListener(this)
                .build();

        // This just gets the activity intent from the ActivityReceiver class
        localBroadcastManager = LocalBroadcastManager.getInstance(this);
        localActivityReceiver = new BroadcastReceiver() {
            @Override
            public void onReceive(Context context, Intent intent) {
                ActivityRecognitionResult recognitionResult = ActivityRecognitionResult.extractResult(intent);
                TextView textView = (TextView) findViewById(R.id.activityText);

                // This is just to get the activity name from an obfuscated internal
                // method. Use at your own risk; it may break with any Play Services
                // update (see the helper sketch after this section for a stable alternative).
                textView.setText(DetectedActivity.zzkf(recognitionResult.getMostProbableActivity().getType()));
            }
        };
    }

    @Override
    protected void onResume() {
        super.onResume();
        // Register local broadcast receiver
        localBroadcastManager.registerReceiver(localActivityReceiver, new IntentFilter("activity"));
        // Connect google api client
        apiClient.connect();
    }

    @Override
    protected void onPause() {
        super.onPause();
        // Unregister for activity recognition
        ActivityRecognition.ActivityRecognitionApi.removeActivityUpdates(apiClient,
                PendingIntent.getBroadcast(this, 0, new Intent(this, ActivityReceiver.class), PendingIntent.FLAG_UPDATE_CURRENT));
        // Disconnect api client
        apiClient.disconnect();
        // Unregister local receiver
        localBroadcastManager.unregisterReceiver(localActivityReceiver);
    }

    @Override
    public void onConnected(@Nullable Bundle bundle) {
        // Only register for activity recognition if the google api client has connected
        ActivityRecognition.ActivityRecognitionApi.requestActivityUpdates(apiClient, 0,
                PendingIntent.getBroadcast(this, 0, new Intent(this, ActivityReceiver.class), PendingIntent.FLAG_UPDATE_CURRENT));
    }

    @Override
    public void onConnectionSuspended(int i) {
    }

    @Override
    public void onConnectionFailed(@NonNull ConnectionResult connectionResult) {
    }
}

ActivityReceiver

public class ActivityReceiver extends BroadcastReceiver {

    @Override
    public void onReceive(Context context, Intent intent) {
        LocalBroadcastManager.getInstance(context).sendBroadcast(intent.setAction("activity"));
    }
}
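Since DetectedActivity.zzkf() is an obfuscated internal method that can disappear in any Play Services release, a safer option is to map the public type constants yourself. A small sketch (the helper name is ours, not from the original example):

// Maps a DetectedActivity type to a human-readable name using only public constants.
public static String activityName(int activityType) {
    switch (activityType) {
        case DetectedActivity.IN_VEHICLE: return "In vehicle";
        case DetectedActivity.ON_BICYCLE: return "On bicycle";
        case DetectedActivity.ON_FOOT:    return "On foot";
        case DetectedActivity.WALKING:    return "Walking";
        case DetectedActivity.RUNNING:    return "Running";
        case DetectedActivity.STILL:      return "Still";
        case DetectedActivity.TILTING:    return "Tilting";
        default:                          return "Unknown";
    }
}

// Usage inside the receiver above:
// textView.setText(activityName(recognitionResult.getMostProbableActivity().getType()));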
Section 35.2: PathSense Activity Recognition

PathSense activity recognition is another good library for devices which don't have Google Play Services, as they have built their own activity recognition model, but it requires developers to register at http://developer.pathsense.com to get an API key and Client ID.

Manifest

<application
    android:allowBackup="true"
    android:icon="@mipmap/ic_launcher"
    android:label="@string/app_name"
    android:roundIcon="@mipmap/ic_launcher_round"
    android:supportsRtl="true"
    android:theme="@style/AppTheme">
    <activity android:name=".MainActivity">
        <intent-filter>
            <action android:name="android.intent.action.MAIN" />
            <category android:name="android.intent.category.LAUNCHER" />
        </intent-filter>
    </activity>

    <receiver android:name=".ActivityReceiver" />

    <!-- You need to acquire these from their website (http://developer.pathsense.com) -->
    <meta-data
        android:name="com.pathsense.android.sdk.CLIENT_ID"
        android:value="YOUR_CLIENT_ID" />
    <meta-data
        android:name="com.pathsense.android.sdk.API_KEY"
        android:value="YOUR_API_KEY" />
</application>

MainActivity.java

public class MainActivity extends AppCompatActivity {

    private PathsenseLocationProviderApi pathsenseLocationProviderApi;
    private LocalBroadcastManager localBroadcastManager;
    private BroadcastReceiver localActivityReceiver;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        pathsenseLocationProviderApi = PathsenseLocationProviderApi.getInstance(this);

        // This just gets the activity intent from the ActivityReceiver class
        localBroadcastManager = LocalBroadcastManager.getInstance(this);
        localActivityReceiver = new BroadcastReceiver() {
            @Override
            public void onReceive(Context context, Intent intent) {
                // The detectedActivities object is passed as a serializable
                PathsenseDetectedActivities detectedActivities = (PathsenseDetectedActivities) intent.getSerializableExtra("ps");
                TextView textView = (TextView) findViewById(R.id.activityText);
                textView.setText(detectedActivities.getMostProbableActivity().getDetectedActivity().name());
            }
        };
    }

    @Override
    protected void onResume() {
        super.onResume();
        // Register local broadcast receiver
        localBroadcastManager.registerReceiver(localActivityReceiver, new IntentFilter("activity"));
        // This gives an update every time it receives one, even if it was the same as the last update
        pathsenseLocationProviderApi.requestActivityUpdates(ActivityReceiver.class);
        // This gives updates only when it changes (ON_FOOT -> IN_VEHICLE for example)
        // pathsenseLocationProviderApi.requestActivityChanges(ActivityReceiver.class);
    }

    @Override
    protected void onPause() {
        super.onPause();
        pathsenseLocationProviderApi.removeActivityUpdates();
        // pathsenseLocationProviderApi.removeActivityChanges();
        // Unregister local receiver
        localBroadcastManager.unregisterReceiver(localActivityReceiver);
    }
}

ActivityReceiver.java

// You don't have to use their broadcastreceiver, but it's best to do so, and just pass the result
// as needed to another class.
public class ActivityReceiver extends PathsenseActivityRecognitionReceiver {

    @Override
    protected void onDetectedActivities(Context context, PathsenseDetectedActivities pathsenseDetectedActivities) {
        Intent intent = new Intent("activity").putExtra("ps", pathsenseDetectedActivities);
        LocalBroadcastManager.getInstance(context).sendBroadcast(intent);
    }
}

Chapter 36: Split Screen / Multi-Screen Activities

Section 36.1: Split Screen introduced in Android Nougat

Set this attribute in your manifest's <activity> or <application> element to enable or disable multi-window display:

android:resizeableActivity=["true" | "false"]

If this attribute is set to true, the activity can be launched in split-screen and freeform modes.
If the attribute is set to false, the activity does not support multi-window mode. If this value is false and the user attempts to launch the activity in multi-window mode, the activity takes over the full screen.

If your app targets API level 24 but you do not specify a value for this attribute, the attribute's value defaults to true.

The following code shows how to specify an activity's default size and location, and its minimum size, when the activity is displayed in freeform mode:

<!-- These are default values suggested by Google. -->
<activity android:name=".MyActivity">
    <layout
        android:defaultHeight="500dp"
        android:defaultWidth="600dp"
        android:gravity="top|end"
        android:minHeight="450dp"
        android:minWidth="300dp" />
</activity>

Disabled features in multi-window mode

Certain features are disabled or ignored when a device is in multi-window mode, because they don't make sense for an activity which may be sharing the device screen with other activities or apps. Such features include:

1. Some System UI customization options are disabled; for example, apps cannot hide the status bar if they are not running in full-screen mode.
2. The system ignores changes to the android:screenOrientation attribute.

If your app targets API level 23 or lower

If your app targets API level 23 or lower and the user attempts to use the app in multi-window mode, the system forcibly resizes the app unless the app declares a fixed orientation.

If your app does not declare a fixed orientation, you should launch your app on a device running Android 7.0 or higher and attempt to put the app in split-screen mode. Verify that the user experience is acceptable when the app is forcibly resized.

If the app declares a fixed orientation, you should attempt to put the app in multi-window mode. Verify that when you do so, the app remains in full-screen mode.

Chapter 37: Material Design

Material Design is a comprehensive guide for visual, motion, and interaction design across platforms and devices.

Section 37.1: Adding a Toolbar

A Toolbar is a generalization of ActionBar for use within application layouts. While an ActionBar is traditionally part of an Activity's opaque window decor controlled by the framework, a Toolbar may be placed at any arbitrary level of nesting within a view hierarchy. It can be added by performing the following steps:

1. Make sure the following dependency is added to your module's (e.g. app's) build.gradle file under dependencies:

   compile 'com.android.support:appcompat-v7:25.3.1'

2. Set the theme for your app to one that does not have an ActionBar. To do that, edit your styles.xml file under res/values, and set a Theme.AppCompat theme. In this example we are using Theme.AppCompat.NoActionBar as parent of your AppTheme:

   <style name="AppTheme" parent="Theme.AppCompat.NoActionBar">
       <item name="colorPrimary">@color/primary</item>
       <item name="colorPrimaryDark">@color/primaryDark</item>
       <item name="colorAccent">@color/accent</item>
   </style>

   You can also use Theme.AppCompat.Light.NoActionBar or Theme.AppCompat.DayNight.NoActionBar, or any other theme that does not inherently have an ActionBar.

3. Add the Toolbar to your activity layout:

   <android.support.v7.widget.Toolbar
       android:id="@+id/toolbar"
       android:layout_width="match_parent"
       android:layout_height="?attr/actionBarSize"
       android:background="?attr/colorPrimary"
       android:elevation="4dp"/>

   Below the Toolbar you can add the rest of your layout.

4. In your Activity, set the Toolbar as the ActionBar for this Activity.
Provided that you're using the appcompat library and an AppCompatActivity, you would use the setSupportActionBar() method:

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);
    final Toolbar toolbar = (Toolbar) findViewById(R.id.toolbar);
    setSupportActionBar(toolbar);
    //...
}

After performing the above steps, you can use the getSupportActionBar() method to manipulate the Toolbar that is set as the ActionBar. For example, you can set the title as shown below:

getSupportActionBar().setTitle("Activity Title");

For example, you can also set title and background color as shown below:

CharSequence title = "Your App Name";
SpannableString s = new SpannableString(title);
s.setSpan(new ForegroundColorSpan(Color.RED), 0, title.length(), Spannable.SPAN_EXCLUSIVE_EXCLUSIVE);
getSupportActionBar().setTitle(s);
getSupportActionBar().setBackgroundDrawable(new ColorDrawable(Color.argb(128, 0, 0, 0)));

Section 37.2: Buttons styled with Material Design

The AppCompat Support Library defines several useful styles for Buttons, each of which extend a base Widget.AppCompat.Button style that is applied to all buttons by default if you are using an AppCompat theme. This style helps ensure that all buttons look the same by default, following the Material Design specification. In this case the accent color is pink.

1. Simple Button: @style/Widget.AppCompat.Button

<Button
    style="@style/Widget.AppCompat.Button"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:layout_margin="16dp"
    android:text="@string/simple_button"/>

2. Colored Button: @style/Widget.AppCompat.Button.Colored

The Widget.AppCompat.Button.Colored style extends the Widget.AppCompat.Button style and automatically applies the accent color you selected in your app theme.

<Button
    style="@style/Widget.AppCompat.Button.Colored"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:layout_margin="16dp"
    android:text="@string/colored_button"/>

If you want to customize the background color without changing the accent color in your main theme, you can create a custom theme (extending the ThemeOverlay theme) for your Button and assign it to the button's android:theme attribute:

<Button
    style="@style/Widget.AppCompat.Button.Colored"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:layout_margin="16dp"
    android:theme="@style/MyButtonTheme"/>

Define the theme in res/values/themes.xml:

<style name="MyButtonTheme" parent="ThemeOverlay.AppCompat.Light">
    <item name="colorAccent">@color/my_color</item>
</style>

3. Borderless Button: @style/Widget.AppCompat.Button.Borderless

<Button
    style="@style/Widget.AppCompat.Button.Borderless"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:layout_margin="16dp"
    android:text="@string/borderless_button"/>

4. Borderless Colored Button: @style/Widget.AppCompat.Button.Borderless.Colored

<Button
    style="@style/Widget.AppCompat.Button.Borderless.Colored"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:layout_margin="16dp"
    android:text="@string/borderless_colored_button"/>

Section 37.3: Adding a FloatingActionButton (FAB)

In material design, a Floating Action Button represents the primary action in an Activity.
They are distinguished by a circled icon floating above the UI and have motion behaviors that include morphing, launching, and a transferring anchor point.

Make sure the following dependency is added to your app's build.gradle file under dependencies:

compile 'com.android.support:design:25.3.1'

Now add the FloatingActionButton to your layout file:

<android.support.design.widget.FloatingActionButton
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:layout_margin="16dp"
    android:src="@drawable/some_icon"/>

where the src attribute references the icon that should be used for the floating action.

The result should look something like this (presuming your accent color is Material Pink).

By default, the background color of your FloatingActionButton will be set to your theme's accent color. Also, note that a FloatingActionButton requires a margin around it to work properly. The recommended margin for the bottom is 16dp for phones and 24dp for tablets.

Here are properties which you can use to customize the FloatingActionButton further (assuming xmlns:app="http://schemas.android.com/apk/res-auto" is declared as a namespace at the top of your layout):

app:fabSize: Can be set to normal or mini to switch between a normal sized or a smaller version.
app:rippleColor: Sets the color of the ripple effect of your FloatingActionButton. Can be a color resource or hex string.
app:elevation: Can be a string, integer, boolean, color value, floating point, or dimension value.
app:useCompatPadding: Enable compat padding. May be a boolean value, such as true or false. Set to true to use compat padding on API 21 and later, in order to maintain a consistent look with older API levels.

You can find more examples about FAB here.

Section 37.4: RippleDrawable

The ripple touch effect was introduced with material design in Android 5.0 (API level 21), and the animation is implemented by the new RippleDrawable class.

Drawable that shows a ripple effect in response to state changes. The anchoring position of the ripple for a given state may be specified by calling setHotspot(float x, float y) with the corresponding state attribute identifier.

Version ≥ 5.0

In general, the ripple effect for regular buttons works by default in API 21 and above, and for other touchable views it can be achieved by specifying:

android:background="?android:attr/selectableItemBackground"

for ripples contained within the view, or:

android:background="?android:attr/selectableItemBackgroundBorderless"

for ripples that extend beyond the view's bounds.

For example, in the image below, B1 is a button that does not have any background, B2 is set up with

android:background="?android:attr/selectableItemBackground"

and B3 is set up with

android:background="?android:attr/selectableItemBackgroundBorderless"

(Image courtesy: http://blog.csdn.net/a396901990/article/details/40187203)

You can achieve the same in code using:

int[] attrs = new int[]{R.attr.selectableItemBackground};
TypedArray typedArray = getActivity().obtainStyledAttributes(attrs);
int backgroundResource = typedArray.getResourceId(0, 0);
myView.setBackgroundResource(backgroundResource);

Ripples can also be added to a view using the android:foreground attribute the same way as above. As the name suggests, in case the ripple is added to the foreground, the ripple will show up above any view it is added to (e.g.
an ImageView, a LinearLayout containing multiple views, etc.).

If you want to customize the ripple effect for a view, you need to create a new XML file inside the drawable directory.

Here are a few examples:

Example 1: An unbounded ripple

<ripple xmlns:android="http://schemas.android.com/apk/res/android"
    android:color="#ffff0000" />

Example 2: Ripple with mask and background color

<ripple android:color="#777777"
    xmlns:android="http://schemas.android.com/apk/res/android">
    <item android:id="@android:id/mask"
        android:drawable="#ffff00" />
    <item android:drawable="@android:color/white"/>
</ripple>

If there is a view with a background already specified with a shape, corners and any other tags, to add a ripple to that view use a mask layer and set the ripple as the background of the view.

Example:

<?xml version="1.0" encoding="utf-8"?>
<ripple xmlns:android="http://schemas.android.com/apk/res/android"
    android:color="?android:attr/colorControlHighlight">
    <item android:id="@android:id/mask">
        <shape android:shape="rectangle">
            <solid android:color="#000000"/>
            <corners android:radius="25dp"/>
        </shape>
    </item>
    <item android:drawable="@drawable/rounded_corners" />
</ripple>

Example 3: Ripple on top of a drawable resource

<ripple xmlns:android="http://schemas.android.com/apk/res/android"
    android:color="#ff0000ff">
    <item android:drawable="@drawable/my_drawable" />
</ripple>

Usage: To attach your ripple xml file to any view, set it as background as follows (assuming your ripple file is named my_ripple.xml):

<View
    android:id="@+id/myViewId"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:background="@drawable/my_ripple" />

Selector: The ripple drawable can also be used in place of color state list selectors if your target version is v21 or above (you can also place the ripple selector in the drawable-v21 folder):

<!-- /drawable/button.xml: -->
<selector xmlns:android="http://schemas.android.com/apk/res/android">
    <item android:state_pressed="true" android:drawable="@drawable/button_pressed"/>
    <item android:drawable="@drawable/button_normal"/>
</selector>

<!-- /drawable-v21/button.xml: -->
<?xml version="1.0" encoding="utf-8"?>
<ripple xmlns:android="http://schemas.android.com/apk/res/android"
    android:color="?android:colorControlHighlight">
    <item android:drawable="@drawable/button_normal" />
</ripple>

In this case, the color of the default state of your view would be white and the pressed state would show the ripple drawable.

Point to note: Using ?android:colorControlHighlight will give the ripple the same color as the built-in ripples in your app.

To change just the ripple color, you can customize the color android:colorControlHighlight in your theme like so:

<?xml version="1.0" encoding="utf-8"?>
<resources>
    <style name="AppTheme" parent="android:Theme.Material.Light.DarkActionBar">
        <item name="android:colorControlHighlight">@color/your_custom_color</item>
    </style>
</resources>

and then use this theme in your activities, etc. The effect would be like the image below:

(Image courtesy: http://blog.csdn.net/a396901990/article/details/40187203)

Section 37.5: Adding a TabLayout

TabLayout provides a horizontal layout to display tabs, and is commonly used in conjunction with a ViewPager.
Make sure the following dependency is added to your app's build.gradle file under dependencies:

compile 'com.android.support:design:25.3.1'

Now you can add items to a TabLayout in your layout using the TabItem class.

For example:

<android.support.design.widget.TabLayout
    android:layout_height="wrap_content"
    android:layout_width="match_parent"
    android:id="@+id/tabLayout">

    <android.support.design.widget.TabItem
        android:text="@string/tab_text_1"
        android:icon="@drawable/ic_tab_1"/>

    <android.support.design.widget.TabItem
        android:text="@string/tab_text_2"
        android:icon="@drawable/ic_tab_2"/>

</android.support.design.widget.TabLayout>

Add an OnTabSelectedListener to be notified when a tab in the TabLayout is selected/unselected/reselected:

TabLayout tabLayout = (TabLayout) findViewById(R.id.tabLayout);
tabLayout.addOnTabSelectedListener(new TabLayout.OnTabSelectedListener() {
    @Override
    public void onTabSelected(TabLayout.Tab tab) {
        int position = tab.getPosition();
        // Switch to view for this tab
    }

    @Override
    public void onTabUnselected(TabLayout.Tab tab) {
    }

    @Override
    public void onTabReselected(TabLayout.Tab tab) {
    }
});

Tabs can also be added to/removed from the TabLayout programmatically:

TabLayout.Tab tab = tabLayout.newTab();
tab.setText(R.string.tab_text_1);
tab.setIcon(R.drawable.ic_tab_1);

tabLayout.addTab(tab);
tabLayout.removeTab(tab);
tabLayout.removeTabAt(0);
tabLayout.removeAllTabs();

TabLayout has two modes, fixed and scrollable:

tabLayout.setTabMode(TabLayout.MODE_FIXED);
tabLayout.setTabMode(TabLayout.MODE_SCROLLABLE);

These can also be applied in XML:

<android.support.design.widget.TabLayout
    android:id="@+id/tabLayout"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    app:tabMode="fixed|scrollable" />

Note: the TabLayout modes are mutually exclusive, meaning only one can be active at a time.

The tab indicator color is the accent color defined for your Material Design theme. You can override this color by defining a custom style in styles.xml and then applying the style to your TabLayout:

<style name="MyCustomTabLayoutStyle" parent="Widget.Design.TabLayout">
    <item name="tabIndicatorColor">@color/your_color</item>
</style>

Then you can apply the style to the view using:

<android.support.design.widget.TabLayout
    android:id="@+id/tabs"
    style="@style/MyCustomTabLayoutStyle"
    android:layout_width="match_parent"
    android:layout_height="wrap_content">
</android.support.design.widget.TabLayout>

Section 37.6: Bottom Sheets in Design Support Library

Bottom sheets slide up from the bottom of the screen to reveal more content. They were added in v25.1.0 of the Android Support Library and work on all Android versions the support library covers.

Make sure the following dependency is added to your app's build.gradle file under dependencies:

compile 'com.android.support:design:25.3.1'

Persistent Bottom Sheets

You can achieve a Persistent Bottom Sheet by attaching a BottomSheetBehavior to a child View of a CoordinatorLayout:

<android.support.design.widget.CoordinatorLayout >

    <!-- ..... -->

    <LinearLayout
        android:id="@+id/bottom_sheet"
        android:elevation="4dp"
        android:minHeight="120dp"
        app:behavior_peekHeight="120dp"
        ...
        app:layout_behavior="android.support.design.widget.BottomSheetBehavior">

        <!-- ..... -->
    </LinearLayout>
</android.support.design.widget.CoordinatorLayout>

Then in your code you can create a reference using:

// The View with the BottomSheetBehavior
View bottomSheet = coordinatorLayout.findViewById(R.id.bottom_sheet);
BottomSheetBehavior mBottomSheetBehavior = BottomSheetBehavior.from(bottomSheet);

You can set the state of your BottomSheetBehavior using the setState() method:

mBottomSheetBehavior.setState(BottomSheetBehavior.STATE_EXPANDED);

You can use one of these states:

STATE_COLLAPSED: this collapsed state is the default and shows just a portion of the layout along the bottom. The height can be controlled with the app:behavior_peekHeight attribute (defaults to 0).

STATE_EXPANDED: the fully expanded state of the bottom sheet, where either the whole bottom sheet is visible (if its height is less than the containing CoordinatorLayout) or the entire CoordinatorLayout is filled.

STATE_HIDDEN: disabled by default (and enabled with the app:behavior_hideable attribute); enabling this allows users to swipe down on the bottom sheet to completely hide it.

Further, to open or close the bottom sheet on click of a View of your choice, a Button let's say, here is how to toggle the sheet behavior and update the view:

mButton = (Button) findViewById(R.id.button_2);
// On Button click we monitor the state of the sheet
mButton.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View view) {
        if (mBottomSheetBehavior.getState() == BottomSheetBehavior.STATE_EXPANDED) {
            // If expanded, then collapse it (setting it to Peek mode)
            mBottomSheetBehavior.setState(BottomSheetBehavior.STATE_COLLAPSED);
            mButton.setText(R.string.button2_hide);
        } else if (mBottomSheetBehavior.getState() == BottomSheetBehavior.STATE_COLLAPSED) {
            // If collapsed, then hide it completely
            mBottomSheetBehavior.setState(BottomSheetBehavior.STATE_HIDDEN);
            mButton.setText(R.string.button2);
        } else if (mBottomSheetBehavior.getState() == BottomSheetBehavior.STATE_HIDDEN) {
            // If hidden, then collapse or expand, as the need be
            mBottomSheetBehavior.setState(BottomSheetBehavior.STATE_EXPANDED);
            mButton.setText(R.string.button2_peek);
        }
    }
});

The BottomSheetBehavior also has a feature where the user can interact with the sheet by dragging it up or down. In that case, we might not be able to update the dependent View (like the button above) when the sheet state has changed. To receive callbacks of state changes, you can add a BottomSheetCallback to listen to user swipe events:

mBottomSheetBehavior.setBottomSheetCallback(new BottomSheetCallback() {
    @Override
    public void onStateChanged(@NonNull View bottomSheet, int newState) {
        // React to state change and notify views of the current state
    }

    @Override
    public void onSlide(@NonNull View bottomSheet, float slideOffset) {
        // React to dragging events and animate views or transparency of dependent views
    }
});

And if you only want your bottom sheet to toggle between the COLLAPSED and EXPANDED states and never hide, use:

mBottomSheetBehavior.setHideable(false);

Bottom Sheet DialogFragment

You can also display a BottomSheetDialogFragment in place of a View in the bottom sheet. To do this, you first need to create a new class that extends BottomSheetDialogFragment.

Within the setupDialog() method, you can inflate a new layout file and retrieve the BottomSheetBehavior of the container view in your Activity.
Once you have the behavior, you can create and associate a BottomSheetCallback with it to dismiss the Fragment when the sheet is hidden.

public class BottomSheetDialogFragmentExample extends BottomSheetDialogFragment {

    private BottomSheetBehavior.BottomSheetCallback mBottomSheetBehaviorCallback = new BottomSheetBehavior.BottomSheetCallback() {

        @Override
        public void onStateChanged(@NonNull View bottomSheet, int newState) {
            if (newState == BottomSheetBehavior.STATE_HIDDEN) {
                dismiss();
            }
        }

        @Override
        public void onSlide(@NonNull View bottomSheet, float slideOffset) {
        }
    };

    @Override
    public void setupDialog(Dialog dialog, int style) {
        super.setupDialog(dialog, style);
        View contentView = View.inflate(getContext(), R.layout.fragment_bottom_sheet, null);
        dialog.setContentView(contentView);

        CoordinatorLayout.LayoutParams params = (CoordinatorLayout.LayoutParams) ((View) contentView.getParent()).getLayoutParams();
        CoordinatorLayout.Behavior behavior = params.getBehavior();

        if (behavior != null && behavior instanceof BottomSheetBehavior) {
            ((BottomSheetBehavior) behavior).setBottomSheetCallback(mBottomSheetBehaviorCallback);
        }
    }
}

Finally, you can call show() on an instance of your Fragment to display it in the bottom sheet:

BottomSheetDialogFragment bottomSheetDialogFragment = new BottomSheetDialogFragmentExample();
bottomSheetDialogFragment.show(getSupportFragmentManager(), bottomSheetDialogFragment.getTag());

You can find more details in the dedicated topic.

Section 37.7: Apply an AppCompat theme

The AppCompat support library provides themes to build apps with the Material Design specification. A theme with a parent of Theme.AppCompat is also required for an Activity to extend AppCompatActivity.

The first step is to customize your theme's color palette to automatically colorize your app. In your app's res/styles.xml you can define:

<!-- inherit from the AppCompat theme -->
<style name="AppTheme" parent="Theme.AppCompat">
    <!-- your app branding color for the app bar -->
    <item name="colorPrimary">#2196f3</item>
    <!-- darker variant for the status bar and contextual app bars -->
    <item name="colorPrimaryDark">#1976d2</item>
    <!-- theme UI controls like checkboxes and text fields -->
    <item name="colorAccent">#f44336</item>
</style>

Instead of Theme.AppCompat, which has a dark background, you can also use Theme.AppCompat.Light or Theme.AppCompat.Light.DarkActionBar.

You can customize the theme with your own colours. Good choices are in the Material design specification colour chart and Material Palette. The "500" colours are good choices for primary (blue 500 in this example); choose "700" of the same hue for the dark one, and a shade from a different hue as the accent colour. The primary colour is used for your app's toolbar and its entry in the overview (recent apps) screen, the darker variant to tint the status bar, and the accent colour to highlight some controls.

After creating this theme, apply it to your app in the AndroidManifest.xml, and also apply the theme to any particular activity. This is useful for applying an AppTheme.NoActionBar theme, which lets you implement non-default toolbar configurations.

<application android:theme="@style/AppTheme"
    ...>
    <activity
        android:name=".MainActivity"
        android:theme="@style/AppTheme" />
</application>

You can also apply themes to individual Views using android:theme and a ThemeOverlay theme.
For example with a Toolbar:

<android.support.v7.widget.Toolbar
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:background="?attr/colorPrimary"
    android:theme="@style/ThemeOverlay.AppCompat.Dark.ActionBar" />

or a Button:

<Button
    style="@style/Widget.AppCompat.Button.Colored"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:theme="@style/MyButtonTheme"/>

<!-- res/values/themes.xml -->
<style name="MyButtonTheme" parent="ThemeOverlay.AppCompat.Light">
    <item name="colorAccent">@color/my_color</item>
</style>

Section 37.8: Add a Snackbar

One of the main features in Material Design is the addition of a Snackbar, which in theory replaces the previous Toast. As per the Android documentation:

Snackbars contain a single line of text directly related to the operation performed. They may contain a text action, but no icons. Toasts are primarily used for system messaging. They also display at the bottom of the screen, but may not be swiped off-screen.

Toasts can still be used in Android to display messages to users; however, if you have decided to opt for material design usage in your app, it is recommended that you actually use a snackbar. Instead of being displayed as an overlay on your screen, a Snackbar pops up from the bottom.

Here is how it is done:

Snackbar snackbar = Snackbar
        .make(coordinatorLayout, "Here is your new Snackbar", Snackbar.LENGTH_LONG);
snackbar.show();

As for the length of time to show the Snackbar, we have options similar to the ones offered by a Toast, or we could set a custom duration in milliseconds:

LENGTH_SHORT
LENGTH_LONG
LENGTH_INDEFINITE
setDuration() (since version 22.2.1)

You can also add dynamic features to your Snackbar, such as an action callback or a custom color. However, do pay attention to the design guidelines offered by Android when customising a Snackbar.

Implementing the Snackbar has one limitation, however: the parent layout of the view you are going to implement a Snackbar in needs to be a CoordinatorLayout. This is so that the actual popup from the bottom can be made.

This is how to define a CoordinatorLayout in your layout xml file:

<android.support.design.widget.CoordinatorLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:id="@+id/coordinatorLayout"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".MainActivity">

    <!-- any other widgets in your layout go here -->

</android.support.design.widget.CoordinatorLayout>

The CoordinatorLayout then needs to be defined in your Activity's onCreate method, and then used when creating the Snackbar itself.

For more information about the Snackbar, please check the official documentation or the dedicated topic in the documentation.

Section 37.9: Add a Navigation Drawer

Navigation Drawers are used to navigate to top-level destinations in an app. Make sure that you have added the design support library in your build.gradle file under dependencies:

dependencies {
    // ...
    compile 'com.android.support:design:25.3.1'
}

Next, add the DrawerLayout and NavigationView in your XML layout resource file. The DrawerLayout is just a fancy container that allows the NavigationView, the actual navigation drawer, to slide out from the left or right of the screen. Note: for mobile devices, the standard drawer size is 320dp.
<!-- res/layout/activity_main.xml -->
<android.support.v4.widget.DrawerLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:id="@+id/navigation_drawer_layout"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:fitsSystemWindows="true"
    tools:openDrawer="start">
    <!-- You can use "end" to open the drawer from the right side -->

    <android.support.design.widget.CoordinatorLayout
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:fitsSystemWindows="true">

        <android.support.design.widget.AppBarLayout
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:theme="@style/AppTheme.AppBarOverlay">

            <android.support.v7.widget.Toolbar
                android:id="@+id/toolbar"
                android:layout_width="match_parent"
                android:layout_height="?attr/actionBarSize"
                android:background="?attr/colorPrimary"
                app:popupTheme="@style/AppTheme.PopupOverlay" />

        </android.support.design.widget.AppBarLayout>

    </android.support.design.widget.CoordinatorLayout>

    <android.support.design.widget.NavigationView
        android:id="@+id/navigation_drawer"
        android:layout_width="320dp"
        android:layout_height="match_parent"
        android:layout_gravity="start"
        android:fitsSystemWindows="true"
        app:headerLayout="@layout/drawer_header"
        app:menu="@menu/navigation_menu" />

</android.support.v4.widget.DrawerLayout>

Now, if you wish, create a header file that will serve as the top of your navigation drawer. This is used to give a much more elegant look to the drawer.

<!-- res/layout/drawer_header.xml -->
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="190dp">

    <ImageView
        android:id="@+id/header_image"
        android:layout_width="140dp"
        android:layout_height="120dp"
        android:layout_centerInParent="true"
        android:scaleType="centerCrop"
        android:src="@drawable/image" />

    <TextView
        android:id="@+id/header_text_view"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_below="@+id/header_image"
        android:text="<NAME>"
        android:textSize="20sp" />

</RelativeLayout>

It is referenced in the NavigationView tag in the app:headerLayout="@layout/drawer_header" attribute. This app:headerLayout inflates the specified layout into the header automatically. This can alternatively be done at runtime with:

// Lookup navigation view
NavigationView navigationView = (NavigationView) findViewById(R.id.navigation_drawer);
// Inflate the header view at runtime
View headerLayout = navigationView.inflateHeaderView(R.layout.drawer_header);

To automatically populate your navigation drawer with material design-compliant navigation items, create a menu file and add items as needed. Note: while icons for items aren't required, they are suggested in the Material Design specification.

It is referenced in the NavigationView tag in the app:menu="@menu/navigation_menu" attribute.
<!-- res/menu/menu_drawer.xml -->
<menu xmlns:android="http://schemas.android.com/apk/res/android">
    <item
        android:id="@+id/nav_item_1"
        android:title="Item #1"
        android:icon="@drawable/ic_nav_1" />

    <item
        android:id="@+id/nav_item_2"
        android:title="Item #2"
        android:icon="@drawable/ic_nav_2" />

    <item
        android:id="@+id/nav_item_3"
        android:title="Item #3"
        android:icon="@drawable/ic_nav_3" />

    <item
        android:id="@+id/nav_item_4"
        android:title="Item #4"
        android:icon="@drawable/ic_nav_4" />
</menu>

To separate items into groups, put them into a <menu> nested in another <item> with an android:title attribute, or wrap them with the <group> tag.

Now that the layout is done, move on to the Activity code:

// Find the navigation view
NavigationView navigationView = (NavigationView) findViewById(R.id.navigation_drawer);
navigationView.setNavigationItemSelectedListener(new NavigationView.OnNavigationItemSelectedListener() {
    @Override
    public boolean onNavigationItemSelected(MenuItem item) {
        // Get item ID to determine what to do on user click
        int itemId = item.getItemId();
        // Respond to Navigation Drawer selections with a new Intent
        startActivity(new Intent(getApplicationContext(), OtherActivity.class));
        return true;
    }
});

DrawerLayout drawer = (DrawerLayout) findViewById(R.id.navigation_drawer_layout);
// Necessary for automatically animating the navigation drawer upon open and close
ActionBarDrawerToggle toggle = new ActionBarDrawerToggle(this, drawer,
        R.string.drawer_open, R.string.drawer_close);
// The two strings are content descriptions that are not displayed to the user;
// be sure to define them in your strings.xml file.
drawer.addDrawerListener(toggle);
toggle.syncState();

You can now do whatever you want in the header view of the NavigationView:

View headerView = navigationView.getHeaderView(0);
TextView headerTextView = (TextView) headerView.findViewById(R.id.header_text_view);
ImageView headerImageView = (ImageView) headerView.findViewById(R.id.header_image);

// Set navigation header text
headerTextView.setText("<NAME>");
// Set navigation header image
headerImageView.setImageResource(R.drawable.header_image);

The header view behaves like any other View, so once you use findViewById() and add some other Views to your layout file, you can set the properties of anything in it.

You can find more details and examples in the dedicated topic.

Section 37.10: How to use TextInputLayout

Make sure the following dependency is added to your app's build.gradle file under dependencies:

compile 'com.android.support:design:25.3.1'

Show the hint from an EditText as a floating label when a value is entered.
<android.support.design.widget.TextInputLayout
    android:layout_width="match_parent"
    android:layout_height="wrap_content">

    <android.support.design.widget.TextInputEditText
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:hint="@string/form_username"/>

</android.support.design.widget.TextInputLayout>

For displaying the password visibility (eye) icon with TextInputLayout, we can make use of the following code:

<android.support.design.widget.TextInputLayout
    android:id="@+id/input_layout_current_password"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    app:passwordToggleEnabled="true">

    <android.support.design.widget.TextInputEditText
        android:id="@+id/current_password"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:hint="@string/current_password"
        android:inputType="textPassword" />

</android.support.design.widget.TextInputLayout>

where the app:passwordToggleEnabled="true" and android:inputType="textPassword" parameters are required. app should use the namespace xmlns:app="http://schemas.android.com/apk/res-auto".

You can find more details and examples in the dedicated topic.

Chapter 38: Resources

Section 38.1: Define colors

Colors are usually stored in a resource file named colors.xml in the /res/values/ folder. They are defined by <color> elements:

<?xml version="1.0" encoding="utf-8"?>
<resources>
    <color name="colorPrimary">#3F51B5</color>
    <color name="colorPrimaryDark">#303F9F</color>
    <color name="colorAccent">#FF4081</color>
    <color name="blackOverlay">#66000000</color>
</resources>

Colors are represented by hexadecimal color values for each color channel (0 - FF) in one of the formats:

#RGB
#ARGB
#RRGGBB
#AARRGGBB

Legend

A - alpha channel - 0 value is fully transparent, FF value is opaque
R - red channel
G - green channel
B - blue channel

Defined colors can be used in XML with the following syntax: @color/name_of_the_color

For example:

<RelativeLayout
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:background="@color/blackOverlay">

Using colors in code

These examples assume this is an Activity reference. A Context reference can be used in its place as well.

Version ≥ 6.0

int color = ContextCompat.getColor(this, R.color.black_overlay);
view.setBackgroundColor(color);

Version < 6.0

int color = this.getResources().getColor(R.color.black_overlay);
view.setBackgroundColor(color);

In the above declaration, colorPrimary, colorPrimaryDark and colorAccent are used to define the Material design colors used in a custom Android theme in styles.xml. They are automatically added when a new project is created with Android Studio.

Section 38.2: Color Transparency (Alpha) Level

Hex Opacity Values

| Alpha(%) | Hex Value |
|   100%   |    FF     |
|    95%   |    F2     |
|    90%   |    E6     |
|    85%   |    D9     |
|    80%   |    CC     |
|    75%   |    BF     |
|    70%   |    B3     |
|    65%   |    A6     |
|    60%   |    99     |
|    55%   |    8C     |
|    50%   |    80     |
|    45%   |    73     |
|    40%   |    66     |
|    35%   |    59     |
|    30%   |    4D     |
|    25%   |    40     |
|    20%   |    33     |
|    15%   |    26     |
|    10%   |    1A     |
|     5%   |    0D     |
|     0%   |    00     |

If you want to set 45% alpha to the red color:
<color name="red_with_alpha_45">#73FF0000</color>

The hex value for red is #FF0000. Adding 73 as a prefix gives 45% opacity: #73FF0000.

Section 38.3: Define String Plurals

To differentiate between plural and singular strings, you can define a plural in your strings.xml file and list the different quantities, as shown in the example below:

<?xml version="1.0" encoding="utf-8"?>
<resources>
    <plurals name="hello_people">
        <item quantity="one">Hello to %d person</item>
        <item quantity="other">Hello to %d people</item>
    </plurals>
</resources>

This definition can be accessed from Java code by using the getQuantityString() method of the Resources class, as shown in the following example:

getResources().getQuantityString(R.plurals.hello_people, 3, 3);

Here, the first parameter R.plurals.hello_people is the resource name. The second parameter (3 in this example) is used to pick the correct quantity string. The third parameter (also 3 in this example) is the format argument that will be used for substituting the format specifier %d.

Possible quantity values (listed in alphabetical order) are:

few
many
one
other
two
zero

It is important to note that not all locales support every denomination of quantity. For example, the Chinese language does not have a concept of one item. English does not have a zero item, as it is grammatically the same as other. Unsupported instances of quantity will be flagged by the IDE as Lint warnings, but won't cause compilation errors if they are used.

Section 38.4: Define strings

Strings are typically stored in the resource file strings.xml. They are defined using a <string> XML element.

The purpose of strings.xml is to allow internationalisation. You can define a strings.xml for each language ISO code. Thus, when the system looks for the string 'app_name', it first checks the XML file corresponding to the current language, and if it is not found, looks for the entry in the default strings.xml file. This means you can choose to localise only some of your strings while not others.

/res/values/strings.xml

<?xml version="1.0" encoding="utf-8"?>
<resources>
    <string name="app_name">Hello World App</string>
    <string name="hello_world">Hello World!</string>
</resources>

Once a string is defined in an XML resource file, it can be used by other parts of the app. An app's XML project files can use a <string> element by referring to @string/string_name. For example, an app's manifest (/manifests/AndroidManifest.xml) file includes the following line by default in Android Studio:

android:label="@string/app_name"

This tells Android to look for a <string> resource called "app_name" to use as the name for the app when it is installed or displayed in a launcher.

Another time you would use a <string> resource from an XML file in Android would be in a layout file. For example, the following represents a TextView which displays the hello_world string we defined earlier:

<TextView
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:text="@string/hello_world"/>

You can also access <string> resources from the Java portion of your app. To recall our same hello_world string from above within an Activity class, use:

String helloWorld = getString(R.string.hello_world);

Section 38.5: Define dimensions

Dimensions are typically stored in a resource file named dimens.xml. They are defined using a <dimen> element.
res/values/dimens.xml

<?xml version="1.0" encoding="utf-8"?>
<resources>
    <dimen name="small_padding">5dp</dimen>
    <dimen name="medium_padding">10dp</dimen>
    <dimen name="large_padding">20dp</dimen>

    <dimen name="small_font">14sp</dimen>
    <dimen name="medium_font">16sp</dimen>
    <dimen name="large_font">20sp</dimen>
</resources>

You can use different units:

sp : Scale-independent Pixels. For fonts.
dp : Density-independent Pixels. For everything else.
pt : Points
px : Pixels
mm : Millimeters
in : Inches

Dimensions can now be referenced in XML with the syntax @dimen/name_of_the_dimension.

For example:

<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:padding="@dimen/large_padding">
</RelativeLayout>

Section 38.6: String formatting in strings.xml

Defining strings in the strings.xml file also allows for string formatting. The only caveat is that the string will need to be dealt with in code as below, versus simply attaching it to a layout.

<string name="welcome_trainer">Hello Pokémon Trainer, %1$s! You have caught %2$d Pokémon.</string>

String welcomePokemonTrainerText = getString(R.string.welcome_trainer, trainerName, pokemonCount);

In the above example, %1$s works as follows: '%' separates it from normal characters, '1' denotes the first parameter, '$' is used as a separator between the parameter number and the type, and 's' denotes the string type ('d' is used for integers).

Note that getString() is a method of Context or Resources, i.e. you can use it directly within an Activity instance, or else you may use getActivity().getString() or getContext().getString() respectively.

Section 38.7: Define integer array

In order to define an integer array, write in a resources file

res/values/filename.xml

<integer-array name="integer_array_name">
    <item>integer_value</item>
    <item>@integer/integer_id</item>
</integer-array>

for example

res/values/arrays.xml

<?xml version="1.0" encoding="utf-8"?>
<resources>
    <integer-array name="fibo">
        <item>@integer/zero</item>
        <item>@integer/one</item>
        <item>@integer/one</item>
        <item>@integer/two</item>
        <item>@integer/three</item>
        <item>@integer/five</item>
    </integer-array>
</resources>

and use it from Java like:

int[] values = getResources().getIntArray(R.array.fibo);
Log.i("TAG", Arrays.toString(values));

Output:

I/TAG: [0, 1, 1, 2, 3, 5]

Section 38.8: Define a color state list

Color state lists can be used as colors, but will change depending on the state of the view they are used for.

To define one, create a resource file in res/color/foo.xml:

<?xml version="1.0" encoding="utf-8"?>
<selector xmlns:android="http://schemas.android.com/apk/res/android">
    <item android:color="#888888" android:state_enabled="false"/>
    <item android:color="@color/lightGray" android:state_selected="false"/>
    <item android:color="@android:color/white" />
</selector>

Items are evaluated in the order they are defined, and the first item whose specified states match the current state of the view is used. So it's good practice to specify a catch-all at the end, without any state selectors specified.

Each item can either use a color literal, or reference a color defined somewhere else.

Section 38.9: 9 Patches

9 Patches are stretchable images in which the areas that can be stretched are defined by black markers on a transparent border. There is a great tutorial here. Despite being so old, it's still so valuable and it helped many of us to deeply understand the 9-patch gear.
Unfortunately, that page was recently put down for a while (it's currently up again). Hence the need to have a physical copy of that page for Android developers on our reliable server(s). Here it is.

A SIMPLE GUIDE TO 9-PATCH FOR ANDROID UI

May 18, 2011

While I was working on my first Android app, I found 9-patch (aka 9.png) to be confusing and poorly documented. After a little while, I finally picked up on how it works and decided to throw together something to help others figure it out.

Basically, 9-patch uses png transparency to do an advanced form of 9-slice or scale9. The guides are straight, 1-pixel black lines drawn on the edge of your image that define the scaling and fill of your image. By naming your image file name.9.png, Android will recognize the 9.png format and use the black guides to scale and fill your bitmaps.

Here's a basic guide map:

As you can see, you have guides on each side of your image. The TOP and LEFT guides are for scaling your image (i.e. 9-slice), while the RIGHT and BOTTOM guides define the fill area.

The black guide lines are cut off/removed from your image; they won't show in the app. Guides must only be one pixel wide, so if you want a 48×48 button, your png will actually be 50×50. Anything thicker than one pixel will remain part of your image. (My examples have 4-pixel wide guides for better visibility. They should really be only 1 pixel.)

Your guides must be solid black (#000000). Even a slight difference in color (#000001) or alpha will cause it to fail and stretch normally. This failure won't be obvious either*, it fails silently! Yes. Really. Now you know.

Also, you should keep in mind that the remaining area of the one-pixel outline must be completely transparent. This includes the four corners of the image; those should always be clear. This can be a bigger problem than you realize. For example, if you scale an image in Photoshop it will add anti-aliased pixels, which may include almost-invisible pixels that will also cause it to fail*. If you must scale in Photoshop, use the Nearest Neighbor setting in the Resample Image pulldown menu (at the bottom of the Image Size pop-up menu) to keep sharp edges on your guides.

*(updated 1/2012) This is actually fixed in the latest dev kit. Previously it would manifest itself as all of your other images and resources suddenly breaking, not the actually broken 9-patch image.

The TOP and LEFT guides are used to define the scalable portion of your image: LEFT for scaling height, TOP for scaling width. Using a button image as an example, this means the button can stretch horizontally and vertically within the black portion and everything else, such as the corners, will remain the same size. This allows you to have buttons that can scale to any size and maintain a uniform look.

It's important to note that 9-patch images don't scale down; they only scale up. So it's best to start as small as possible.

Also, you can leave out portions in the middle of the scale line. So for example, if you have a button with a sharp glossy edge across the middle, you can leave out a few pixels in the middle of the LEFT guide. The center horizontal axis of your image won't scale, just the parts above and below it, so your sharp gloss won't get anti-aliased or fuzzy.

Fill area guides are optional and provide a way to define the area for stuff like your text label. Fill determines how much room there is within your image to place text, or an icon, or other things.
9-patch isn't just for buttons; it works for background images as well. The above button & label example is exaggerated simply to explain the idea of fill; the label isn't completely accurate.

To be honest, I haven't experienced how Android does multi-line labels, since a button label is usually a single row of text.

Finally, here's a good demonstration of how scale and fill guides can vary, such as a LinearLayout with a background image & fully rounded sides:

With this example, the LEFT guide isn't used, but we're still required to have a guide. The background image doesn't scale vertically; it just scales horizontally (based on the TOP guide). Looking at the fill guides, the RIGHT and BOTTOM guides extend beyond where they meet the image's curved edges. This allows me to place my round buttons close to the edges of the background for a tight, fitted look.

So that's it. 9-patch is super easy, once you get it. It's not a perfect way to do scaling, but the fill-area and multi-line scale-guides do offer more flexibility than traditional 9-slice and scale9. Give it a try and you'll figure it out quickly.

Section 38.10: Getting resources without "deprecated" warnings

When using Android API 23 or higher, deprecation warnings often appear for the old ways of getting resources. They are caused by a structural change of the Android API regarding getting resources.

Now the function:

public int getColor(@ColorRes int id, @Nullable Theme theme) throws NotFoundException

should be used. But the android.support.v4 library has another solution. Add the following dependency to the build.gradle file:

com.android.support:support-v4:24.0.0

Then all methods from the support library are available:

int color = ContextCompat.getColor(context, R.color.colorPrimaryDark);
ContextCompat.getDrawable(context, R.drawable.btn_check);
ContextCompat.getColorStateList(context, R.color.colorPrimary);
DrawableCompat.setTint(drawable, color);

Moreover, more methods from the support library can be used:

ViewCompat.setElevation(textView, 1F);
ViewCompat.animate(textView);
TextViewCompat.setTextAppearance(textView, R.style.AppThemeTextStyle);
...

Section 38.11: Working with the strings.xml file

A string resource provides text strings for your application, with optional text styling and formatting. There are three types of resources that can provide your application with strings:

String

XML resource that provides a single string.

Syntax:

<?xml version="1.0" encoding="utf-8"?>
<resources>
    <string name="string_name">text_string</string>
</resources>

And to use this string in a layout:

<TextView
    android:layout_width="fill_parent"
    android:layout_height="wrap_content"
    android:text="@string/string_name" />

String Array

XML resource that provides an array of strings.

Syntax:

<resources>
    <string-array name="planets_array">
        <item>Mercury</item>
        <item>Venus</item>
        <item>Earth</item>
        <item>Mars</item>
    </string-array>
</resources>

Usage:

Resources res = getResources();
String[] planets = res.getStringArray(R.array.planets_array);

Quantity Strings (Plurals)

XML resource that carries different strings for pluralization.
Syntax:

<?xml version="1.0" encoding="utf-8"?>
<resources>
    <plurals name="plural_name">
        <item quantity=["zero" | "one" | "two" | "few" | "many" | "other"]>text_string</item>
    </plurals>
</resources>

Usage:

int count = getNumberOfsongsAvailable();
Resources res = getResources();
String songsFound = res.getQuantityString(R.plurals.plural_name, count, count);

Section 38.12: Define string array

In order to define a string array, write in a resources file

res/values/filename.xml

<string-array name="string_array_name">
    <item>text_string</item>
    <item>@string/string_id</item>
</string-array>

for example

res/values/arrays.xml

<?xml version="1.0" encoding="utf-8"?>
<resources>
    <string-array name="string_array_example">
        <item>@string/app_name</item>
        <item>@string/hello_world</item>
    </string-array>
</resources>

and use it from Java like:

String[] strings = getResources().getStringArray(R.array.string_array_example);
Log.i("TAG", Arrays.toString(strings));

Output:

I/TAG: [Hello World App, Hello World!]

Section 38.13: Define integers

Integers are typically stored in a resource file named integers.xml, but the file name can be chosen arbitrarily. Each integer is defined by using an <integer> element, as shown in the following file:

res/values/integers.xml

<?xml version="1.0" encoding="utf-8"?>
<resources>
    <integer name="max">100</integer>
</resources>

Integers can now be referenced in XML with the syntax @integer/name_of_the_integer, as shown in the following example:

<ProgressBar
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:max="@integer/max"/>

Section 38.14: Define a menu resource and use it inside Activity/Fragment

Define a menu in res/menu:

<?xml version="1.0" encoding="utf-8"?>
<menu
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto">

    <item
        android:id="@+id/first_item_id"
        android:orderInCategory="100"
        android:title="@string/first_item_string"
        android:icon="@drawable/first_item_icon"
        app:showAsAction="ifRoom"/>

    <item
        android:id="@+id/second_item_id"
        android:orderInCategory="110"
        android:title="@string/second_item_string"
        android:icon="@drawable/second_item_icon"
        app:showAsAction="ifRoom"/>

</menu>

For more configuration options refer to: Menu resource

Inside an Activity:

@Override
public boolean onCreateOptionsMenu(Menu menu) {
    // Override for inflating the menu resource
    getMenuInflater().inflate(R.menu.menu_resource_id, menu);
    return super.onCreateOptionsMenu(menu);
}

@Override
public boolean onPrepareOptionsMenu(Menu menu) {
    // Override for preparing items (setting visibility, changing text, changing icons...)
    return super.onPrepareOptionsMenu(menu);
}

@Override
public boolean onOptionsItemSelected(MenuItem item) {
    // Override for handling items
    int menuItemId = item.getItemId();
    switch (menuItemId) {
        case R.id.first_item_id:
            return true; // return true if the selection was handled
    }
    return super.onOptionsItemSelected(item);
}

For invoking the methods above while the view is showing, call invalidateOptionsMenu() (from a Fragment, call getActivity().invalidateOptionsMenu()).

Inside a Fragment one additional call is needed, and the menu callbacks receive a MenuInflater parameter (e.g. public void onCreateOptionsMenu(Menu menu, MenuInflater inflater)):

@Nullable
@Override
public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) {
    setHasOptionsMenu(true);
    return super.onCreateView(inflater, container, savedInstanceState);
}

Chapter 39: Data Binding Library

Section 39.1: Basic text field binding

Gradle (Module:app) Configuration

android {
    ....
    dataBinding {
        enabled = true
    }
}

Data model

public class Item {

    public String name;
    public String description;

    public Item(String name, String description) {
        this.name = name;
        this.description = description;
    }
}

Layout XML

The first step is wrapping your layout in a <layout> tag, adding a <data> element, and adding a <variable> element for your data model. Then you can bind XML attributes to fields in the data model using @{model.fieldname}, where model is the variable's name and fieldname is the field you want to access.

item_detail_activity.xml:

<?xml version="1.0" encoding="utf-8"?>
<layout xmlns:android="http://schemas.android.com/apk/res/android">

    <data>
        <variable name="item" type="com.example.Item"/>
    </data>

    <LinearLayout
        android:orientation="vertical"
        android:layout_width="match_parent"
        android:layout_height="match_parent">

        <TextView
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:text="@{item.name}"/>

        <TextView
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:text="@{item.description}"/>

    </LinearLayout>
</layout>

For each XML layout file properly configured with bindings, the Android Gradle plugin generates a corresponding binding class. Because we have a layout named item_detail_activity, the corresponding generated binding class is called ItemDetailActivityBinding.

This binding can then be used in an Activity like so:

public class ItemDetailActivity extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        ItemDetailActivityBinding binding = DataBindingUtil.setContentView(this, R.layout.item_detail_activity);
        Item item = new Item("Example item", "This is an example item.");
        binding.setItem(item);
    }
}

Section 39.2: Built-in two-way Data Binding

Two-way Data Binding supports the following attributes:

Element        | Properties
AbsListView    | android:selectedItemPosition
CalendarView   | android:date
CompoundButton | android:checked
DatePicker     | android:year, android:month, android:day
EditText       | android:text
NumberPicker   | android:value
RadioGroup     | android:checkedButton
RatingBar      | android:rating
SeekBar        | android:progress
TabHost        | android:currentTab
TextView       | android:text
TimePicker     | android:hour, android:minute
ToggleButton   | android:checked
Switch         | android:checked

Usage:

<layout ...>
    <data>
        <variable type="com.example.myapp.User" name="user"/>
    </data>
    <RelativeLayout ...>
        <EditText android:text="@={user.firstName}" .../>
    </RelativeLayout>
</layout>

Notice that the binding expression @={} has an additional =, which is necessary for two-way binding. It is not possible to use methods in two-way binding expressions.
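The example above never shows the User model behind @={user.firstName}. As a rough sketch (the class and field names here are assumptions derived from the binding expression, not part of the original example), a model that works with two-way binding could look as follows; a plain getter/setter pair is enough for view-to-model updates, while extending BaseObservable and notifying on change also pushes programmatic model changes back into the view:

import android.databinding.BaseObservable;
import android.databinding.Bindable;

// Hypothetical model for the @={user.firstName} expression above.
// BR is generated by the Data Binding compiler in your module's package
// (here that would be com.example.myapp.BR).
public class User extends BaseObservable {

    private String firstName;

    @Bindable
    public String getFirstName() {
        return firstName;
    }

    public void setFirstName(String firstName) {
        this.firstName = firstName;
        // Notifying here refreshes the bound EditText when the model changes in code
        notifyPropertyChanged(BR.firstName);
    }
}

With this in place, typing into the EditText calls setFirstName(), and calling setFirstName() from code updates the EditText.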
Section 39.3: Custom event using lambda expression

Define Interface

public interface ClickHandler {
    public void onButtonClick(User user);
}

Create Model class

public class User {

    private String name;

    public User(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}

Layout XML

<layout xmlns:android="http://schemas.android.com/apk/res/android">

    <data>
        <variable name="handler" type="com.example.ClickHandler"/>
        <variable name="user" type="com.example.User"/>
    </data>

    <RelativeLayout
        android:layout_width="match_parent"
        android:layout_height="match_parent">

        <Button
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:text="@{user.name}"
            android:onClick="@{() -> handler.onButtonClick(user)}"/>
    </RelativeLayout>
</layout>

Activity code:

public class MainActivity extends Activity implements ClickHandler {

    private ActivityMainBinding binding;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        binding = DataBindingUtil.setContentView(this, R.layout.activity_main);
        binding.setUser(new User("DataBinding User"));
        binding.setHandler(this);
    }

    @Override
    public void onButtonClick(User user) {
        Toast.makeText(MainActivity.this, "Welcome " + user.getName(), Toast.LENGTH_LONG).show();
    }
}

Some view listeners are not available in XML code but can be set in Java code; they can be bound with a custom binding.

Custom class

public class BindingUtil {

    @BindingAdapter({"bind:autoAdapter"})
    public static void setAdapter(AutoCompleteTextView view, ArrayAdapter<String> pArrayAdapter) {
        view.setAdapter(pArrayAdapter);
    }

    @BindingAdapter({"bind:onKeyListener"})
    public static void setOnKeyListener(AutoCompleteTextView view, View.OnKeyListener pOnKeyListener) {
        view.setOnKeyListener(pOnKeyListener);
    }
}

Handler class

public class Handler extends BaseObservable {

    private ArrayAdapter<String> roleAdapter;

    public ArrayAdapter<String> getRoleAdapter() {
        return roleAdapter;
    }

    public void setRoleAdapter(ArrayAdapter<String> pRoleAdapter) {
        roleAdapter = pRoleAdapter;
    }
}

XML

<layout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:bind="http://schemas.android.com/tools">

    <data>
        <variable name="handler" type="com.example.Handler" />
    </data>

    <LinearLayout
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:orientation="vertical">
Section 39.4: Default value in Data Binding

The Preview pane displays default values for data binding expressions, if provided. For example:

android:layout_height="@{@dimen/main_layout_height, default=wrap_content}"

It will use wrap_content while designing, i.e. in the preview pane; at runtime the bound dimension is applied.

Another example is:

android:text="@{user.name, default=`Preview Text`}"

It will display Preview Text in the preview pane, but when you run it on a device/emulator, the actual text bound to it will be displayed.

Section 39.5: Databinding in Dialog

public void doSomething() {
    DialogTestBinding binding = DataBindingUtil
            .inflate(LayoutInflater.from(context), R.layout.dialog_test, null, false);

    Dialog dialog = new Dialog(context);
    dialog.setContentView(binding.getRoot());
    dialog.show();
}

Section 39.6: Binding with an accessor method

If your model has private fields with public accessor methods, the databinding library still allows you to access them in your view without using the full name of the accessor.

Data model

public class Item {
    private String name;

    public String getName() {
        return name;
    }
}

Layout XML

<?xml version="1.0" encoding="utf-8"?>
<layout xmlns:android="http://schemas.android.com/apk/res/android">
    <data>
        <variable name="item" type="com.example.Item"/>
    </data>
    <LinearLayout
        android:orientation="vertical"
        android:layout_width="match_parent"
        android:layout_height="match_parent">
        <!-- Since the "name" field is private on our data model,
             this binding will utilize the public getName() method instead. -->
        <TextView
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:text="@{item.name}"/>
    </LinearLayout>
</layout>

Section 39.7: Pass widget as reference in BindingAdapter

Layout XML (note the url variable and the res-auto namespace, both required for the app attributes below)

<?xml version="1.0" encoding="utf-8"?>
<layout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto">
    <data>
        <variable name="url" type="String"/>
    </data>
    <LinearLayout
        android:orientation="vertical"
        android:layout_width="match_parent"
        android:layout_height="match_parent">
        <ProgressBar
            android:id="@+id/progressBar"
            style="?android:attr/progressBarStyleSmall"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"/>
        <ImageView
            android:id="@+id/img"
            android:layout_width="match_parent"
            android:layout_height="100dp"
            app:imageUrl="@{url}"
            app:progressbar="@{progressBar}"/>
    </LinearLayout>
</layout>

BindingAdapter method

@BindingAdapter({"imageUrl", "progressbar"})
public static void loadImage(ImageView view, String imageUrl, ProgressBar progressBar) {
    Glide.with(view.getContext()).load(imageUrl)
            .listener(new RequestListener<String, GlideDrawable>() {
                @Override
                public boolean onException(Exception e, String model,
                        Target<GlideDrawable> target, boolean isFirstResource) {
                    return false;
                }

                @Override
                public boolean onResourceReady(GlideDrawable resource, String model,
                        Target<GlideDrawable> target, boolean isFromMemoryCache,
                        boolean isFirstResource) {
                    progressBar.setVisibility(View.GONE);
                    return false;
                }
            }).into(view);
}

Section 39.8: Click listener with Binding

Create an interface for the click handler:

public interface ClickHandler {
    public void onButtonClick(View v);
}

Layout XML

<?xml version="1.0" encoding="utf-8"?>
<layout xmlns:android="http://schemas.android.com/apk/res/android">
    <data>
        <variable name="handler" type="com.example.ClickHandler"/>
    </data>
    <RelativeLayout
        android:layout_width="match_parent"
        android:layout_height="match_parent">
        <Button
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:text="click me"
            android:onClick="@{handler.onButtonClick}"/>
    </RelativeLayout>
</layout>

Handle the event in your Activity:

public class MainActivity extends Activity implements ClickHandler {

    private ActivityMainBinding binding;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        binding = DataBindingUtil.setContentView(this, R.layout.activity_main);
        binding.setHandler(this);
    }

    @Override
    public void onButtonClick(View v) {
        Toast.makeText(MainActivity.this, "Button clicked", Toast.LENGTH_LONG).show();
    }
}
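Binding expressions also accept explicit listener method references with the :: operator; the referenced method's signature must match the listener's (here, View.OnClickListener). A sketch of the same button using that form:

<Button
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:text="click me"
    android:onClick="@{handler::onButtonClick}"/>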
Section 39.9: Data binding in RecyclerView Adapter

It's also possible to use data binding within your RecyclerView Adapter.

Data model

public class Item {
    private String name;

    public String getName() {
        return name;
    }
}

XML Layout (list_item.xml; the <layout> wrapper and the item variable are needed so that ListItemBinding is generated)

<layout xmlns:android="http://schemas.android.com/apk/res/android">
    <data>
        <variable name="item" type="com.example.Item"/>
    </data>
    <TextView
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="@{item.name}"/>
</layout>

Adapter class

public class ListItemAdapter extends RecyclerView.Adapter<RecyclerView.ViewHolder> {

    private Activity host;
    private List<Item> items;

    public ListItemAdapter(Activity activity, List<Item> items) {
        this.host = activity;
        this.items = items;
    }

    @Override
    public RecyclerView.ViewHolder onCreateViewHolder(ViewGroup parent, int viewType) {
        // inflate layout and retrieve binding
        ListItemBinding binding = DataBindingUtil.inflate(host.getLayoutInflater(),
                R.layout.list_item, parent, false);
        return new ItemViewHolder(binding);
    }

    @Override
    public void onBindViewHolder(RecyclerView.ViewHolder holder, int position) {
        Item item = items.get(position);
        ItemViewHolder itemViewHolder = (ItemViewHolder) holder;
        itemViewHolder.bindItem(item);
    }

    @Override
    public int getItemCount() {
        return items.size();
    }

    private static class ItemViewHolder extends RecyclerView.ViewHolder {

        ListItemBinding binding;

        ItemViewHolder(ListItemBinding binding) {
            super(binding.getRoot());
            this.binding = binding;
        }

        void bindItem(Item item) {
            binding.setItem(item);
            binding.executePendingBindings();
        }
    }
}

Section 39.10: Databinding in Fragment

Data model

public class Item {
    private String name;

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}

Layout XML

<?xml version="1.0" encoding="utf-8"?>
<layout xmlns:android="http://schemas.android.com/apk/res/android">
    <data>
        <variable name="item" type="com.example.Item"/>
    </data>
    <LinearLayout
        android:orientation="vertical"
        android:layout_width="match_parent"
        android:layout_height="match_parent">
        <TextView
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:text="@{item.name}"/>
    </LinearLayout>
</layout>

Fragment

@Override
public View onCreateView(LayoutInflater inflater, @Nullable ViewGroup container,
        @Nullable Bundle savedInstanceState) {
    FragmentTest binding = DataBindingUtil.inflate(inflater, R.layout.fragment_test, container, false);
    Item item = new Item();
    item.setName("Thomas");
    binding.setItem(item);
    return binding.getRoot();
}

Section 39.11: DataBinding with custom variables (int, boolean)

Sometimes we need to perform a basic operation, such as hiding or showing a view, based on a single value. Creating a model class just for that one variable is not good practice. DataBinding supports basic data types for such operations.

<layout xmlns:android="http://schemas.android.com/apk/res/android">
    <data>
        <import type="android.view.View" />
        <variable name="selected" type="Boolean" />
    </data>
    <RelativeLayout
        android:layout_width="match_parent"
        android:layout_height="match_parent">
        <TextView
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:text="Hello World"
            android:visibility="@{selected ? View.VISIBLE : View.GONE}" />
    </RelativeLayout>
</layout>
and set its value from the Java class:

binding.setSelected(true);

Section 39.12: Referencing classes

Data model

public class Item {
    private String name;

    public String getName() {
        return name;
    }
}

Layout XML

You must import referenced classes, just as you would in Java.

<?xml version="1.0" encoding="utf-8"?>
<layout xmlns:android="http://schemas.android.com/apk/res/android">
    <data>
        <import type="android.view.View"/>
        <variable name="item" type="com.example.Item"/>
    </data>
    <LinearLayout
        android:orientation="vertical"
        android:layout_width="match_parent"
        android:layout_height="match_parent">
        <!-- We reference the View class to set the visibility of this TextView -->
        <TextView
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:text="@{item.name}"
            android:visibility="@{item.name == null ? View.GONE : View.VISIBLE}"/>
    </LinearLayout>
</layout>

Note: The package java.lang.* is imported automatically by the system, just as it is in Java.

Chapter 40: SharedPreferences

Parameter   Details
key         A non-null String identifying the parameter. It can contain whitespace or non-printables. This is only used inside your app (and in the XML file), so it doesn't have to be namespaced, but it's a good idea to have it as a constant in your source code. Don't localize it.
defValue    All the get functions take a default value, which is returned if the given key is not present in the SharedPreferences. It's not returned if the key is present but the value has the wrong type: in that case you get a ClassCastException.

SharedPreferences provide a way to save data to disk in the form of key-value pairs.

Section 40.1: Implementing a Settings screen using SharedPreferences

One use of SharedPreferences is to implement a "Settings" screen in your app, where the user can set their preferences/options. A PreferenceScreen saves user preferences in SharedPreferences. To create a PreferenceScreen, you need a few things:

An XML file to define the available options. This goes in /res/xml/preferences.xml, and for a typical settings screen it looks like this:

<PreferenceScreen xmlns:android="http://schemas.android.com/apk/res/android">
    <PreferenceCategory android:title="General options">
        <CheckBoxPreference
            android:key="silent_mode"
            android:defaultValue="false"
            android:title="Silent Mode"
            android:summary="Mute all sounds from this app" />
        <SwitchPreference
            android:key="awesome_mode"
            android:defaultValue="false"
            android:switchTextOn="Yes"
            android:switchTextOff="No"
            android:title="Awesome mode"
            android:summary="Enable the Awesome Mode feature"/>
        <EditTextPreference
            android:key="custom_storage"
            android:defaultValue="/sdcard/data/"
            android:title="Custom storage location"
            android:summary="Enter the directory path where you want data to be saved. If it does not exist, it will be created."
            android:dialogTitle="Enter directory path (eg. /sdcard/data/ )"/>
    </PreferenceCategory>
</PreferenceScreen>

This defines the available options in the settings screen. There are many other types of Preference listed in the Android Developers documentation on the Preference class.

Next, we need an Activity to host our Preferences user interface.
In this case, it's quite short, and looks like this:

package com.example.preferences;

import android.preference.PreferenceActivity;
import android.os.Bundle;

public class PreferencesActivity extends PreferenceActivity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        addPreferencesFromResource(R.xml.preferences);
    }
}

It extends PreferenceActivity, and provides the user interface for the preferences screen. It can be started just like a normal activity, in this case with something like:

Intent i = new Intent(this, PreferencesActivity.class);
startActivity(i);

Don't forget to add PreferencesActivity to your AndroidManifest.xml.

Getting the values of the preferences inside your app is quite simple: just call setDefaultValues() first, in order to set the default values defined in your XML, and then get the default SharedPreferences. An example:

// set the default values we defined in the XML
PreferenceManager.setDefaultValues(this, R.xml.preferences, false);
SharedPreferences preferences = PreferenceManager.getDefaultSharedPreferences(this);

// get the values of the settings options
boolean silentMode = preferences.getBoolean("silent_mode", false);
boolean awesomeMode = preferences.getBoolean("awesome_mode", false);
String customStorage = preferences.getString("custom_storage", "");

Section 40.2: Commit vs. Apply

The editor.apply() method is asynchronous, while editor.commit() is synchronous. Obviously, you should call either apply() or commit().

Version ≥ 2.3

SharedPreferences settings = getSharedPreferences(PREFS_FILE, MODE_PRIVATE);
SharedPreferences.Editor editor = settings.edit();
editor.putBoolean(PREF_CONST, true);
// This will asynchronously save the shared preferences without holding the current thread.
editor.apply();

SharedPreferences settings = getSharedPreferences(PREFS_FILE, MODE_PRIVATE);
SharedPreferences.Editor editor = settings.edit();
editor.putBoolean(PREF_CONST, true);
// This will synchronously save the shared preferences while holding the current thread
// until done, returning a success flag.
boolean result = editor.commit();

apply() was added in 2.3 (API 9); it commits without returning a boolean indicating success or failure. commit() returns true if the save works, false otherwise. apply() was added because the Android dev team noticed that almost no one took notice of the return value, and apply() is faster as it is asynchronous.

Unlike commit(), which writes its preferences out to persistent storage synchronously, apply() commits its changes to the in-memory SharedPreferences immediately but starts an asynchronous commit to disk, and you won't be notified of any failures. If another editor on this SharedPreferences does a regular commit() while an apply() is still outstanding, the commit() will block until all async commits (apply) are completed, as well as any other sync commits that may be pending.
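When the result matters, for example before scheduling work that depends on the value having been persisted, commit() can be used off the UI thread. A minimal sketch, reusing the PREFS_FILE and PREF_CONST constants from the snippets above:

new Thread(new Runnable() {
    @Override
    public void run() {
        SharedPreferences settings = getSharedPreferences(PREFS_FILE, MODE_PRIVATE);
        // commit() blocks this worker thread, not the UI thread
        boolean saved = settings.edit().putBoolean(PREF_CONST, true).commit();
        if (!saved) {
            // the write failed; react accordingly
        }
    }
}).start();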
Section 40.3: Read and write values to SharedPreferences

public class MyActivity extends Activity {

    private static final String PREFS_FILE = "NameOfYourPreferenceFile";
    // PREFS_MODE defines which apps can access the file
    private static final int PREFS_MODE = Context.MODE_PRIVATE;
    // you can use the live template "key" for quickly creating keys
    private static final String KEY_BOOLEAN = "KEY_FOR_YOUR_BOOLEAN";
    private static final String KEY_STRING = "KEY_FOR_YOUR_STRING";
    private static final String KEY_FLOAT = "KEY_FOR_YOUR_FLOAT";
    private static final String KEY_INT = "KEY_FOR_YOUR_INT";
    private static final String KEY_LONG = "KEY_FOR_YOUR_LONG";

    @Override
    protected void onStart() {
        super.onStart();
        // Get the saved values (or default values if they haven't been saved yet)
        SharedPreferences settings = getSharedPreferences(PREFS_FILE, PREFS_MODE);
        // read a boolean value (default false)
        boolean booleanVal = settings.getBoolean(KEY_BOOLEAN, false);
        // read an int value (default 0)
        int intVal = settings.getInt(KEY_INT, 0);
        // read a string value (default "my string")
        String str = settings.getString(KEY_STRING, "my string");
        // read a long value (default 123456)
        long longVal = settings.getLong(KEY_LONG, 123456);
        // read a float value (default 3.14f)
        float floatVal = settings.getFloat(KEY_FLOAT, 3.14f);
    }

    @Override
    protected void onStop() {
        super.onStop();
        // Save the values
        SharedPreferences settings = getSharedPreferences(PREFS_FILE, PREFS_MODE);
        SharedPreferences.Editor editor = settings.edit();
        // write a boolean value
        editor.putBoolean(KEY_BOOLEAN, true);
        // write an integer value
        editor.putInt(KEY_INT, 123);
        // write a string
        editor.putString(KEY_STRING, "string value");
        // write a long value
        editor.putLong(KEY_LONG, 456876451);
        // write a float value
        editor.putFloat(KEY_FLOAT, 1.51f);
        editor.apply();
    }
}

getSharedPreferences() is a method from the Context class, which Activity extends. If you need to access the getSharedPreferences() method from other classes, you can use context.getSharedPreferences() with a Context object reference from an Activity, View, or Application.

Section 40.4: Retrieve all stored entries from a particular SharedPreferences file

The getAll() method retrieves all values from the preferences. We can use it, for instance, to log the current content of the SharedPreferences:

private static final String PREFS_FILE = "MyPrefs";

public static void logSharedPreferences(final Context context) {
    SharedPreferences sharedPreferences = context.getSharedPreferences(PREFS_FILE, Context.MODE_PRIVATE);
    Map<String, ?> allEntries = sharedPreferences.getAll();
    for (Map.Entry<String, ?> entry : allEntries.entrySet()) {
        final String key = entry.getKey();
        final Object value = entry.getValue();
        Log.d("map values", key + ": " + value);
    }
}

The documentation warns you about modifying the Collection returned by getAll():

Note that you must not modify the collection returned by this method, or alter any of its contents. The consistency of your stored data is not guaranteed if you do.

Section 40.5: Reading and writing data to SharedPreferences with Singleton

SharedPreferences Manager (Singleton) class to read and write all types of data:
import android.content.Context;
import android.content.SharedPreferences;
import android.util.Log;

import com.google.gson.Gson;

import java.lang.reflect.Type;

/**
 * Singleton class for accessing SharedPreferences.
 * Should be initialized once in the beginning by any application component, using the static
 * method initialize(applicationContext).
 */
public class SharedPrefsManager {

    private static final String TAG = SharedPrefsManager.class.getName();
    private SharedPreferences prefs;
    private static SharedPrefsManager uniqueInstance;
    public static final String PREF_NAME = "com.example.app";

    private SharedPrefsManager(Context appContext) {
        prefs = appContext.getSharedPreferences(PREF_NAME, Context.MODE_PRIVATE);
    }

    /**
     * Throws IllegalStateException if this class is not initialized
     *
     * @return unique SharedPrefsManager instance
     */
    public static SharedPrefsManager getInstance() {
        if (uniqueInstance == null) {
            throw new IllegalStateException(
                    "SharedPrefsManager is not initialized, call initialize(applicationContext) " +
                            "static method first");
        }
        return uniqueInstance;
    }

    /**
     * Initialize this class using the application Context;
     * should be called once in the beginning by any application component.
     *
     * @param appContext application context
     */
    public static void initialize(Context appContext) {
        if (appContext == null) {
            throw new NullPointerException("Provided application context is null");
        }
        if (uniqueInstance == null) {
            synchronized (SharedPrefsManager.class) {
                if (uniqueInstance == null) {
                    uniqueInstance = new SharedPrefsManager(appContext);
                }
            }
        }
    }

    private SharedPreferences getPrefs() {
        return prefs;
    }

    /**
     * Clears all data in SharedPreferences
     */
    public void clearPrefs() {
        SharedPreferences.Editor editor = getPrefs().edit();
        editor.clear();
        editor.commit();
    }

    public void removeKey(String key) {
        getPrefs().edit().remove(key).commit();
    }

    public boolean containsKey(String key) {
        return getPrefs().contains(key);
    }

    public String getString(String key, String defValue) {
        return getPrefs().getString(key, defValue);
    }

    public String getString(String key) {
        return getString(key, null);
    }

    public void setString(String key, String value) {
        SharedPreferences.Editor editor = getPrefs().edit();
        editor.putString(key, value);
        editor.apply();
    }

    public int getInt(String key, int defValue) {
        return getPrefs().getInt(key, defValue);
    }

    public int getInt(String key) {
        return getInt(key, 0);
    }

    public void setInt(String key, int value) {
        SharedPreferences.Editor editor = getPrefs().edit();
        editor.putInt(key, value);
        editor.apply();
    }

    public long getLong(String key, long defValue) {
        return getPrefs().getLong(key, defValue);
    }

    public long getLong(String key) {
        return getLong(key, 0L);
    }

    public void setLong(String key, long value) {
        SharedPreferences.Editor editor = getPrefs().edit();
        editor.putLong(key, value);
        editor.apply();
    }

    public boolean getBoolean(String key, boolean defValue) {
        return getPrefs().getBoolean(key, defValue);
    }

    public boolean getBoolean(String key) {
        return getBoolean(key, false);
    }

    public void setBoolean(String key, boolean value) {
        SharedPreferences.Editor editor = getPrefs().edit();
        editor.putBoolean(key, value);
        editor.apply();
    }

    public float getFloat(String key) {
        return getFloat(key, 0f);
    }

    public float getFloat(String key, float defValue) {
        return getPrefs().getFloat(key, defValue);
    }

    public void setFloat(String key, Float value) {
        SharedPreferences.Editor editor = getPrefs().edit();
        editor.putFloat(key, value);
        editor.apply();
    }
    /**
     * Persists an Object in prefs at the specified key; the class of the given Object must
     * implement the Model interface.
     *
     * @param key         String
     * @param modelObject Object to persist
     * @param <M>         Generic for Object
     */
    public <M extends Model> void setObject(String key, M modelObject) {
        String value = createJSONStringFromObject(modelObject);
        SharedPreferences.Editor editor = getPrefs().edit();
        editor.putString(key, value);
        editor.apply();
    }

    /**
     * Fetches the previously stored Object of the given class from prefs
     *
     * @param key                String
     * @param classOfModelObject Class of persisted Object
     * @param <M>                Generic for Object
     * @return Object of given class
     */
    public <M extends Model> M getObject(String key, Class<M> classOfModelObject) {
        String jsonData = getPrefs().getString(key, null);
        if (null != jsonData) {
            try {
                Gson gson = new Gson();
                return gson.fromJson(jsonData, classOfModelObject);
            } catch (ClassCastException cce) {
                Log.d(TAG, "Cannot convert string obtained from prefs into object of type "
                        + classOfModelObject.getName() + "\n" + cce.getMessage());
            }
        }
        return null;
    }

    /**
     * Persists a Collection object in prefs at the specified key
     *
     * @param key            String
     * @param dataCollection Collection Object
     * @param <C>            Generic for Collection object
     */
    public <C> void setCollection(String key, C dataCollection) {
        SharedPreferences.Editor editor = getPrefs().edit();
        String value = createJSONStringFromObject(dataCollection);
        editor.putString(key, value);
        editor.apply();
    }

    /**
     * Fetches the previously stored Collection Object of the given type from prefs
     *
     * @param key     String
     * @param typeOfC Type of Collection Object
     * @param <C>     Generic for Collection Object
     * @return Collection Object which can be casted
     */
    public <C> C getCollection(String key, Type typeOfC) {
        String jsonData = getPrefs().getString(key, null);
        if (null != jsonData) {
            try {
                Gson gson = new Gson();
                return gson.fromJson(jsonData, typeOfC);
            } catch (ClassCastException cce) {
                Log.d(TAG, "Cannot convert string obtained from prefs into collection of type "
                        + typeOfC.toString() + "\n" + cce.getMessage());
            }
        }
        return null;
    }

    public void registerPrefsListener(SharedPreferences.OnSharedPreferenceChangeListener listener) {
        getPrefs().registerOnSharedPreferenceChangeListener(listener);
    }

    public void unregisterPrefsListener(SharedPreferences.OnSharedPreferenceChangeListener listener) {
        getPrefs().unregisterOnSharedPreferenceChangeListener(listener);
    }

    public SharedPreferences.Editor getEditor() {
        return getPrefs().edit();
    }

    private static String createJSONStringFromObject(Object object) {
        Gson gson = new Gson();
        return gson.toJson(object);
    }
}

Model interface, which is implemented by the classes passed to Gson, to avoid ProGuard obfuscation:

public interface Model {
}

ProGuard rules for the Model interface:

-keep interface com.example.app.Model
-keep class * implements com.example.app.Model { *; }
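A short usage sketch (assumptions: initialization happens in Application.onCreate(), and a hypothetical User class implements Model):

// once, e.g. in Application.onCreate()
SharedPrefsManager.initialize(getApplicationContext());

// anywhere afterwards: persist and restore a custom object
User user = new User("John"); // User implements Model (assumption)
SharedPrefsManager.getInstance().setObject("current_user", user);
User restored = SharedPrefsManager.getInstance().getObject("current_user", User.class);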
Section 40.6: getPreferences(int) VS getSharedPreferences(String, int)

getPreferences(int) returns the preferences saved under the Activity's class name, as described in the docs:

Retrieve a SharedPreferences object for accessing preferences that are private to this activity. This simply calls the underlying getSharedPreferences(String, int) method by passing in this activity's class name as the preferences name.

The getSharedPreferences(String name, int mode) method, on the other hand, returns the prefs saved under the given name. As in the docs:

Retrieve and hold the contents of the preferences file 'name', returning a SharedPreferences through which you can retrieve and modify its values.

So if a value saved in the SharedPreferences has to be used across the app, one should use getSharedPreferences(String name, int mode) with a fixed name, since getPreferences(int) returns/saves preferences belonging only to the Activity calling it.

Section 40.7: Listening for SharedPreferences changes

SharedPreferences sharedPreferences = ...;
sharedPreferences.registerOnSharedPreferenceChangeListener(mOnSharedPreferenceChangeListener);

private final SharedPreferences.OnSharedPreferenceChangeListener mOnSharedPreferenceChangeListener =
        new SharedPreferences.OnSharedPreferenceChangeListener() {
    @Override
    public void onSharedPreferenceChanged(SharedPreferences sharedPreferences, String key) {
        //TODO
    }
};

Please note:

The listener will fire only if a value was added or changed; setting the same value won't call it.

The listener needs to be saved in a member variable and NOT passed as an inline anonymous instance, because registerOnSharedPreferenceChangeListener stores it with a weak reference, so it would be garbage collected.

Instead of using a member variable, the interface can also be implemented directly by the class, which then calls registerOnSharedPreferenceChangeListener(this).

Remember to unregister the listener when it is no longer required, using unregisterOnSharedPreferenceChangeListener.
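A minimal sketch of pairing registration with the Activity lifecycle, reusing the member listener from above:

@Override
protected void onResume() {
    super.onResume();
    getPreferences(MODE_PRIVATE)
            .registerOnSharedPreferenceChangeListener(mOnSharedPreferenceChangeListener);
}

@Override
protected void onPause() {
    getPreferences(MODE_PRIVATE)
            .unregisterOnSharedPreferenceChangeListener(mOnSharedPreferenceChangeListener);
    super.onPause();
}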
Section 40.8: Store, Retrieve, Remove and Clear Data from SharedPreferences

Create SharedPreferences BuyyaPref:

SharedPreferences pref = getApplicationContext().getSharedPreferences("BuyyaPref", MODE_PRIVATE);
Editor editor = pref.edit();

Storing data as KEY/VALUE pairs:

editor.putBoolean("key_name1", true);      // Saving boolean - true/false
editor.putInt("key_name2", 10);            // Saving integer
editor.putFloat("key_name3", 10.1f);       // Saving float
editor.putLong("key_name4", 1000);         // Saving long
editor.putString("key_name5", "MyString"); // Saving string

// Save the changes in SharedPreferences
editor.commit(); // commit changes

Get SharedPreferences data:

If no value exists for the key, the second parameter is returned as the default value (note that the defaults for the primitive getters cannot be null):

pref.getBoolean("key_name1", false);  // getting boolean
pref.getInt("key_name2", 0);          // getting Integer
pref.getFloat("key_name3", 0f);       // getting Float
pref.getLong("key_name4", 0);         // getting Long
pref.getString("key_name5", null);    // getting String

Deleting a key's value from SharedPreferences:

editor.remove("key_name3"); // will delete key key_name3
editor.remove("key_name4"); // will delete key key_name4

// Save the changes in SharedPreferences
editor.commit(); // commit changes

Clear all data from SharedPreferences:

editor.clear();
editor.commit(); // commit changes

Section 40.9: Add filter for EditTextPreference

Create this class:

public class InputFilterMinMax implements InputFilter {

    private int min, max;

    public InputFilterMinMax(int min, int max) {
        this.min = min;
        this.max = max;
    }

    public InputFilterMinMax(String min, String max) {
        this.min = Integer.parseInt(min);
        this.max = Integer.parseInt(max);
    }

    @Override
    public CharSequence filter(CharSequence source, int start, int end, Spanned dest, int dstart, int dend) {
        try {
            int input = Integer.parseInt(dest.toString() + source.toString());
            if (isInRange(min, max, input))
                return null;
        } catch (NumberFormatException nfe) {
        }
        return "";
    }

    private boolean isInRange(int a, int b, int c) {
        return b > a ? c >= a && c <= b : c >= b && c <= a;
    }
}

Use:

EditText compressPic = ((EditTextPreference) findPreference(getString(R.string.pref_key_compress_pic))).getEditText();
compressPic.setFilters(new InputFilter[]{ new InputFilterMinMax(1, 100) });

Section 40.10: Supported data types in SharedPreferences

SharedPreferences allows you to store primitive data types only (boolean, float, long, int, String, and string set). You cannot store more complex objects in SharedPreferences; as such, it is really meant to be a place to store user settings or similar, not a database to keep user data (like saving a todo list a user made, for example).

To store something in SharedPreferences you use a key and a value. The key is how you can reference what you stored later, and the value is the data you want to store.

String keyToUseToFindLater = "High Score";
int newHighScore = 12938;

// getting SharedPreferences & Editor objects
SharedPreferences sharedPref = getActivity().getPreferences(Context.MODE_PRIVATE);
SharedPreferences.Editor editor = sharedPref.edit();

// saving an int in the SharedPreferences file
editor.putInt(keyToUseToFindLater, newHighScore);
editor.commit();

Section 40.11: Different ways of instantiating an object of SharedPreferences

You can access SharedPreferences in several ways.

Get the default SharedPreferences file:

import android.preference.PreferenceManager;

SharedPreferences prefs = PreferenceManager.getDefaultSharedPreferences(this);

Get a specific SharedPreferences file:

public static final String PREF_FILE_NAME = "PrefFile";
SharedPreferences prefs = getSharedPreferences(PREF_FILE_NAME, MODE_PRIVATE);

Get SharedPreferences from another app:

// Note that the other app must declare prefs as MODE_WORLD_WRITEABLE
Context contextOtherApp = createPackageContext("com.otherapp", Context.MODE_WORLD_WRITEABLE);
SharedPreferences prefs = contextOtherApp.getSharedPreferences("pref_file_name", Context.MODE_WORLD_READABLE);

Section 40.12: Removing keys

private static final String MY_PREF = "MyPref";

// ...
SharedPreferences prefs = ...;
// ...

SharedPreferences.Editor editor = prefs.edit();
editor.putString(MY_PREF, "value");
editor.remove(MY_PREF);
editor.apply();

After the apply(), prefs contains "MyPref" -> "value", in addition to whatever it contained already. Even though it looks like the code added "MyPref" and then removed it, the remove actually happens first. The changes in the Editor are all applied in one go, not in the order they were added: all removes happen before all puts.
Section 40.13: Support pre-Honeycomb with StringSet

Here's the utility class:

public class SharedPreferencesCompat {

    public static void putStringSet(SharedPreferences.Editor editor, String key, Set<String> values) {
        if (Build.VERSION.SDK_INT >= 11) {
            while (true) {
                try {
                    editor.putStringSet(key, values).apply();
                    break;
                } catch (ClassCastException ex) {
                    // Clear stale JSON string from before system upgrade
                    editor.remove(key);
                }
            }
        } else {
            putStringSetToJson(editor, key, values);
        }
    }

    public static Set<String> getStringSet(SharedPreferences prefs, String key, Set<String> defaultReturnValue) {
        if (Build.VERSION.SDK_INT >= 11) {
            try {
                return prefs.getStringSet(key, defaultReturnValue);
            } catch (ClassCastException ex) {
                // If the user upgraded from Gingerbread to something higher, read the stale JSON string
                return getStringSetFromJson(prefs, key, defaultReturnValue);
            }
        } else {
            return getStringSetFromJson(prefs, key, defaultReturnValue);
        }
    }

    private static Set<String> getStringSetFromJson(SharedPreferences prefs, String key, Set<String> defaultReturnValue) {
        final String input = prefs.getString(key, null);
        if (input == null) return defaultReturnValue;

        try {
            HashSet<String> set = new HashSet<>();
            JSONArray json = new JSONArray(input);
            for (int i = 0, size = json.length(); i < size; i++) {
                String value = json.getString(i);
                set.add(value);
            }
            return set;
        } catch (JSONException e) {
            e.printStackTrace();
            return defaultReturnValue;
        }
    }

    private static void putStringSetToJson(SharedPreferences.Editor editor, String key, Set<String> values) {
        JSONArray json = new JSONArray(values);
        if (Build.VERSION.SDK_INT >= 9)
            editor.putString(key, json.toString()).apply();
        else
            editor.putString(key, json.toString()).commit();
    }

    private SharedPreferencesCompat() {}
}

An example of saving preferences as a StringSet data type:

Set<String> sets = new HashSet<>();
sets.add("John");
sets.add("Nicko");
SharedPreferences preferences = PreferenceManager.getDefaultSharedPreferences(this);
SharedPreferencesCompat.putStringSet(preferences.edit(), "pref_people", sets);

To retrieve them back:

Set<String> people = SharedPreferencesCompat.getStringSet(preferences, "pref_people", new HashSet<String>());

Reference: Android Support Preference

Chapter 41: Intent

Parameter                      Details
intent                         The intent to start
requestCode                    Unique number to identify the request
options                        Additional options for how the Activity should be started
name                           The name of the extra data
value                          The value of the extra data
CHOOSE_CONTACT_REQUEST_CODE    The code of the request, to identify it in the onActivityResult method
action                         Any action to perform via this intent, e.g. Intent.ACTION_VIEW
uri                            Data uri to be used by the intent to perform the specified action
packageContext                 Context to use to initialize the Intent
cls                            Class to be used by this intent

An Intent is a small message passed around the Android system. This message may hold information about our intention to perform a task. It is basically a passive data structure holding an abstract description of an action to be performed.

Section 41.1: Getting a result from another Activity

By using startActivityForResult(Intent intent, int requestCode) you can start another Activity and then receive a result from that Activity in the onActivityResult(int requestCode, int resultCode, Intent data) method. The result will be returned as an Intent.
An intent can contain data via a Bundle. In this example, MainActivity will start a DetailActivity and then expect a result from it. Each request type should have its own int request code, so that in the overridden onActivityResult(int requestCode, int resultCode, Intent data) method in MainActivity, it can be determined which request to process by comparing the values of requestCode and REQUEST_CODE_EXAMPLE (though in this example, there is only one).

MainActivity:

public class MainActivity extends Activity {

    // Use a unique request code for each use case
    private static final int REQUEST_CODE_EXAMPLE = 0x9345;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        // Create a new instance of Intent to start DetailActivity
        final Intent intent = new Intent(this, DetailActivity.class);

        // Start DetailActivity with the request code
        startActivityForResult(intent, REQUEST_CODE_EXAMPLE);
    }

    // onActivityResult only gets called when the other Activity
    // was previously started using startActivityForResult
    @Override
    public void onActivityResult(int requestCode, int resultCode, Intent data) {
        super.onActivityResult(requestCode, resultCode, data);

        // First we need to check if the requestCode matches the one we used.
        if (requestCode == REQUEST_CODE_EXAMPLE) {
            // The resultCode is set by the DetailActivity.
            // By convention RESULT_OK means that whatever
            // DetailActivity did was executed successfully
            if (resultCode == Activity.RESULT_OK) {
                // Get the result from the returned Intent
                final String result = data.getStringExtra(DetailActivity.EXTRA_DATA);

                // Use the data - in this case, display it in a Toast.
                Toast.makeText(this, "Result: " + result, Toast.LENGTH_LONG).show();
            } else {
                // setResult wasn't successfully executed by DetailActivity
                // due to some error or flow of control. No data to retrieve.
            }
        }
    }
}

DetailActivity:

public class DetailActivity extends Activity {

    // Constant used to identify data sent between Activities.
    public static final String EXTRA_DATA = "EXTRA_DATA";

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_detail);

        final Button button = (Button) findViewById(R.id.button);
        // When this button is clicked we want to return a result
        button.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View view) {
                // Create a new Intent object as container for the result
                final Intent data = new Intent();

                // Add the required data to be returned to the MainActivity
                data.putExtra(EXTRA_DATA, "Some interesting data!");

                // Set the resultCode to Activity.RESULT_OK to
                // indicate a success and attach the Intent
                // which contains our result data
                setResult(Activity.RESULT_OK, data);

                // With finish() we close the DetailActivity to
                // return back to MainActivity
                finish();
            }
        });
    }

    @Override
    public void onBackPressed() {
        // When the user hits the back button, set the resultCode
        // to Activity.RESULT_CANCELED to indicate a failure
        setResult(Activity.RESULT_CANCELED);
        super.onBackPressed();
    }
}

A few things you need to be aware of:

Data is only returned once you call finish(). You need to call setResult() before calling finish(), otherwise no result will be returned.
Make sure your Activity is not using android:launchMode="singleTask", or it will cause the Activity to run in a separate task and therefore you will not receive a result from it. If your Activity uses singleTask as launch mode, it will call onActivityResult() immediately with a result code of Activity.RESULT_CANCELED.

Be careful when using android:launchMode="singleInstance". On devices before Lollipop (Android 5.0, API level 21), Activities will not return a result.

You can use explicit or implicit intents when you call startActivityForResult(). When starting one of your own activities to receive a result, you should use an explicit intent to ensure that you receive the expected result. An explicit intent is always delivered to its target, no matter what it contains; the filter is not consulted. But an implicit intent is delivered to a component only if it can pass through one of the component's filters.

Section 41.2: Passing data between activities

This example illustrates sending a String with the value "Some data!" from OriginActivity to DestinationActivity.

NOTE: This is the most straightforward way of sending data between two activities. See the example on using the starter pattern for a more robust implementation.

OriginActivity

public class OriginActivity extends AppCompatActivity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_origin);

        // Create a new Intent object, containing DestinationActivity as target Activity.
        final Intent intent = new Intent(this, DestinationActivity.class);

        // Add data in the form of key/value pairs to the intent object by using putExtra()
        intent.putExtra(DestinationActivity.EXTRA_DATA, "Some data!");

        // Start the target Activity with the intent object
        startActivity(intent);
    }
}

DestinationActivity

public class DestinationActivity extends AppCompatActivity {

    public static final String EXTRA_DATA = "EXTRA_DATA";

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_destination);

        // getIntent() returns the Intent object which was used to start this Activity
        final Intent intent = getIntent();

        // Retrieve the data from the intent object by using the same key that
        // was previously used to add data to the intent object in OriginActivity.
        final String data = intent.getStringExtra(EXTRA_DATA);
    }
}

It is also possible to pass other primitive data types, as well as arrays, Bundle and Parcelable data. Passing Serializable is also possible, but should be avoided as it is more than three times slower than Parcelable.

Serializable is a standard Java interface. You simply mark a class as Serializable by implementing the Serializable interface, and Java will automatically serialize it when required.

Parcelable is an Android-specific interface which can be implemented on custom data types (i.e. your own objects / POJO objects); it allows your object to be flattened and reconstruct itself without the destination needing to do anything. There is a documentation example of making an object parcelable.
Once you have a parcelable object, you can send it like a primitive type, with an intent object:

intent.putExtra(DestinationActivity.EXTRA_DATA, myParcelableObject);

Or in a bundle / as an argument for a fragment:

bundle.putParcelable(DestinationActivity.EXTRA_DATA, myParcelableObject);

and then also read it from the intent at the destination using getParcelableExtra:

final MyParcelableType data = intent.getParcelableExtra(EXTRA_DATA);

Or when reading in a fragment from a bundle:

final MyParcelableType data = bundle.getParcelable(EXTRA_DATA);

Once you have a Serializable object, you can put it in a bundle:

bundle.putSerializable(DestinationActivity.EXTRA_DATA, mySerializableObject);

and then also read it from the bundle at the destination, as shown below:

final SerializableType data = (SerializableType) bundle.getSerializable(EXTRA_DATA);

Section 41.3: Open a URL in a browser

Opening with the default browser

This example shows how you can open a URL programmatically in the built-in web browser rather than within your application. This allows your app to open up a webpage without the need to include the INTERNET permission in your manifest file.

public void onBrowseClick(View v) {
    String url = "http://www.google.com";
    Uri uri = Uri.parse(url);
    Intent intent = new Intent(Intent.ACTION_VIEW, uri);
    // Verify that the intent will resolve to an activity
    if (intent.resolveActivity(getPackageManager()) != null) {
        // Here we use an intent without a Chooser, unlike the next example
        startActivity(intent);
    }
}

Prompting the user to select a browser

Note that this example uses the Intent.createChooser() method:

public void onBrowseClick(View v) {
    String url = "http://www.google.com";
    Intent intent = new Intent(Intent.ACTION_VIEW, Uri.parse(url));
    // Note the Chooser below. If no applications match,
    // Android displays a system message. So here there is no need for try-catch.
    startActivity(Intent.createChooser(intent, "Browse with"));
}

In some cases, the URL may start with "www". If that is the case you will get this exception:

android.content.ActivityNotFoundException: No Activity found to handle Intent

The URL must always start with "http://" or "https://". Your code should therefore check for it, as shown in the following code snippet:

if (!url.startsWith("https://") && !url.startsWith("http://")) {
    url = "http://" + url;
}
Intent openUrlIntent = new Intent(Intent.ACTION_VIEW, Uri.parse(url));
if (openUrlIntent.resolveActivity(getPackageManager()) != null) {
    startActivity(openUrlIntent);
}

Best Practices

Check whether there are any apps on the device that can receive the implicit intent. Otherwise, your app will crash when it calls startActivity(). To first verify that an app exists to receive the intent, call resolveActivity() on your Intent object. If the result is non-null, there is at least one app that can handle the intent and it's safe to call startActivity(). If the result is null, you should not use the intent and, if possible, you should disable the feature that invokes the intent.

Section 41.4: Starter Pattern

This pattern is a stricter approach to starting an Activity. Its purpose is to improve code readability, while at the same time decreasing code complexity, maintenance costs, and coupling of your components.

The following example implements the starter pattern, which is usually implemented as a static method on the Activity itself.
This static method accepts all required parameters, constructs a valid Intent from that data, and then starts the Activity. An Intent is an object that provides runtime binding between separate components, such as two activities. The Intent represents an app's "intent to do something." You can use intents for a wide variety of tasks, but here, your intent starts another activity.

public class ExampleActivity extends AppCompatActivity {

    private static final String EXTRA_DATA = "EXTRA_DATA";

    public static void start(Context context, String data) {
        Intent intent = new Intent(context, ExampleActivity.class);
        intent.putExtra(EXTRA_DATA, data);
        context.startActivity(intent);
    }

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        Intent intent = getIntent();
        if (!intent.getExtras().containsKey(EXTRA_DATA)) {
            throw new UnsupportedOperationException("Activity should be started using the static start method");
        }
        String data = intent.getStringExtra(EXTRA_DATA);
    }
}

This pattern also allows you to force additional data to be passed with the intent. The ExampleActivity can then be started like this, where context is an activity context:

ExampleActivity.start(context, "Some data!");

Section 41.5: Clearing an activity stack

Sometimes you may want to start a new activity while removing previous activities from the back stack, so the back button doesn't take you back to them. One example of this might be starting an app on the Login activity, taking you through to the Main activity of your application; on logging out, you want to be directed back to Login without a chance to go back.

In a case like that you can set the FLAG_ACTIVITY_CLEAR_TOP flag for the intent, meaning that if the activity being launched is already running in the current task (LoginActivity), then instead of launching a new instance of that activity, all of the other activities on top of it will be closed and this Intent will be delivered to the (now on top) old activity as a new Intent.

Intent intent = new Intent(getApplicationContext(), LoginActivity.class);
intent.setFlags(Intent.FLAG_ACTIVITY_CLEAR_TOP);
startActivity(intent);

It's also possible to use the flags FLAG_ACTIVITY_NEW_TASK along with FLAG_ACTIVITY_CLEAR_TASK if you want to clear all Activities on the back stack:

Intent intent = new Intent(getApplicationContext(), LoginActivity.class);
// Closing all the Activities, clear the back stack.
intent.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK | Intent.FLAG_ACTIVITY_CLEAR_TASK);
startActivity(intent);

Section 41.6: Start an activity

This example will start DestinationActivity from OriginActivity. Here, the Intent constructor takes two parameters:

1. A Context as its first parameter (this is used because the Activity class is a subclass of Context)
2. The Class of the app component to which the system should deliver the Intent (in this case, the activity that should be started)

public class OriginActivity extends AppCompatActivity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_origin);

        Intent intent = new Intent(this, DestinationActivity.class);
        startActivity(intent);
        // Optionally, you can close OriginActivity. In this way, when the user presses
        // back from DestinationActivity, he/she won't land on OriginActivity again.
        finish();
    }
}
Another way to create the Intent to open DestinationActivity is to use the default constructor for the Intent, and use the setClass() method to tell it which Activity to open:

Intent i = new Intent();
i.setClass(this, DestinationActivity.class);
startActivity(i);
// Optionally, you can close OriginActivity, so the user won't land on it when pressing back.
finish();

Section 41.7: Sending emails

// Compile a Uri with the 'mailto' schema
Intent emailIntent = new Intent(Intent.ACTION_SENDTO, Uri.fromParts(
        "mailto", "<EMAIL>", null));
// Subject
emailIntent.putExtra(Intent.EXTRA_SUBJECT, "Hello World!");
// Body of email
emailIntent.putExtra(Intent.EXTRA_TEXT, "Hi! I am sending you a test email.");
// File attachment
emailIntent.putExtra(Intent.EXTRA_STREAM, attachedFileUri);

// Check if the device has an email client
if (emailIntent.resolveActivity(getPackageManager()) != null) {
    // Prompt the user to select a mail app
    startActivity(Intent.createChooser(emailIntent, "Choose your mail application"));
} else {
    // Inform the user that no email clients are installed or provide an alternative
}

This will pre-fill an email in a mail app of the user's choice.

If you need to add an attachment, you can use Intent.ACTION_SEND instead of Intent.ACTION_SENDTO. For multiple attachments you can use ACTION_SEND_MULTIPLE.

A word of caution: not every device has a provider for ACTION_SENDTO, and calling startActivity() without checking with resolveActivity() first may throw an ActivityNotFoundException.

Section 41.8: CustomTabsIntent for Chrome Custom Tabs

Version ≥ 4.0.3

Using a CustomTabsIntent, it is now possible to configure Chrome custom tabs in order to customize key UI components in the browser that is opened from your app. This is a good alternative to using a WebView for some cases. It allows loading of a web page with an Intent, with the added ability to inject some degree of the look and feel of your app into the browser.

Here is an example of how to open a URL using CustomTabsIntent:

String url = "https://www.google.pl/";
CustomTabsIntent intent = new CustomTabsIntent.Builder()
        .setStartAnimations(getContext(), R.anim.slide_in_right, R.anim.slide_out_left)
        .setExitAnimations(getContext(), android.R.anim.slide_in_left, android.R.anim.slide_out_right)
        .setCloseButtonIcon(BitmapFactory.decodeResource(getResources(), R.drawable.ic_arrow_back_white_24dp))
        .setToolbarColor(Color.parseColor("#43A047"))
        .enableUrlBarHiding()
        .build();
intent.launchUrl(getActivity(), Uri.parse(url));

Note: To use custom tabs, you need to add this dependency to your build.gradle:

compile 'com.android.support:customtabs:24.1.1'

Section 41.9: Intent URI

This example shows how to start an intent from a browser:

<a href="intent://host.com/path#Intent;package=com.sample.test;scheme=yourscheme;end">Start intent</a>

This intent will start the app with the package com.sample.test, or will open Google Play on this package's page if the app is not installed.

This intent can also be started with JavaScript:

var intent = "intent://host.com/path#Intent;package=com.sample.test;scheme=yourscheme;end";
window.location.replace(intent);

In the activity, the host and path can be obtained from the intent data:

@Override
public void onCreate(Bundle bundle) {
    super.onCreate(bundle);
    Uri data = getIntent().getData(); // returns host.com/path
}

Intent URI syntax:

HOST/URI-path // Optional host
#Intent;
    package=[string];
    action=[string];
    category=[string];
    component=[string];
    scheme=[string];
end;
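For such a link to launch your activity (rather than only the Play Store fallback), the activity must declare an intent filter matching the scheme. A minimal sketch for the AndroidManifest.xml, assuming the scheme yourscheme and host host.com from the example above (the activity name DeepLinkActivity is hypothetical):

<activity android:name=".DeepLinkActivity">
    <intent-filter>
        <action android:name="android.intent.action.VIEW" />
        <category android:name="android.intent.category.DEFAULT" />
        <!-- BROWSABLE is required for links clicked in a browser -->
        <category android:name="android.intent.category.BROWSABLE" />
        <data android:scheme="yourscheme" android:host="host.com" />
    </intent-filter>
</activity>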
Section 41.10: Start the dialer

This example shows how to open the default dialer (the app that makes regular calls) with a telephone number already filled in:

Intent intent = new Intent(Intent.ACTION_DIAL);
// Replace with a valid phone number. Remember to add the tel: prefix, otherwise it will crash.
intent.setData(Uri.parse("tel:9988776655"));
startActivity(intent);

Section 41.11: Broadcasting Messages to Other Components

Intents can be used to broadcast messages to other components of your application (such as a running background service) or to the entire Android system.

To send a broadcast within your application, use the LocalBroadcastManager class:

Intent intent = new Intent("com.example.YOUR_ACTION"); // the intent action
intent.putExtra("key", "value"); // data to be passed with your broadcast

LocalBroadcastManager manager = LocalBroadcastManager.getInstance(context);
manager.sendBroadcast(intent);

To send a broadcast to components outside of your application, use the sendBroadcast() method on a Context object:

Intent intent = new Intent("com.example.YOUR_ACTION"); // the intent action
intent.putExtra("key", "value"); // data to be passed with your broadcast

context.sendBroadcast(intent);

Information about receiving broadcasts can be found here: Broadcast Receiver
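For completeness, a minimal sketch of the receiving side for the local broadcast above (the receiver and the registration points are assumptions, not part of the original example):

private final BroadcastReceiver mReceiver = new BroadcastReceiver() {
    @Override
    public void onReceive(Context context, Intent intent) {
        // read the data attached to the broadcast
        String value = intent.getStringExtra("key");
    }
};

// register, e.g. in onStart()
LocalBroadcastManager.getInstance(context)
        .registerReceiver(mReceiver, new IntentFilter("com.example.YOUR_ACTION"));

// unregister, e.g. in onStop()
LocalBroadcastManager.getInstance(context).unregisterReceiver(mReceiver);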
Section 41.12: Passing custom object between activities

It is also possible to pass your custom object to other activities using the Bundle class. There are two ways:

Serializable interface - for Java and Android
Parcelable interface - memory efficient, only for Android (recommended)

Parcelable

Parcelable processing is much faster than Serializable. One of the reasons for this is that we are being explicit about the serialization process instead of using reflection to infer it. It also stands to reason that the code has been heavily optimized for this purpose.

public class MyObjects implements Parcelable {

    private int age;
    private String name;
    private ArrayList<String> address;

    public MyObjects(String name, int age, ArrayList<String> address) {
        this.name = name;
        this.age = age;
        this.address = address;
    }

    public MyObjects(Parcel source) {
        age = source.readInt();
        name = source.readString();
        address = source.createStringArrayList();
    }

    @Override
    public int describeContents() {
        return 0;
    }

    @Override
    public void writeToParcel(Parcel dest, int flags) {
        dest.writeInt(age);
        dest.writeString(name);
        dest.writeStringList(address);
    }

    public int getAge() {
        return age;
    }

    public String getName() {
        return name;
    }

    public ArrayList<String> getAddress() {
        if (!(address == null))
            return address;
        else
            return new ArrayList<String>();
    }

    public static final Creator<MyObjects> CREATOR = new Creator<MyObjects>() {
        @Override
        public MyObjects[] newArray(int size) {
            return new MyObjects[size];
        }

        @Override
        public MyObjects createFromParcel(Parcel source) {
            return new MyObjects(source);
        }
    };
}

Sending Activity code:

MyObjects mObject = new MyObjects("name", 26, new ArrayList<String>()); // name, age, address list

// Passing MyObjects
Intent mIntent = new Intent(FromActivity.this, ToActivity.class);
mIntent.putExtra("UniqueKey", mObject);
startActivity(mIntent);

Receiving the object in the destination Activity:

// Getting MyObjects
Intent mIntent = getIntent();
MyObjects workorder = (MyObjects) mIntent.getParcelableExtra("UniqueKey");

You can pass an ArrayList of Parcelable objects as below:

// Array of MyObjects
ArrayList<MyObjects> mUsers;

// Passing MyObjects list
Intent mIntent = new Intent(FromActivity.this, ToActivity.class);
mIntent.putParcelableArrayListExtra("UniqueKey", mUsers);
startActivity(mIntent);

// Getting MyObjects list
Intent mIntent = getIntent();
ArrayList<MyObjects> mUsers = mIntent.getParcelableArrayList("UniqueKey");

Note: There are Android Studio plugins available to generate the Parcelable code.

Serializable

Sending Activity code:

Product product = new Product();
Bundle bundle = new Bundle();
bundle.putSerializable("product", product);
Intent cartIntent = new Intent(mContext, ShowCartActivity.class);
cartIntent.putExtras(bundle);
mContext.startActivity(cartIntent);

Receiving the object in the destination Activity:

protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    Bundle bundle = this.getIntent().getExtras();
    Product product = null;
    if (bundle != null) {
        product = (Product) bundle.getSerializable("product");
    }
}

Passing an ArrayList of Serializable objects works the same as passing a single object. The custom class must implement the Serializable interface.

Section 41.13: Open Google map with specified latitude, longitude

You can pass latitude and longitude from your app to Google Maps using an Intent:

String uri = String.format(Locale.ENGLISH, "http://maps.google.com/maps?q=loc:%f,%f",
        28.43242324, 77.8977673);
Intent intent = new Intent(Intent.ACTION_VIEW, Uri.parse(uri));
startActivity(intent);
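A commonly used alternative (an assumption here, not part of the original example) is the geo: URI scheme, which any installed maps app can handle:

Uri geoUri = Uri.parse("geo:28.43242324,77.8977673?q=28.43242324,77.8977673");
Intent mapIntent = new Intent(Intent.ACTION_VIEW, geoUri);
if (mapIntent.resolveActivity(getPackageManager()) != null) {
    startActivity(mapIntent);
}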
Section 41.14: Passing different data through Intent in Activity

1. Passing integer data:

SenderActivity

Intent myIntent = new Intent(SenderActivity.this, ReceiverActivity.class);
myIntent.putExtra("intVariableName", intValue);
startActivity(myIntent);

ReceiverActivity

Intent mIntent = getIntent();
int intValue = mIntent.getIntExtra("intVariableName", 0); // set 0 as the default value if no value for intVariableName found

2. Passing double data:

SenderActivity

Intent myIntent = new Intent(SenderActivity.this, ReceiverActivity.class);
myIntent.putExtra("doubleVariableName", doubleValue);
startActivity(myIntent);

ReceiverActivity

Intent mIntent = getIntent();
double doubleValue = mIntent.getDoubleExtra("doubleVariableName", 0.00); // set 0.00 as the default value if no value for doubleVariableName found

3. Passing String data:

SenderActivity

Intent myIntent = new Intent(SenderActivity.this, ReceiverActivity.class);
myIntent.putExtra("stringVariableName", stringValue);
startActivity(myIntent);

ReceiverActivity

Intent mIntent = getIntent();
String stringValue = mIntent.getExtras().getString("stringVariableName");

or

Intent mIntent = getIntent();
String stringValue = mIntent.getStringExtra("stringVariableName");

4. Passing ArrayList data:

SenderActivity

Intent myIntent = new Intent(SenderActivity.this, ReceiverActivity.class);
myIntent.putStringArrayListExtra("arrayListVariableName", arrayList);
startActivity(myIntent);

ReceiverActivity

Intent mIntent = getIntent();
arrayList = mIntent.getStringArrayListExtra("arrayListVariableName");

5. Passing Object data:

SenderActivity

Intent myIntent = new Intent(SenderActivity.this, ReceiverActivity.class);
myIntent.putExtra("ObjectVariableName", yourObject);
startActivity(myIntent);

ReceiverActivity

Intent mIntent = getIntent();
yourObj = mIntent.getSerializableExtra("ObjectVariableName");

Note: Keep in mind that your custom class must implement the Serializable interface.

6. Passing HashMap<String, String> data:

SenderActivity

HashMap<String, String> hashMap;
Intent mIntent = new Intent(SenderActivity.this, ReceiverActivity.class);
mIntent.putExtra("hashMap", hashMap);
startActivity(mIntent);

ReceiverActivity

Intent mIntent = getIntent();
HashMap<String, String> hashMap = (HashMap<String, String>) mIntent.getSerializableExtra("hashMap");

7. Passing Bitmap data:

SenderActivity

Intent myIntent = new Intent(SenderActivity.this, ReceiverActivity.class);
myIntent.putExtra("image", bitmap);
startActivity(myIntent);

ReceiverActivity

Intent mIntent = getIntent();
Bitmap bitmap = mIntent.getParcelableExtra("image");

Section 41.15: Share intent

Share simple information with different apps:

Intent sendIntent = new Intent();
sendIntent.setAction(Intent.ACTION_SEND);
sendIntent.putExtra(Intent.EXTRA_TEXT, "This is my text to send.");
sendIntent.setType("text/plain");
startActivity(Intent.createChooser(sendIntent, getResources().getText(R.string.send_to)));

Share an image with different apps:

Intent shareIntent = new Intent();
shareIntent.setAction(Intent.ACTION_SEND);
shareIntent.putExtra(Intent.EXTRA_STREAM, uriToImage);
shareIntent.setType("image/jpeg");
startActivity(Intent.createChooser(shareIntent, getResources().getText(R.string.send_to)));

Section 41.16: Showing a File Chooser and Reading the Result

Starting a File Chooser Activity

public void showFileChooser() {
    Intent intent = new Intent(Intent.ACTION_GET_CONTENT);

    // Update with mime types
    intent.setType("*/*");

    // Update with additional mime types here using a String[].
    intent.putExtra(Intent.EXTRA_MIME_TYPES, mimeTypes);

    // Only pick openable and local files. Theoretically we could pull files from google drive
    // or other applications that have networked files, but that's unnecessary for this example.
Section 41.15: Share intent

Share simple information with different apps:

Intent sendIntent = new Intent();
sendIntent.setAction(Intent.ACTION_SEND);
sendIntent.putExtra(Intent.EXTRA_TEXT, "This is my text to send.");
sendIntent.setType("text/plain");
startActivity(Intent.createChooser(sendIntent, getResources().getText(R.string.send_to)));

Share an image with different apps:

Intent shareIntent = new Intent();
shareIntent.setAction(Intent.ACTION_SEND);
shareIntent.putExtra(Intent.EXTRA_STREAM, uriToImage);
shareIntent.setType("image/jpeg");
startActivity(Intent.createChooser(shareIntent, getResources().getText(R.string.send_to)));

Section 41.16: Showing a File Chooser and Reading the Result

Starting a File Chooser Activity

public void showFileChooser() {
    Intent intent = new Intent(Intent.ACTION_GET_CONTENT);

    // Update with mime types
    intent.setType("*/*");

    // Update with additional mime types here using a String[].
    String[] mimeTypes = {"image/*", "application/pdf"};
    intent.putExtra(Intent.EXTRA_MIME_TYPES, mimeTypes);

    // Only pick openable and local files. Theoretically we could pull files from google drive
    // or other applications that have networked files, but that's unnecessary for this example.
    intent.addCategory(Intent.CATEGORY_OPENABLE);
    intent.putExtra(Intent.EXTRA_LOCAL_ONLY, true);

    // REQUEST_CODE = <some-integer>
    startActivityForResult(intent, REQUEST_CODE);
}

Reading the Result

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    // If the user doesn't pick a file just return
    if (requestCode != REQUEST_CODE || resultCode != RESULT_OK) {
        return;
    }

    // Import the file
    importFile(data.getData());
}

public void importFile(Uri uri) {
    try {
        String fileName = getFileName(uri);

        // The temp file could be whatever you want
        File tempFile = new File(getCacheDir(), fileName);
        File fileCopy = copyToTempFile(uri, tempFile);

        // Done!
    } catch (IOException e) {
        e.printStackTrace();
    }
}

/**
 * Obtains the file name for a URI using content resolvers. Taken from the following link
 * https://developer.android.com/training/secure-file-sharing/retrieve-info.html#RetrieveFileInfo
 *
 * @param uri a uri to query
 * @return the file name with no path
 * @throws IllegalArgumentException if the query is null, empty, or the column doesn't exist
 */
private String getFileName(Uri uri) throws IllegalArgumentException {
    // Obtain a cursor with information regarding this uri
    Cursor cursor = getContentResolver().query(uri, null, null, null, null);

    if (cursor == null || cursor.getCount() <= 0) {
        if (cursor != null) {
            cursor.close();
        }
        throw new IllegalArgumentException("Can't obtain file name, cursor is empty");
    }

    cursor.moveToFirst();
    String fileName = cursor.getString(cursor.getColumnIndexOrThrow(OpenableColumns.DISPLAY_NAME));
    cursor.close();

    return fileName;
}

/**
 * Copies a uri reference to a temporary file
 *
 * @param uri      the uri used as the input stream
 * @param tempFile the file used as an output stream
 * @return the input tempFile for convenience
 * @throws IOException if an error occurs
 */
private File copyToTempFile(Uri uri, File tempFile) throws IOException {
    // Obtain an input stream from the uri
    InputStream inputStream = getContentResolver().openInputStream(uri);

    if (inputStream == null) {
        throw new IOException("Unable to obtain input stream from URI");
    }

    // Copy the stream to the temp file
    FileUtils.copyInputStreamToFile(inputStream, tempFile);

    return tempFile;
}

Section 41.17: Sharing Multiple Files through Intent

The String list passed as a parameter to the share() method contains the paths of all the files you want to share. It loops through the paths, converts them to Uris, and starts the Activity which can accept files of this type.

public static void share(AppCompatActivity context, List<String> paths) {
    if (paths == null || paths.size() == 0) {
        return;
    }
    ArrayList<Uri> uris = new ArrayList<>();
    Intent intent = new Intent();
    intent.setAction(android.content.Intent.ACTION_SEND_MULTIPLE);
    intent.setType("*/*");
    for (String path : paths) {
        File file = new File(path);
        uris.add(Uri.fromFile(file));
    }
    intent.putParcelableArrayListExtra(Intent.EXTRA_STREAM, uris);
    context.startActivity(intent);
}

Section 41.18: Start Unbound Service using an Intent

A Service is a component which runs in the background (on the UI thread) without direct interaction with the user. An unbound Service is just started; it is not bound to the lifecycle of any Activity.
To start a Service you can do as shown in the example below:

// This Intent will be used to start the service
Intent i = new Intent(context, ServiceName.class);
// potentially add data to the intent extras
i.putExtra("KEY1", "Value to be used by the service");
context.startService(i);

You can use any extras from the intent by overriding onStartCommand():

public class MyService extends Service {

    public MyService() {
    }

    @Override
    public int onStartCommand(Intent intent, int flags, int startId) {
        if (intent != null) {
            Bundle extras = intent.getExtras();
            String key1 = extras.getString("KEY1", "");
            if (key1.equals("Value to be used by the service")) {
                //do something
            }
        }
        return START_STICKY;
    }

    @Nullable
    @Override
    public IBinder onBind(Intent intent) {
        return null;
    }
}

Section 41.19: Getting a result from Activity to Fragment

Like getting a result from another Activity, you need to call the Fragment's method startActivityForResult(Intent intent, int requestCode). Note that you should not call getActivity().startActivityForResult(), as this would take the result back to the Fragment's parent Activity.

Receiving the result can be done using the Fragment's method onActivityResult(). You need to make sure that the Fragment's parent Activity also overrides onActivityResult() and calls its super implementation.

In the following example ActivityOne contains FragmentOne, which will start ActivityTwo and expect a result from it.

ActivityOne

public class ActivityOne extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_one);
    }

    // You must override this method as the second Activity will always send its results to
    // this Activity and then to the Fragment
    @Override
    public void onActivityResult(int requestCode, int resultCode, Intent data) {
        super.onActivityResult(requestCode, resultCode, data);
    }
}

activity_one.xml

<fragment android:name="com.example.FragmentOne"
    android:id="@+id/fragment_one"
    android:layout_width="match_parent"
    android:layout_height="match_parent" />

FragmentOne

public class FragmentOne extends Fragment {

    public static final int REQUEST_CODE = 11;
    public static final int RESULT_CODE = 12;
    public static final String EXTRA_KEY_TEST = "testKey";

    // Initializing and starting the second Activity
    private void startSecondActivity() {
        Intent intent = new Intent(getActivity(), ActivityTwo.class);
        startActivityForResult(intent, REQUEST_CODE);
    }

    @Override
    public void onActivityResult(int requestCode, int resultCode, Intent data) {
        super.onActivityResult(requestCode, resultCode, data);
        if (requestCode == REQUEST_CODE && resultCode == RESULT_CODE) {
            String testResult = data.getStringExtra(EXTRA_KEY_TEST);
            // TODO: Do something with your extra data
        }
    }
}

ActivityTwo

public class ActivityTwo extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_two);
    }

    private void closeActivity() {
        Intent intent = new Intent();
        intent.putExtra(FragmentOne.EXTRA_KEY_TEST, "Testing passing data back to ActivityOne");
        setResult(FragmentOne.RESULT_CODE, intent); // You can also send a result without any data using setResult(int resultCode)
        finish();
    }
}

Chapter 42: Fragments

Introduction about Fragments and their intercommunication mechanism
Section 42.1: Pass data from Activity to Fragment using Bundle

All fragments should have an empty constructor (i.e. a constructor method having no input arguments). Therefore, in order to pass your data to the Fragment being created, you should use the setArguments() method. This method takes a Bundle, in which you store your data, and attaches it to the Fragment's arguments. The Bundle can then be retrieved in the onCreate() and onCreateView() callbacks of the Fragment.

Activity:

Bundle bundle = new Bundle();
String myMessage = "Stack Overflow is cool!";
bundle.putString("message", myMessage);

FragmentClass fragInfo = new FragmentClass();
fragInfo.setArguments(bundle);

FragmentTransaction transaction = getSupportFragmentManager().beginTransaction();
transaction.replace(R.id.fragment_single, fragInfo);
transaction.commit();

Fragment:

@Override
public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) {
    String myValue = this.getArguments().getString("message");
    ...
}

Section 42.2: The newInstance() pattern

Although it is possible to create a fragment constructor with parameters, Android internally calls the zero-argument constructor when recreating fragments (for example, if they are being restored after being killed for Android's own reasons). For this reason, it is not advisable to rely on a constructor that has parameters.

To ensure that your expected fragment arguments are always present you can use a static newInstance() method to create the fragment, and put whatever parameters you want into a bundle that will be available when creating a new instance.

import android.os.Bundle;
import android.support.v4.app.Fragment;

public class MyFragment extends Fragment {
    // Our identifier for obtaining the name from arguments
    private static final String NAME_ARG = "name";

    private String mName;

    // Required
    public MyFragment() {}

    // The static factory method. This is the only way that you should
    // instantiate the fragment yourself.
Note that when the system destroys the fragment and re-creates it later, it will automatically restore its state - but you must provide it with an onSaveInstanceState(Bundle) implementation. Section 42.3: Navigation between fragments using backstack and static fabric pattern First of all, we need to add our rst Fragment at the beginning, we should do it in the onCreate() method of our Activity: if (null == savedInstanceState) { getSupportFragmentManager().beginTransaction() .addToBackStack("fragmentA") .replace(R.id.container, FragmentA.newInstance(), "fragmentA") .commit(); } Next, we need to manage our backstack. The easiest way is using a function added in our activity that is used for all FragmentTransactions. GoalKicker.com Android Notes for Professionals 325 public void replaceFragment(Fragment fragment, String tag) { //Get current fragment placed in container Fragment currentFragment = getSupportFragmentManager().findFragmentById(R.id.container); //Prevent adding same fragment on top if (currentFragment.getClass() == fragment.getClass()) { return; } //If fragment is already on stack, we can pop back stack to prevent stack infinite growth if (getSupportFragmentManager().findFragmentByTag(tag) != null) { getSupportFragmentManager().popBackStack(tag, FragmentManager.POP_BACK_STACK_INCLUSIVE); } //Otherwise, just replace fragment getSupportFragmentManager() .beginTransaction() .addToBackStack(tag) .replace(R.id.container, fragment, tag) .commit(); } Finally, we should override onBackPressed() to exit the application when going back from the last Fragment available in the backstack. @Override public void onBackPressed() { int fragmentsInStack = getSupportFragmentManager().getBackStackEntryCount(); if (fragmentsInStack > 1) { // If we have more than one fragment, pop back stack getSupportFragmentManager().popBackStack(); } else if (fragmentsInStack == 1) { // Finish activity, if only one fragment left, to prevent leaving empty screen finish(); } else { super.onBackPressed(); } } Execution in activity: replaceFragment(FragmentB.newInstance(), "fragmentB"); Execution outside activity (assuming MainActivity is our activity): ((MainActivity) getActivity()).replaceFragment(FragmentB.newInstance(), "fragmentB"); Section 42.4: Sending events back to an activity with callback interface If you need to send events from fragment to activity, one of the possible solutions is to dene callback interface and require that the host activity implement it. 
Example: send a callback to an activity when the fragment's button is clicked.

First of all, define the callback interface:

public interface SampleCallback {
    void onButtonClicked();
}

The next step is to assign this callback in the fragment:

public final class SampleFragment extends Fragment {

    private SampleCallback callback;

    @Override
    public void onAttach(Context context) {
        super.onAttach(context);
        if (context instanceof SampleCallback) {
            callback = (SampleCallback) context;
        } else {
            throw new RuntimeException(context.toString()
                    + " must implement SampleCallback");
        }
    }

    @Override
    public void onDetach() {
        super.onDetach();
        callback = null;
    }

    @Override
    public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) {
        final View view = inflater.inflate(R.layout.sample, container, false);
        // Add button's click listener
        view.findViewById(R.id.actionButton).setOnClickListener(new View.OnClickListener() {
            public void onClick(View v) {
                callback.onButtonClicked(); // Invoke callback here
            }
        });
        return view;
    }
}

And finally, implement the callback in the activity:

public final class SampleActivity extends Activity implements SampleCallback {

    // ... Skipped code with setting content view and presenting the fragment

    @Override
    public void onButtonClicked() {
        // Invoked when fragment's button has been clicked
    }
}

Section 42.5: Animate the transition between fragments

To animate the transition between fragments, or to animate the process of showing or hiding a fragment, you use the FragmentManager to create a FragmentTransaction.

For a single FragmentTransaction, there are two different ways to perform animations: you can use a standard animation or you can supply your own custom animations.

Standard animations are specified by calling FragmentTransaction.setTransition(int transit), and using one of the pre-defined constants available in the FragmentTransaction class. At the time of writing, these constants are:

FragmentTransaction.TRANSIT_NONE
FragmentTransaction.TRANSIT_FRAGMENT_OPEN
FragmentTransaction.TRANSIT_FRAGMENT_CLOSE
FragmentTransaction.TRANSIT_FRAGMENT_FADE

The complete transaction might look something like this:

getSupportFragmentManager()
    .beginTransaction()
    .setTransition(FragmentTransaction.TRANSIT_FRAGMENT_FADE)
    .replace(R.id.contents, new MyFragment(), "MyFragmentTag")
    .commit();

Custom animations are specified by calling either FragmentTransaction.setCustomAnimations(int enter, int exit) or FragmentTransaction.setCustomAnimations(int enter, int exit, int popEnter, int popExit).

The enter and exit animations will be played for FragmentTransactions that do not involve popping fragments off of the back stack. The popEnter and popExit animations will be played when popping a fragment off of the back stack.

The following code shows how you would replace a fragment by sliding out one fragment and sliding the other one in its place.

getSupportFragmentManager()
    .beginTransaction()
    .setCustomAnimations(R.anim.slide_in_left, R.anim.slide_out_right)
    .replace(R.id.contents, new MyFragment(), "MyFragmentTag")
    .commit();

The XML animation definitions use the objectAnimator tag.
An example of slide_in_left.xml might look something like this:

<?xml version="1.0" encoding="utf-8"?>
<set>
    <objectAnimator xmlns:android="http://schemas.android.com/apk/res/android"
        android:propertyName="x"
        android:valueType="floatType"
        android:valueFrom="-1280"
        android:valueTo="0"
        android:duration="500"/>
</set>

Section 42.6: Communication between Fragments

All communications between Fragments must go via an Activity. Fragments CANNOT communicate with each other without an Activity.

Additional Resources:
How to implement OnFragmentInteractionListener
Android | Communicating With Other Fragments

In this sample, we have a MainActivity that hosts two fragments, SenderFragment and ReceiverFragment, for sending and receiving a message (a simple String in this case) respectively.

A Button in the SenderFragment initiates the process of sending the message. A TextView in the ReceiverFragment is updated when the message is received by it.

Following is the snippet for the MainActivity with comments explaining the important lines of code:

// Our MainActivity implements the interface defined by the SenderFragment. This enables
// communication from the fragment to the activity
public class MainActivity extends AppCompatActivity implements SenderFragment.SendMessageListener {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
    }

    /**
     * This method is called when we click on the button in the SenderFragment
     * @param message The message sent by the SenderFragment
     */
    @Override
    public void onSendMessage(String message) {
        // Find our ReceiverFragment using the SupportFragmentManager and the fragment's id
        ReceiverFragment receiverFragment = (ReceiverFragment)
                getSupportFragmentManager().findFragmentById(R.id.fragment_receiver);

        // Make sure that such a fragment exists
        if (receiverFragment != null) {
            // Send this message to the ReceiverFragment by calling its public method
            receiverFragment.showMessage(message);
        }
    }
}

The layout file for the MainActivity hosts two fragments inside a LinearLayout:

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:id="@+id/activity_main"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical"
    android:paddingBottom="@dimen/activity_vertical_margin"
    android:paddingLeft="@dimen/activity_horizontal_margin"
    android:paddingRight="@dimen/activity_horizontal_margin"
    android:paddingTop="@dimen/activity_vertical_margin"
    tools:context="com.naru.fragmentcommunication.MainActivity">

    <fragment
        android:id="@+id/fragment_sender"
        android:name="com.naru.fragmentcommunication.SenderFragment"
        android:layout_width="match_parent"
        android:layout_height="0dp"
        android:layout_weight="1"
        tools:layout="@layout/fragment_sender" />

    <fragment
        android:id="@+id/fragment_receiver"
        android:name="com.naru.fragmentcommunication.ReceiverFragment"
        android:layout_width="match_parent"
        android:layout_height="0dp"
        android:layout_weight="1"
        tools:layout="@layout/fragment_receiver" />
</LinearLayout>

The SenderFragment exposes an interface SendMessageListener that helps the MainActivity know when the Button in the SenderFragment was clicked.
Following is the code snippet for the SenderFragment explaining the important lines of code:

public class SenderFragment extends Fragment {

    private SendMessageListener commander;

    /**
     * This interface is created to communicate between the activity and the fragment. Any activity
     * which implements this interface will be able to receive the message that is sent by this
     * fragment.
     */
    public interface SendMessageListener {
        void onSendMessage(String message);
    }

    /**
     * API LEVEL >= 23
     * <p>
     * This method is called when the fragment is attached to the activity. This method here will
     * help us to initialize our reference variable, 'commander', for our interface
     * 'SendMessageListener'
     *
     * @param context
     */
    @Override
    public void onAttach(Context context) {
        super.onAttach(context);
        // Try to cast the context to our interface SendMessageListener i.e. check whether the
        // activity implements the SendMessageListener. If not a ClassCastException is thrown.
        try {
            commander = (SendMessageListener) context;
        } catch (ClassCastException e) {
            throw new ClassCastException(context.toString()
                    + " must implement the SendMessageListener interface");
        }
    }

    /**
     * API LEVEL < 23
     * <p>
     * This method is called when the fragment is attached to the activity. This method here will
     * help us to initialize our reference variable, 'commander', for our interface
     * 'SendMessageListener'
     *
     * @param activity
     */
    @Override
    public void onAttach(Activity activity) {
        super.onAttach(activity);
        // Try to cast the activity to our interface SendMessageListener i.e. check whether the
        // activity implements the SendMessageListener. If not a ClassCastException is thrown.
        try {
            commander = (SendMessageListener) activity;
        } catch (ClassCastException e) {
            throw new ClassCastException(activity.toString()
                    + " must implement the SendMessageListener interface");
        }
    }

    @Nullable
    @Override
    public View onCreateView(LayoutInflater inflater, @Nullable ViewGroup container,
                             @Nullable Bundle savedInstanceState) {
        // Inflate view for the sender fragment.
        View view = inflater.inflate(R.layout.fragment_sender, container, false);

        // Initialize button and a click listener on it
        Button send = (Button) view.findViewById(R.id.bSend);
        send.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                // Sanity check whether we were able to properly initialize our interface reference
                if (commander != null) {
                    // Call our interface method. This enables us to call the implemented method
                    // in the activity, from where we can send the message to the ReceiverFragment.
                    commander.onSendMessage("HELLO FROM SENDER FRAGMENT!");
                }
            }
        });
        return view;
    }
}

The layout file for the SenderFragment:

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:gravity="center"
    android:orientation="vertical">

    <Button
        android:id="@+id/bSend"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="SEND"
        android:layout_gravity="center_horizontal" />
</LinearLayout>

The ReceiverFragment is simple and exposes a public method to update its TextView.
When the MainActivity receives the message from the SenderFragment, it calls this public method of the ReceiverFragment.

Following is the code snippet for the ReceiverFragment with comments explaining the important lines of code:

public class ReceiverFragment extends Fragment {

    TextView tvMessage;

    @Nullable
    @Override
    public View onCreateView(LayoutInflater inflater, @Nullable ViewGroup container,
                             @Nullable Bundle savedInstanceState) {
        // Inflate view for the receiver fragment.
        View view = inflater.inflate(R.layout.fragment_receiver, container, false);

        // Initialize the TextView
        tvMessage = (TextView) view.findViewById(R.id.tvReceivedMessage);

        return view;
    }

    /**
     * Method that is called by the MainActivity when it receives a message from the SenderFragment.
     * This method helps update the text in the TextView to the message sent by the SenderFragment.
     * @param message Message sent by the SenderFragment via the MainActivity.
     */
    public void showMessage(String message) {
        tvMessage.setText(message);
    }
}

The layout file for the ReceiverFragment:

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:gravity="center"
    android:orientation="vertical">

    <TextView
        android:id="@+id/tvReceivedMessage"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Waiting for message!" />
</LinearLayout>

Chapter 43: Button

Section 43.1: Using the same click event for one or more Views in the XML

When we create any View in a layout, we can use the android:onClick attribute to reference a method in the associated activity or fragment to handle the click events.

XML Layout

<Button android:id="@+id/button"
    ...
    // onClick should reference the method in your activity or fragment
    android:onClick="doSomething" />

// Note that this works with any class which is a subclass of View, not just Button
<ImageView android:id="@+id/image"
    ...
    android:onClick="doSomething" />

Activity/fragment code

In your code, create the method you named, where v will be the view that was touched, and do something for each view that calls this method.

public void doSomething(View v) {
    switch(v.getId()) {
        case R.id.button:
            // Button was clicked, do something.
            break;
        case R.id.image:
            // Image was clicked, do something else.
            break;
    }
}

If you want, you can also use a different method for each View (in this case, of course, you don't have to check for the ID).

Section 43.2: Defining an external Listener

When should I use it?

When the code inside an inline listener is too big and your method/class becomes ugly and hard to read
When you want to perform the same action in various elements (views) of your app

To achieve this you need to create a class implementing one of the listeners in the View API.
For example, to show help when any element is long-clicked:

public class HelpLongClickListener implements View.OnLongClickListener {

    public HelpLongClickListener() {
    }

    @Override
    public boolean onLongClick(View v) {
        // show help toast or popup
        return true;
    }
}

Then you just need to have an attribute or variable in your Activity to use it:

HelpLongClickListener helpListener = new HelpLongClickListener(...);

button1.setOnLongClickListener(helpListener);
button2.setOnLongClickListener(helpListener);
label.setOnLongClickListener(helpListener);

NOTE: defining listeners in a separate class has one disadvantage: they cannot access class fields directly, so you need to pass data (context, view) through the constructor unless you make the attributes public or define getters.

Section 43.3: inline onClickListener

Say we have a button (we can create it programmatically, or bind it from a view using findViewById(), etc...):

Button btnOk = (...)

Now, create an anonymous class and set it inline:

btnOk.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View v) {
        // Do stuff here...
    }
});

Section 43.4: Customizing Button style

There are many possible ways of customizing the look of a Button. This example presents several options:

Option 0: Use ThemeOverlay (currently the easiest/quickest way)

Create a new style in your styles file:

styles.xml

<resources>
    <style name="mybutton" parent="ThemeOverlay.AppCompat.Light">
        <!-- customize colorButtonNormal for the disabled color -->
        <item name="colorButtonNormal">@color/colorbuttonnormal</item>
        <!-- customize colorAccent for the enabled color -->
        <item name="colorAccent">@color/coloraccent</item>
    </style>
</resources>
Then in the layout where you place your button (e.g. MainActivity):

activity_main.xml

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:layout_gravity="center_horizontal"
    android:gravity="center_horizontal"
    android:orientation="vertical"
    android:paddingBottom="@dimen/activity_vertical_margin"
    android:paddingLeft="@dimen/activity_horizontal_margin"
    android:paddingRight="@dimen/activity_horizontal_margin"
    android:paddingTop="@dimen/activity_vertical_margin"
    tools:context=".MainActivity">

    <Button
        android:id="@+id/mybutton"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Hello"
        android:theme="@style/mybutton"
        style="@style/Widget.AppCompat.Button.Colored"/>
</LinearLayout>

Option 1: Create your own button style

In values/styles.xml, create a new style for your button:

styles.xml

<resources>
    <style name="mybuttonstyle" parent="@android:style/Widget.Button">
        <item name="android:gravity">center_vertical|center_horizontal</item>
        <item name="android:textColor">#FFFFFFFF</item>
        <item name="android:shadowColor">#FF000000</item>
        <item name="android:shadowDx">0</item>
        <item name="android:shadowDy">-1</item>
        <item name="android:shadowRadius">0.2</item>
        <item name="android:textSize">16dip</item>
        <item name="android:textStyle">bold</item>
        <item name="android:background">@drawable/button</item>
    </style>
</resources>

Then in the layout where you place your button (e.g. MainActivity):

activity_main.xml

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:layout_gravity="center_horizontal"
    android:gravity="center_horizontal"
    android:orientation="vertical"
    android:paddingBottom="@dimen/activity_vertical_margin"
    android:paddingLeft="@dimen/activity_horizontal_margin"
    android:paddingRight="@dimen/activity_horizontal_margin"
    android:paddingTop="@dimen/activity_vertical_margin"
    tools:context=".MainActivity">

    <Button
        android:id="@+id/mybutton"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Hello"
        android:theme="@style/mybuttonstyle"/>
</LinearLayout>

Option 2: Assign a drawable for each of your button states

Create an XML file in the drawable folder called 'mybuttondrawable.xml' to define the drawable resource for each of your button states:

drawable/mybuttondrawable.xml

<selector xmlns:android="http://schemas.android.com/apk/res/android">
    <item android:state_enabled="false"
        android:drawable="@drawable/mybutton_disabled" />
    <item android:state_pressed="true"
        android:state_enabled="true"
        android:drawable="@drawable/mybutton_pressed" />
    <item android:state_focused="true"
        android:state_enabled="true"
        android:drawable="@drawable/mybutton_focused" />
    <item android:state_enabled="true"
        android:drawable="@drawable/mybutton_enabled" />
</selector>

Each of those drawables may be images (e.g. mybutton_disabled.png) or XML files defined by you and stored in the drawables folder. For instance:

drawable/mybutton_disabled.xml

<?xml version="1.0" encoding="utf-8"?>
<shape xmlns:android="http://schemas.android.com/apk/res/android" android:shape="rectangle">
    <gradient
        android:startColor="#F2F2F2"
        android:centerColor="#A4A4A4"
        android:endColor="#F2F2F2"
        android:angle="90"/>
    <padding
        android:left="7dp"
        android:top="7dp"
        android:right="7dp"
        android:bottom="7dp" />
    <stroke
        android:width="2dip"
        android:color="#FFFFFF" />
    <corners
        android:radius="8dp" />
</shape>

Then in the layout where you place your button (e.g. MainActivity):

activity_main.xml

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:layout_gravity="center_horizontal"
    android:gravity="center_horizontal"
    android:orientation="vertical"
    android:paddingBottom="@dimen/activity_vertical_margin"
    android:paddingLeft="@dimen/activity_horizontal_margin"
    android:paddingRight="@dimen/activity_horizontal_margin"
    android:paddingTop="@dimen/activity_vertical_margin"
    tools:context=".MainActivity">

    <Button
        android:id="@+id/mybutton"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Hello"
        android:background="@drawable/mybuttondrawable"/>
</LinearLayout>

Option 3: Add your button style to your App theme

You can override the default android button style in the definition of your app theme (in values/styles.xml).
styles.xml

<resources>
    <style name="AppTheme" parent="android:Theme">
        <item name="colorPrimary">@color/colorPrimary</item>
        <item name="colorPrimaryDark">@color/colorPrimaryDark</item>
        <item name="colorAccent">@color/colorAccent</item>
        <item name="android:button">@style/mybutton</item>
    </style>

    <style name="mybutton" parent="android:style/Widget.Button">
        <item name="android:gravity">center_vertical|center_horizontal</item>
        <item name="android:textColor">#FFFFFFFF</item>
        <item name="android:shadowColor">#FF000000</item>
        <item name="android:shadowDx">0</item>
        <item name="android:shadowDy">-1</item>
        <item name="android:shadowRadius">0.2</item>
        <item name="android:textSize">16dip</item>
        <item name="android:textStyle">bold</item>
        <item name="android:background">@drawable/anydrawable</item>
    </style>
</resources>

Option 4: Overlay a color on the default button style programmatically

Just find your button in your activity and apply a color filter:

Button mybutton = (Button) findViewById(R.id.mybutton);
mybutton.getBackground().setColorFilter(anycolor, PorterDuff.Mode.MULTIPLY);

You can check the different blending modes in the PorterDuff.Mode documentation.

Section 43.5: Custom Click Listener to prevent multiple fast clicks

In order to prevent a button from firing multiple times within a short period of time (let's say 2 clicks within 1 second, which may cause serious problems if the flow is not controlled), one can implement a custom SingleClickListener.

This ClickListener sets a specific time interval as a threshold (for instance, 1000 ms). When the button is clicked, a check will be run to see if the trigger was executed within the defined interval; if not, it will trigger.

public abstract class SingleClickListener implements View.OnClickListener {

    protected int defaultInterval;
    private long lastTimeClicked = 0;

    public SingleClickListener() {
        this(1000);
    }

    public SingleClickListener(int minInterval) {
        this.defaultInterval = minInterval;
    }

    @Override
    public void onClick(View v) {
        if (SystemClock.elapsedRealtime() - lastTimeClicked < defaultInterval) {
            return;
        }
        lastTimeClicked = SystemClock.elapsedRealtime();
        performClick(v);
    }

    public abstract void performClick(View v);
}

And in the class, the SingleClickListener is associated with the Button at stake:

myButton = (Button) findViewById(R.id.my_button);
myButton.setOnClickListener(new SingleClickListener() {
    @Override
    public void performClick(View view) {
        // do stuff
    }
});

Section 43.6: Using the layout to define a click action

When we create a button in a layout, we can use the android:onClick attribute to reference a method in code to handle clicks.

Button

<Button
    android:layout_width="120dp"
    android:layout_height="wrap_content"
    android:text="Click me"
    android:onClick="handleClick" />

Then in your activity, create the handleClick method:

public void handleClick(View v) {
    // Do whatever.
}
Section 43.7: Listening to the long click events

To catch a long click and use it you need to provide an appropriate listener to the button:

View.OnLongClickListener listener = new View.OnLongClickListener() {
    public boolean onLongClick(View v) {
        Button clickedButton = (Button) v;
        String buttonText = clickedButton.getText().toString();
        Log.v(TAG, "button long pressed --> " + buttonText);
        return true;
    }
};

button.setOnLongClickListener(listener);

Chapter 44: Emulator

Section 44.1: Taking screenshots

If you want to take a screenshot from the Android Emulator (2.0), then you just need to press Ctrl + S or click on the camera icon on the side bar.

If you use an older version of the Android Emulator or you want to take a screenshot from a real device, then you need to click on the camera icon in the Android Monitor. Double check that you have selected the right device, because this is a common pitfall.

After taking a screenshot, you can optionally add the following decorations to it:

1. A device frame around the screenshot.
2. A drop shadow below the device frame.
3. A screen glare across device frame and screenshot.

Section 44.2: Simulate call

To simulate a phone call, press the 'Extended controls' button indicated by three dots, choose 'Phone' and select 'Call'. You can also optionally change the phone number.
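The same call can also be triggered from the command line through the emulator console — a sketch, assuming the default console port 5554 (recent emulators additionally require the auth token stored in ~/.emulator_console_auth_token):

telnet localhost 5554
auth <your-auth-token>
gsm call 5551234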
Section 44.3: Open the AVD Manager

Once the SDK is installed, you can open the AVD Manager from the command line using android avd. You can also access the AVD Manager from Android Studio using Tools > Android > AVD Manager, or by clicking on the AVD Manager icon in the toolbar.

Section 44.4: Resolving Errors while starting emulator

First of all, ensure that you've enabled 'Virtualization' in your BIOS setup.

Start the Android SDK Manager, select Extras, then select Intel Hardware Accelerated Execution Manager and wait until the download completes. If it still doesn't work, open your SDK folder and run /extras/intel/Hardware_Accelerated_Execution_Manager/IntelHAXM.exe. Follow the on-screen instructions to complete the installation.

On OS X you can run the installer without on-screen prompts like this:

/extras/intel/Hardware_Accelerated_Execution_Manager/HAXM\ installation

If your CPU does not support VT-x or SVM, you cannot use x86-based Android images. Please use ARM-based images instead.

After the installation has completed, confirm that the virtualization driver is operating correctly by opening a command prompt window and running the following command:

sc query intelhaxm

To run an x86-based emulator with VM acceleration: if you are running the emulator from the command line, just specify an x86-based AVD:

emulator -avd <avd_name>

If you follow all the steps mentioned above correctly, then you should be able to see your AVD with HAXM coming up normally.

Chapter 45: Service

A Service runs in the background to perform long-running operations or to perform work for remote processes. A service does not provide any user interface; it runs only in the background. For example, a service can play music in the background while the user is in a different app, or it might download data from the internet without blocking the user's interaction with the Android device.

Section 45.1: Lifecycle of a Service

The service lifecycle has the following callbacks:

onCreate(): Executed when the service is first created in order to set up the initial configurations you might need. This method is executed only if the service is not already running.

onStartCommand(): Executed every time startService() is invoked by another component, like an Activity or a BroadcastReceiver. When you use this method, the Service will run until you call stopSelf() or stopService(). Note that regardless of how many times you call onStartCommand(), the methods stopSelf() and stopService() must be invoked only once in order to stop the service.

onBind(): Executed when a component calls bindService() and returns an instance of IBinder, providing a communication channel to the Service. A call to bindService() will keep the service running as long as there are clients bound to it.

onDestroy(): Executed when the service is no longer in use and allows for disposal of resources that have been allocated.

It is important to note that during the lifecycle of a service other callbacks might be invoked, such as onConfigurationChanged() and onLowMemory().

https://developer.android.com/guide/components/services.html

Section 45.2: Defining the process of a service

The android:process field defines the name of the process where the service is to run. Normally, all components of an application run in the default process created for the application. However, a component can override the default with its own process attribute, allowing you to spread your application across multiple processes.

If the name assigned to this attribute begins with a colon (':'), the service will run in its own separate process.

<service
    android:name="com.example.appName"
    android:process=":externalProcess" />

If the process name begins with a lowercase character, the service will run in a global process of that name, provided that it has permission to do so. This allows components in different applications to share a process, reducing resource usage.

Section 45.3: Creating an unbound service

The first thing to do is to add the service to AndroidManifest.xml, inside the <application> tag:

<application ...>
    ...
    <service
        android:name=".RecordingService"
        <!-- The "enabled" attribute specifies whether or not the service can be instantiated
             by the system: "true" if it can be, and "false" if not. The default value is "true". -->
        android:enabled="true"
        <!-- The "exported" attribute specifies whether or not components of other applications
             can invoke the service or interact with it: "true" if they can, and "false" if not.
             When the value is "false", only components of the same application or applications
             with the same user ID can start the service or bind to it. -->
        android:exported="false" />
</application>

If you intend to manage your service class in a separate package (e.g. .AllServices.RecordingService), then you will need to specify where your service is located. So, in the above case we will modify:

android:name=".RecordingService"

to

android:name=".AllServices.RecordingService"

or the easiest way of doing so is to specify the full package name.
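For example, assuming the application package is com.yourapp (illustrative):

android:name="com.yourapp.AllServices.RecordingService"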
Then we create the actual service class:

public class RecordingService extends Service {
    private int NOTIFICATION = 1; // Unique identifier for our notification
    public static boolean isRunning = false;

    public static RecordingService instance = null;
    private NotificationManager notificationManager = null;

    @Override
    public IBinder onBind(Intent intent) {
        return null;
    }

    @Override
    public void onCreate() {
        instance = this;
        isRunning = true;

        notificationManager = (NotificationManager) getSystemService(NOTIFICATION_SERVICE);

        super.onCreate();
    }

    @Override
    public int onStartCommand(Intent intent, int flags, int startId) {
        // The PendingIntent to launch our activity if the user selects this notification
        PendingIntent contentIntent = PendingIntent.getActivity(this, 0,
                new Intent(this, MainActivity.class), 0);

        // Set the info for the views that show in the notification panel.
        Notification notification = new NotificationCompat.Builder(this)
                .setSmallIcon(R.mipmap.ic_launcher)   // the status icon
                .setTicker("Service running...")      // the status text
                .setWhen(System.currentTimeMillis())  // the time stamp
                .setContentTitle("My App")            // the label of the entry
                .setContentText("Service running...") // the content of the entry
                .setContentIntent(contentIntent)      // the intent to send when the entry is clicked
                .setOngoing(true)                     // make persistent (disable swipe-away)
                .build();

        // Start service in foreground mode
        startForeground(NOTIFICATION, notification);

        return START_STICKY;
    }

    @Override
    public void onDestroy() {
        isRunning = false;
        instance = null;

        notificationManager.cancel(NOTIFICATION); // Remove notification

        super.onDestroy();
    }

    public void doSomething() {
        Toast.makeText(getApplicationContext(), "Doing stuff from service...", Toast.LENGTH_SHORT).show();
    }
}

All this service does is show a notification when it's running, and it can display toasts when its doSomething() method is called. As you'll notice, it's implemented as a singleton, keeping track of its own instance - but without the usual static singleton factory method, because services are naturally singletons and are created by intents. The instance is useful to the outside to get a "handle" to the service when it's running.

Last, we need to start and stop the service from an activity:

public void startOrStopService() {
    if (RecordingService.isRunning) {
        // Stop service
        Intent intent = new Intent(this, RecordingService.class);
        stopService(intent);
    } else {
        // Start service
        Intent intent = new Intent(this, RecordingService.class);
        startService(intent);
    }
}

In this example, the service is started and stopped by the same method, depending on its current state.

We can also invoke the doSomething() method from our activity:

public void makeServiceDoSomething() {
    if (RecordingService.isRunning)
        RecordingService.instance.doSomething();
}

Section 45.4: Starting a Service

Starting a service is very easy; just call startService with an intent, from within an Activity:

Intent intent = new Intent(this, MyService.class); //substitute MyService with the name of your service
intent.putExtra(Intent.EXTRA_TEXT, "Some text"); //add any extra data to pass to the service

startService(intent); //Call startService to start the service.
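For completeness — a minimal sketch of the reverse operation, stopping a service started this way (MyService again stands in for your own service class):

// Stop the service from any component that has a Context
Intent intent = new Intent(this, MyService.class);
stopService(intent);

// Alternatively, the service can stop itself from inside by calling stopSelf()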
Section 45.5: Creating Bound Service with help of Binder

Create a class which extends the Service class, and in the overridden onBind method return your local binder instance:

public class LocalService extends Service {
    // Binder given to clients
    private final IBinder mBinder = new LocalBinder();

    /**
     * Class used for the client Binder. Because we know this service always
     * runs in the same process as its clients, we don't need to deal with IPC.
     */
    public class LocalBinder extends Binder {
        LocalService getService() {
            // Return this instance of LocalService so clients can call public methods
            return LocalService.this;
        }
    }

    @Override
    public IBinder onBind(Intent intent) {
        return mBinder;
    }
}

Then in your activity, bind to the service in the onStart callback using a ServiceConnection instance, and unbind from it in onStop:

public class BindingActivity extends Activity {
    LocalService mService;
    boolean mBound = false;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);
    }

    @Override
    protected void onStart() {
        super.onStart();
        // Bind to LocalService
        Intent intent = new Intent(this, LocalService.class);
        bindService(intent, mConnection, Context.BIND_AUTO_CREATE);
    }

    @Override
    protected void onStop() {
        super.onStop();
        // Unbind from the service
        if (mBound) {
            unbindService(mConnection);
            mBound = false;
        }
    }

    /** Defines callbacks for service binding, passed to bindService() */
    private ServiceConnection mConnection = new ServiceConnection() {
        @Override
        public void onServiceConnected(ComponentName className, IBinder service) {
            // We've bound to LocalService, cast the IBinder and get LocalService instance
            LocalBinder binder = (LocalBinder) service;
            mService = binder.getService();
            mBound = true;
        }

        @Override
        public void onServiceDisconnected(ComponentName arg0) {
            mBound = false;
        }
    };
}
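Once mBound is true, the point of the LocalBinder pattern is that the client can call the service's public methods directly — a sketch, where doSomethingPublic() is a hypothetical public method on LocalService:

if (mBound) {
    mService.doSomethingPublic(); // direct method call, no IPC involved
}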
Section 45.6: Creating Remote Service (via AIDL)

Describe your service access interface through an .aidl file:

// IRemoteService.aidl
package com.example.android;

// Declare any non-default types here with import statements

/** Example service interface */
interface IRemoteService {
    /** Request the process ID of this service, to do evil things with it. */
    int getPid();
}

Now, after building the application, the SDK tools will generate the appropriate .java file. This file will contain a Stub class which implements our aidl interface, and which we need to extend:

public class RemoteService extends Service {

    private final IRemoteService.Stub binder = new IRemoteService.Stub() {
        @Override
        public int getPid() throws RemoteException {
            return Process.myPid();
        }
    };

    @Nullable
    @Override
    public IBinder onBind(Intent intent) {
        return binder;
    }
}

Then in the activity:

public class MainActivity extends AppCompatActivity {

    private final ServiceConnection connection = new ServiceConnection() {
        @Override
        public void onServiceConnected(ComponentName componentName, IBinder iBinder) {
            IRemoteService service = IRemoteService.Stub.asInterface(iBinder);
            Toast.makeText(MainActivity.this, "Activity process: " + Process.myPid()
                    + ", Service process: " + getRemotePid(service), Toast.LENGTH_SHORT).show();
        }

        @Override
        public void onServiceDisconnected(ComponentName componentName) {}
    };

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
    }

    @Override
    protected void onStart() {
        super.onStart();
        Intent intent = new Intent(this, RemoteService.class);
        bindService(intent, connection, Context.BIND_AUTO_CREATE);
    }

    @Override
    protected void onStop() {
        super.onStop();
        unbindService(connection);
    }

    private int getRemotePid(IRemoteService service) {
        int result = -1;

        try {
            result = service.getPid();
        } catch (RemoteException e) {
            e.printStackTrace();
        }

        return result;
    }
}

Chapter 46: The Manifest File

The Manifest is an obligatory file named exactly "AndroidManifest.xml" and located in the app's root directory. It specifies the app name, icon, Java package name, version, declaration of Activities, Services, app permissions and other information.

Section 46.1: Declaring Components

The primary task of the manifest is to inform the system about the app's components. For example, a manifest file can declare an activity as follows:

<?xml version="1.0" encoding="utf-8"?>
<manifest ... >
    <application android:icon="@drawable/app_icon.png" ... >
        <activity android:name="com.example.project.ExampleActivity"
                  android:label="@string/example_label" ... >
        </activity>
        ...
    </application>
</manifest>

In the <application> element, the android:icon attribute points to resources for an icon that identifies the app.

In the <activity> element, the android:name attribute specifies the fully qualified class name of the Activity subclass, and the android:label attribute specifies a string to use as the user-visible label for the activity.

You must declare all app components this way:

<activity> elements for activities
<service> elements for services
<receiver> elements for broadcast receivers
<provider> elements for content providers

Activities, services, and content providers that you include in your source but do not declare in the manifest are not visible to the system and, consequently, can never run. However, broadcast receivers can be either declared in the manifest or created dynamically in code (as BroadcastReceiver objects) and registered with the system by calling registerReceiver().

For more about how to structure the manifest file for your app, see The AndroidManifest.xml File documentation.

Section 46.2: Declaring permissions in your manifest file

Any permission required by your application to access a protected part of the API or to interact with other applications must be declared in your AndroidManifest.xml file. This is done using the <uses-permission /> tag.
Syntax

<uses-permission
    android:name="string"
    android:maxSdkVersion="integer" />

android:name: This is the name of the required permission.

android:maxSdkVersion: The highest API level at which this permission should be granted to your app. Setting this attribute is optional and should only be set if the permission your app requires is no longer needed at a certain API level.

Sample AndroidManifest.xml:

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.android.samplepackage">

    <!-- request internet permission -->
    <uses-permission android:name="android.permission.INTERNET" />

    <!-- request camera permission -->
    <uses-permission android:name="android.permission.CAMERA"/>

    <!-- request permission to write to external storage -->
    <uses-permission
        android:name="android.permission.WRITE_EXTERNAL_STORAGE"
        android:maxSdkVersion="18" />

    <application>....</application>
</manifest>

* Also see the Permissions topic.

Chapter 47: Gradle for Android

Gradle is a JVM-based build system that enables developers to write high-level scripts that can be used to automate the process of compilation and application production. It is a flexible plugin-based system, which allows you to automate various aspects of the build process, including compiling and signing an APK, downloading and managing external dependencies, injecting fields into the AndroidManifest, or utilising specific SDK versions.
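As a quick illustration of what automating the build looks like in practice — a sketch using the Gradle wrapper script that Android Studio generates in the project root:

./gradlew tasks            # list the build tasks available in the project
./gradlew assembleDebug    # compile and package a debug APK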
The dependencies block species what external libraries (typically Android libraries, but Java libraries are also valid) you wish to include in your app. Gradle will automatically download these dependencies for you (if there is no local copy available), you just need to add similar compile lines when you wish to add another library. Let's look at one of the lines present here: compile 'com.android.support:design:25.3.1' This line basically says add a dependency on the Android support design library to my project. Gradle will ensure that the library is downloaded and present so that you can use it in your app, and its code will also be included in your app. If you're familiar with Maven, this syntax is the GroupId, a colon, ArtifactId, another colon, then the version of the dependency you wish to include, giving you full control over versioning. While it is possible to specify artifact versions using the plus (+) sign, best practice is to avoid doing so; it can lead to issues if the library gets updated with breaking changes without your knowledge, which would likely lead to crashes in your app. GoalKicker.com Android Notes for Professionals 357 You can add dierent kind of dependencies: local binary dependencies module dependencies remote dependencies A particular attention should be dedicated to the aar at dependencies. You can nd more details in this topic. Note about the -v7 in appcompat-v7 compile 'com.android.support:appcompat-v7:25.3.1' This simply means that this library (appcompat) is compatible with the Android API level 7 and forward. Note about the junit:junit:4.12 This is Testing dependency for Unit testing. Specifying dependencies specic to dierent build congurations You can specify that a dependency should only be used for a certain build conguration or you can dene dierent dependencies for the build types or the product avors (e.g., debug, test or release) by using debugCompile, testCompile or releaseCompile instead of the usual compile. This is helpful for keeping test- and debug- related dependencies out of your release build, which will keep your release APK as slim as possible and help to ensure that any debug information cannot be used to obtain internal information about your app. signingCong The signingConfig allows you to congure your Gradle to include keystore information and ensure that the APK built using these congurations are signed and ready for Play Store release. Here you can nd a dedicated topic. Note: It's not recommended though to keep the signing credentials inside your Gradle le. To remove the signing congurations, just omit the signingConfigs portion. You can specify them in dierent ways: storing in an external le storing them in setting environment variables. See this topic for more details : Sign APK without exposing keystore password. You can nd further information about Gradle for Android in the dedicated Gradle topic. Section 47.2: Dene and use Build Conguration Fields BuildCongField GoalKicker.com Android Notes for Professionals 358 Gradle allows buildConfigField lines to dene constants. These constants will be accessible at runtime as static elds of the BuildConfig class. This can be used to create avors by dening all elds within the defaultConfig block, then overriding them for individual build avors as needed. This example denes the build date and ags the build for production rather than test: android { ... defaultConfig { ... 
signingConfig

The signingConfig allows you to configure your Gradle to include keystore information and ensure that the APK built using these configurations is signed and ready for Play Store release. Here you can find a dedicated topic.

Note: It's not recommended though to keep the signing credentials inside your Gradle file. To remove the signing configurations, just omit the signingConfigs portion. You can specify them in different ways:

storing them in an external file
storing them in environment variables

See this topic for more details: Sign APK without exposing keystore password.

You can find further information about Gradle for Android in the dedicated Gradle topic.

Section 47.2: Define and use Build Configuration Fields

BuildConfigField

Gradle allows buildConfigField lines to define constants. These constants will be accessible at runtime as static fields of the BuildConfig class. This can be used to create flavors by defining all fields within the defaultConfig block, then overriding them for individual build flavors as needed.

This example defines the build date and flags the build for production rather than test:

android {
    ...
    defaultConfig {
        ...
        // defining the build date
        buildConfigField "long", "BUILD_DATE", System.currentTimeMillis() + "L"
        // define whether this build is a production build
        buildConfigField "boolean", "IS_PRODUCTION", "false"
        // note that to define a string you need to escape it
        buildConfigField "String", "API_KEY", "\"my_api_key\""
    }

    productFlavors {
        prod {
            // override the production flag for the flavor "prod"
            buildConfigField "boolean", "IS_PRODUCTION", "true"
            resValue 'string', 'app_name', 'My App Name'
        }
        dev {
            // inherit default fields
            resValue 'string', 'app_name', 'My App Name - Dev'
        }
    }
}

The automatically-generated <package_name>.BuildConfig.java in the gen folder contains the following fields based on the directives above:

public class BuildConfig {
    // ... other generated fields ...
    public static final long BUILD_DATE = 1469504547000L;
    public static final boolean IS_PRODUCTION = false;
    public static final String API_KEY = "my_api_key";
}

The defined fields can now be used within the app at runtime by accessing the generated BuildConfig class:

public void example() {
    // format the build date
    SimpleDateFormat dateFormat = new SimpleDateFormat("yyyy/MM/dd");
    String buildDate = dateFormat.format(new Date(BuildConfig.BUILD_DATE));
    Log.d("build date", buildDate);

    // do something depending on whether this is a production build
    if (BuildConfig.IS_PRODUCTION) {
        connectToProductionApiEndpoint();
    } else {
        connectToStagingApiEndpoint();
    }
}

ResValue

The resValue in the productFlavors creates a resource value. It can be any type of resource (string, dimen, color, etc.). This is similar to defining a resource in the appropriate file, e.g. defining a string in a strings.xml file. The advantage is that the one defined in Gradle can be modified based on your productFlavor/buildVariant. To access the value, write the same code as if you were accessing a resource from the resources file:

getResources().getString(R.string.app_name)

The important thing is that resources defined this way cannot modify existing resources defined in files. They can only create new resource values.

Some libraries (such as the Google Maps Android API) require an API key provided in the Manifest as a meta-data tag. If different keys are needed for debugging and production builds, specify a manifest placeholder filled in by Gradle.

In your AndroidManifest.xml file:

<meta-data
    android:name="com.google.android.geo.API_KEY"
    android:value="${MAPS_API_KEY}"/>

And then set the field accordingly in your build.gradle file:

android {
    defaultConfig {
        ...
        // Your development key
        manifestPlaceholders = [ MAPS_API_KEY: "AIza..." ]
    }
    productFlavors {
        prod {
            // Your production key
            manifestPlaceholders = [ MAPS_API_KEY: "AIza..." ]
        }
    }
}

The Android build system generates a number of fields automatically and places them in BuildConfig.java. These fields are:

Field            Description
DEBUG            a Boolean stating if the app is in debug or release mode
APPLICATION_ID   a String containing the ID of the application (e.g. com.example.app)
BUILD_TYPE       a String containing the build type of the application (usually either debug or release)
FLAVOR           a String containing the particular flavor of the build
VERSION_CODE     an int containing the version (build) number. This is the same as versionCode in build.gradle or versionCode in AndroidManifest.xml
VERSION_NAME     a String containing the version (build) name.
This is the same as versionName in build.gradle or versionName in AndroidManifest.xml.

In addition to the above, if you have defined multiple dimensions of flavor then each dimension will have its own value. For example, if you had two dimensions of flavor for color and size you will also have the following variables:

Field          Description
FLAVOR_color   a String containing the value for the 'color' flavor.
FLAVOR_size    a String containing the value for the 'size' flavor.

Section 47.3: Centralizing dependencies via "dependencies.gradle" file

When working with multi-module projects, it is helpful to centralize dependencies in a single location rather than having them spread across many build files, especially for common libraries such as the Android support libraries and the Firebase libraries.

One recommended way is to separate the Gradle build files, with one build.gradle per module, as well as one in the project root and another one for the dependencies, for example:

root
  +- gradleScript/
  |     dependencies.gradle
  +- module1/
  |     build.gradle
  +- module2/
  |     build.gradle
  +- build.gradle

Then, all of your dependencies can be located in gradleScript/dependencies.gradle:

ext {
    // Version
    supportVersion = '24.1.0'

    // Support Libraries dependencies
    supportDependencies = [
            design:            "com.android.support:design:${supportVersion}",
            recyclerView:      "com.android.support:recyclerview-v7:${supportVersion}",
            cardView:          "com.android.support:cardview-v7:${supportVersion}",
            appCompat:         "com.android.support:appcompat-v7:${supportVersion}",
            supportAnnotation: "com.android.support:support-annotations:${supportVersion}",
    ]

    firebaseVersion = '9.2.0';

    firebaseDependencies = [
            core:         "com.google.firebase:firebase-core:${firebaseVersion}",
            database:     "com.google.firebase:firebase-database:${firebaseVersion}",
            storage:      "com.google.firebase:firebase-storage:${firebaseVersion}",
            crash:        "com.google.firebase:firebase-crash:${firebaseVersion}",
            auth:         "com.google.firebase:firebase-auth:${firebaseVersion}",
            messaging:    "com.google.firebase:firebase-messaging:${firebaseVersion}",
            remoteConfig: "com.google.firebase:firebase-config:${firebaseVersion}",
            invites:      "com.google.firebase:firebase-invites:${firebaseVersion}",
            adMod:        "com.google.firebase:firebase-ads:${firebaseVersion}",
            appIndexing:  "com.google.android.gms:play-services-appindexing:${firebaseVersion}",
    ];
}

Which can then be applied from that file in the top-level build.gradle file like so:

// Load dependencies
apply from: 'gradleScript/dependencies.gradle'

and in the module1/build.gradle like so:

// Module build file
dependencies {
    // ...
    compile supportDependencies.appCompat
    compile supportDependencies.design
    compile firebaseDependencies.crash
}

Another approach

A less verbose approach for centralizing library dependency versions can be achieved by declaring the version number as a variable once, and using it everywhere.
In the workspace root build.gradle add this:

ext.v = [
    supportVersion: '24.1.1',
]

And in every module that uses the same library add the needed libraries:

compile "com.android.support:support-v4:${v.supportVersion}"
compile "com.android.support:recyclerview-v7:${v.supportVersion}"
compile "com.android.support:design:${v.supportVersion}"
compile "com.android.support:support-annotations:${v.supportVersion}"

Section 47.4: Sign APK without exposing keystore password

You can define the signing configuration to sign the APK in the build.gradle file using these properties:

storeFile: the keystore file
storePassword: the keystore password
keyAlias: a key alias name
keyPassword: a key alias password

In many cases you may need to keep this kind of info out of the build.gradle file.

Method A: Configure release signing using a keystore.properties file

It's possible to configure your app's build.gradle so that it will read your signing configuration information from a properties file like keystore.properties. Setting up signing like this is beneficial because:

Your signing configuration information is separate from your build.gradle file
You do not have to intervene during the signing process in order to provide passwords for your keystore file
You can easily exclude the keystore.properties file from version control

First, create a file called keystore.properties in the root of your project with content like this (replacing the values with your own):

storeFile=keystore.jks
storePassword=storePassword
keyAlias=keyAlias
keyPassword=keyPassword

Now, in your app's build.gradle file, set up the signingConfigs block as follows:

android {
    ...
    signingConfigs {
        release {
            def propsFile = rootProject.file('keystore.properties')
            if (propsFile.exists()) {
                def props = new Properties()
                props.load(new FileInputStream(propsFile))
                storeFile = file(props['storeFile'])
                storePassword = props['storePassword']
                keyAlias = props['keyAlias']
                keyPassword = props['keyPassword']
            }
        }
    }
}

That's really all there is to it, but don't forget to exclude both your keystore file and your keystore.properties file from version control.

A couple of things to note:

The storeFile path specified in the keystore.properties file should be relative to your app's build.gradle file. This example assumes that the keystore file is in the same directory as the app's build.gradle file.
This example has the keystore.properties file in the root of the project. If you put it somewhere else, be sure to change the value in rootProject.file('keystore.properties') to the location of yours, relative to the root of your project.

Method B: By using an environment variable

The same can be achieved also without a properties file, making the password harder to find:

android {
    signingConfigs {
        release {
            storeFile file('/your/keystore/location/key')
            keyAlias 'your_alias'

            String ps = System.getenv("ps")
            if (ps == null) {
                throw new GradleException('missing ps env variable')
            }
            keyPassword ps
            storePassword ps
        }
    }
}

The "ps" environment variable can be global, but a safer approach can be adding it to the shell of Android Studio only. In Linux this can be done by editing Android Studio's Desktop Entry:

Exec=sh -c "export ps=myPassword123 ; /path/to/studio.sh"

You can find more details in this topic.
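A third variant, sketched here using plain Gradle project properties (this pattern is not from the original text; the property name signingPassword is illustrative), is to pass the password on the command line, e.g. ./gradlew assembleRelease -PsigningPassword=mySecret, so it never lives in any file:

android {
    signingConfigs {
        release {
            storeFile file('keystore.jks')
            keyAlias 'releaseKey'

            // fail early and loudly if the property was not supplied
            if (!project.hasProperty('signingPassword')) {
                throw new GradleException('missing -PsigningPassword=... on the command line')
            }
            storePassword project.property('signingPassword')
            keyPassword project.property('signingPassword')
        }
    }
}

Like Method B, this fails the build at configuration time when the secret is missing, which keeps an unsigned release from being produced silently.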
Section 47.5: Adding product flavor-specific dependencies

Dependencies can be added for a specific product flavor, similar to how they can be added for specific build configurations. For this example, assume that we have already defined two product flavors called free and paid (more on defining flavors here). We can then add the AdMob dependency for the free flavor, and the Picasso library for the paid one like so:

android {
    ...
    productFlavors {
        free {
            applicationId "com.example.app.free"
            versionName "1.0-free"
        }
        paid {
            applicationId "com.example.app.paid"
            versionName "1.0-paid"
        }
    }
}
...
dependencies {
    ...
    // Add AdMob only for free flavor
    freeCompile 'com.android.support:appcompat-v7:23.1.1'
    freeCompile 'com.google.android.gms:play-services-ads:8.4.0'
    freeCompile 'com.android.support:support-v4:23.1.1'

    // Add Picasso only for paid flavor
    paidCompile 'com.squareup.picasso:picasso:2.5.2'
}
...

Section 47.6: Specifying different application IDs for build types and product flavors

You can specify different application IDs or package names for each buildType or productFlavor using the applicationIdSuffix configuration attribute.

Example of suffixing the applicationId for each buildType:

defaultConfig {
    applicationId "com.package.android"
    minSdkVersion 17
    targetSdkVersion 23
    versionCode 1
    versionName "1.0"
}

buildTypes {
    release {
        debuggable false
    }
    development {
        debuggable true
        applicationIdSuffix ".dev"
    }
    testing {
        debuggable true
        applicationIdSuffix ".qa"
    }
}

Our resulting applicationIds would now be:

com.package.android for release
com.package.android.dev for development
com.package.android.qa for testing

This can be done for productFlavors as well:

productFlavors {
    free {
        applicationIdSuffix ".free"
    }
    paid {
        applicationIdSuffix ".paid"
    }
}

The resulting applicationIds would be:

com.package.android.free for the free flavor
com.package.android.paid for the paid flavor

Section 47.7: Versioning your builds via "version.properties" file

You can use Gradle to auto-increment your package version each time you build it. To do so, create a version.properties file in the same directory as your build.gradle with the following contents:

VERSION_MAJOR=0
VERSION_MINOR=1
VERSION_BUILD=1

(Change the values for major and minor as you see fit). Then in your build.gradle add the following code to the android section:

// Read version information from local file and increment as appropriate
def versionPropsFile = file('version.properties')
if (versionPropsFile.canRead()) {
    Properties versionProps = new Properties()
    versionProps.load(new FileInputStream(versionPropsFile))

    def versionMajor = versionProps['VERSION_MAJOR'].toInteger()
    def versionMinor = versionProps['VERSION_MINOR'].toInteger()
    def versionBuild = versionProps['VERSION_BUILD'].toInteger() + 1

    // Update the build number in the local file
    versionProps['VERSION_BUILD'] = versionBuild.toString()
    versionProps.store(versionPropsFile.newWriter(), null)

    defaultConfig {
        versionCode versionBuild
        versionName "${versionMajor}.${versionMinor}." + String.format("%05d", versionBuild)
    }
}

The information can be accessed in Java as a string BuildConfig.VERSION_NAME for the complete {major}.{minor}.{build} number and as an integer BuildConfig.VERSION_CODE for just the build number.
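For illustration (a trivial sketch, not part of the original example), reading those generated values back at runtime looks like this:

// e.g. versionName "0.1.00042" and versionCode 42 from the setup above
String fullVersion = BuildConfig.VERSION_NAME;
int buildNumber = BuildConfig.VERSION_CODE;
Log.d("version", fullVersion + " (build " + buildNumber + ")");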
Section 47.8: Defining product flavors

Product flavors are defined in the build.gradle file inside the android { ... } block as seen below.

...
android {
    ...
    productFlavors {
        free {
            applicationId "com.example.app.free"
            versionName "1.0-free"
        }
        paid {
            applicationId "com.example.app.paid"
            versionName "1.0-paid"
        }
    }
}

By doing this, we now have two additional product flavors: free and paid. Each can have its own specific configuration and attributes. For example, both of our new flavors have a separate applicationId and versionName from our existing main flavor (available by default, so not shown here).

Section 47.9: Changing the output APK name and adding the version name

This is the code for changing the name of the output application file (.apk). The name can be configured by assigning a different value to newName:

android {
    applicationVariants.all { variant ->
        def newName = "ApkName";
        variant.outputs.each { output ->
            def apk = output.outputFile;

            newName += "-v" + defaultConfig.versionName;
            if (variant.buildType.name == "release") {
                newName += "-release.apk";
            } else {
                newName += ".apk";
            }
            if (!output.zipAlign) {
                newName = newName.replace(".apk", "-unaligned.apk");
            }

            output.outputFile = new File(apk.parentFile, newName);
            logger.info("INFO: Set outputFile to " + output.outputFile + " for [" + output.name + "]");
        }
    }
}
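Note that output.outputFile became read-only in newer versions of the Android Gradle plugin (3.0 and above), so the snippet above no longer works there. A minimal sketch for those plugin versions (an addition here, not from the original text; the naming scheme is illustrative) uses outputFileName instead:

android {
    applicationVariants.all { variant ->
        variant.outputs.all {
            // only the file name can be changed here, not the output directory
            outputFileName = "ApkName-v${variant.versionName}-${variant.buildType.name}.apk"
        }
    }
}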
Section 47.10: Adding product flavor-specific resources

Resources can be added for a specific product flavor. For this example, assume that we have already defined two product flavors called free and paid. In order to add product flavor-specific resources, we create additional resource folders alongside the main/res folder, which we can then add resources to as usual. For this example, we'll define a string, status, for each product flavor:

/src/main/res/values/strings.xml

<resources>
    <string name="status">Default</string>
</resources>

/src/free/res/values/strings.xml

<resources>
    <string name="status">Free</string>
</resources>

/src/paid/res/values/strings.xml

<resources>
    <string name="status">Paid</string>
</resources>

The product flavor-specific status strings will override the value for status in the main flavor.

Section 47.11: Why are there two build.gradle files in an Android Studio project?

<PROJECT_ROOT>\app\build.gradle is specific to the app module.
<PROJECT_ROOT>\build.gradle is a "top-level build file" where you can add configuration options common to all sub-projects/modules.

If you use another module in your project, as a local library, you would have another build.gradle file: <PROJECT_ROOT>\module\build.gradle

In the top-level file you can specify common properties, such as the buildscript block:

buildscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath 'com.android.tools.build:gradle:2.2.0'
        classpath 'com.google.gms:google-services:3.0.0'
    }
}

ext {
    compileSdkVersion = 23
    buildToolsVersion = "23.0.1"
}

In the app\build.gradle you define only the properties for the module:

apply plugin: 'com.android.application'

android {
    compileSdkVersion rootProject.ext.compileSdkVersion
    buildToolsVersion rootProject.ext.buildToolsVersion
}

dependencies {
    //.....
}

Section 47.12: Directory structure for flavor-specific resources

Different flavors of application builds can contain different resources. To create a flavor-specific resource, make a directory with the lower-case name of your flavor in the src directory and add your resources in the same way you would normally.

For example, if you had a flavour Development and wanted to provide a distinct launcher icon for it, you would create a directory src/development/res/drawable-mdpi and inside that directory create an ic_launcher.png file with your development-specific icon.

The directory structure will look like this:

src/
  main/
    res/
      drawable-mdpi/
        ic_launcher.png    <-- the default launcher icon
  development/
    res/
      drawable-mdpi/
        ic_launcher.png    <-- the launcher icon used when the product flavor is 'Development'

(Of course, in this case you would also create icons for drawable-hdpi, drawable-xhdpi etc).

Section 47.13: Enable Proguard using gradle

For enabling Proguard configurations for your application you need to enable it in your module-level gradle file. You need to set the value of minifyEnabled to true.

buildTypes {
    release {
        minifyEnabled true
        proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
    }
}

The above code will apply your Proguard configurations contained in the default Android SDK, combined with the "proguard-rules.pro" file of your module, to your released APK.

Section 47.14: Ignoring build variant

For some reasons you may want to ignore your build variants. For example: you have a 'mock' product flavour and you use it only for debug purposes, such as unit/instrumentation tests.

Let's ignore the mockRelease variant from our project. Open the build.gradle file and write:

// Remove mockRelease as it's not needed.
android.variantFilter { variant ->
    if (variant.buildType.name.equals('release')
            && variant.getFlavors().get(0).name.equals('mock')) {
        variant.setIgnore(true);
    }
}

Section 47.15: Enable experimental NDK plugin support for Gradle and AndroidStudio

Enable and configure the experimental Gradle plugin to improve AndroidStudio's NDK support. Check that you fulfill the following requirements:

Gradle 2.10 (for this example)
Android NDK r10 or later
Android SDK with build tools v19.0.0 or later

Configure MyApp/build.gradle file

Edit the dependencies.classpath line in build.gradle from e.g.

classpath 'com.android.tools.build:gradle:2.1.2'

to

classpath 'com.android.tools.build:gradle-experimental:0.7.2'

(v0.7.2 was the latest version at the time of writing. Check the latest version yourself and adapt your line accordingly.)

The build.gradle file should look similar to this:

buildscript {
    repositories {
        jcenter()
    }
    dependencies {
        classpath 'com.android.tools.build:gradle-experimental:0.7.2'
    }
}

allprojects {
    repositories {
        jcenter()
    }
}

task clean(type: Delete) {
    delete rootProject.buildDir
}

Configure MyApp/app/build.gradle file

Edit the build.gradle file to look similar to the following example. Your version numbers may look different.

apply plugin: 'com.android.model.application'

model {
    android {
        compileSdkVersion 19
        buildToolsVersion "24.0.1"

        defaultConfig {
            applicationId "com.example.mydomain.myapp"
            minSdkVersion.apiLevel 19
            targetSdkVersion.apiLevel 19
            versionCode 1
            versionName "1.0"
        }
        buildTypes {
            release {
                minifyEnabled false
                proguardFiles.add(file('proguard-android.txt'))
            }
        }
        ndk {
            moduleName "myLib"
            /* The following lines are examples of some optional flags that
               you may set to configure your build environment */
            cppFlags.add("-I${file("path/to/my/includes/dir")}".toString())
            cppFlags.add("-std=c++11")
            ldLibs.addAll(['log', 'm'])
            stl = "c++_static"
            abiFilters.add("armeabi-v7a")
        }
    }
}

dependencies {
    compile fileTree(dir: 'libs', include: ['*.jar'])
}

Sync and check that there are no errors in the Gradle files before proceeding.

Test if plugin is enabled

First make sure you have downloaded the Android NDK module.
Then create a new app in AndroidStudio and add the following to the MainActivity file:

public class MainActivity extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Pregenerated code. Not important here
    }

    static {
        System.loadLibrary("myLib");
    }

    public static native String getString();
}

The getString() part should be highlighted red, saying that the corresponding JNI function could not be found. Hover your mouse over the function call until a red lightbulb appears. Click the bulb and select "create function JNI_...". This should generate a myLib.c file in the myApp/app/src/main/jni directory with the correct JNI function call. It should look similar to this:

#include <jni.h>

JNIEXPORT jstring JNICALL
Java_com_example_mydomain_myapp_MainActivity_getString(JNIEnv *env, jobject instance) {
    // TODO
    return (*env)->NewStringUTF(env, returnValue);
}

If it doesn't look like this, then the plugin has not been configured correctly or the NDK has not been downloaded.

Section 47.16: Display signing information

In some circumstances (for example obtaining a Google API key) you need to find your keystore fingerprint. Gradle has a convenient task that displays all the signing information, including keystore fingerprints:

./gradlew signingReport

This is a sample output:

:app:signingReport
Variant: release
Config: none
----------
Variant: debug
Config: debug
Store: /Users/user/.android/debug.keystore
Alias: AndroidDebugKey
MD5: 25:08:76:A9:7C:0C:19:35:99:02:7B:00:AA:1E:49:CA
SHA1: 26:BE:89:58:00:8C:5A:7D:A3:A9:D3:60:4A:30:53:7A:3D:4E:05:55
Valid until: Saturday 18 June 2044
----------
Variant: debugAndroidTest
Config: debug
Store: /Users/user/.android/debug.keystore
Alias: AndroidDebugKey
MD5: 25:08:76:A9:7C:0C:19:35:99:02:7B:00:AA:1E:49:CA
SHA1: 26:BE:89:58:00:8C:5A:7D:A3:A9:D3:60:4A:30:53:7A:3D:4E:05:55
Valid until: Saturday 18 June 2044
----------
Variant: debugUnitTest
Config: debug
Store: /Users/user/.android/debug.keystore
Alias: AndroidDebugKey
MD5: 25:08:76:A9:7C:0C:19:35:99:02:7B:00:AA:1E:49:CA
SHA1: 26:BE:89:58:00:8C:5A:7D:A3:A9:D3:60:4A:30:53:7A:3D:4E:05:55
Valid until: Saturday 18 June 2044
----------
Variant: releaseUnitTest
Config: none
----------

Section 47.17: Seeing the dependency tree

Use the task dependencies. Depending on how your modules are set up, it may be either ./gradlew dependencies, or, to see the dependencies of module app, ./gradlew :app:dependencies

The following example build.gradle file:

dependencies {
    compile 'com.android.support:design:23.2.1'
    compile 'com.android.support:cardview-v7:23.1.1'
    compile 'com.google.android.gms:play-services:6.5.87'
}

will produce the following graph:

Parallel execution is an incubating feature.
:app:dependencies

------------------------------------------------------------
Project :app
------------------------------------------------------------
.
.
.
_releaseApk - ## Internal use, do not manually configure ##
+--- com.android.support:design:23.2.1
|    +--- com.android.support:support-v4:23.2.1
|    |    \--- com.android.support:support-annotations:23.2.1
|    +--- com.android.support:appcompat-v7:23.2.1
|    |    +--- com.android.support:support-v4:23.2.1 (*)
|    |    +--- com.android.support:animated-vector-drawable:23.2.1
|    |    |    \--- com.android.support:support-vector-drawable:23.2.1
|    |    |         \--- com.android.support:support-v4:23.2.1 (*)
|    |    \--- com.android.support:support-vector-drawable:23.2.1 (*)
|    \--- com.android.support:recyclerview-v7:23.2.1
|         +--- com.android.support:support-v4:23.2.1 (*)
|         \--- com.android.support:support-annotations:23.2.1
+--- com.android.support:cardview-v7:23.1.1
\--- com.google.android.gms:play-services:6.5.87
     \--- com.android.support:support-v4:21.0.0 -> 23.2.1 (*)
.
.
.

Here you can see that the project is directly including com.android.support:design version 23.2.1, which itself is bringing in com.android.support:support-v4 with version 23.2.1. However, com.google.android.gms:play-services itself has a dependency on the same support-v4 but with an older version 21.0.0, which is a conflict detected by gradle. (*) is used when gradle skips the subtree because those dependencies were already listed previously.

Section 47.18: Disable image compression for a smaller APK file size

If you are optimizing all images manually, disable the AAPT cruncher for a smaller APK file size.

android {
    aaptOptions {
        cruncherEnabled = false
    }
}

Section 47.19: Delete "unaligned" apk automatically

If you don't need the automatically generated APK files with the unaligned suffix (which you probably don't), you may add the following code to the build.gradle file:

// delete unaligned files
android.applicationVariants.all { variant ->
    variant.assemble.doLast {
        variant.outputs.each { output ->
            println "aligned " + output.outputFile
            println "unaligned " + output.packageApplication.outputFile

            File unaligned = output.packageApplication.outputFile;
            File aligned = output.outputFile
            if (!unaligned.getName().equalsIgnoreCase(aligned.getName())) {
                println "deleting " + unaligned.getName()
                unaligned.delete()
            }
        }
    }
}

From here

Section 47.20: Executing a shell script from gradle

A shell script is a very versatile way to extend your build to basically anything you can think of. As an example, here is a simple script to compile protobuf files and add the resulting java files to the source directory for further compilation:

def compilePb() {
    exec {
        // NOTICE: gradle will fail if there's an error in the protoc file...
        executable "../pbScript.sh"
    }
}

project.afterEvaluate {
    compilePb()
}

The 'pbScript.sh' shell script for this example, located in the project's root folder:

#!/usr/bin/env bash
pp=/home/myself/my/proto

/usr/local/bin/protoc -I=$pp \
    --java_out=./src/main/java \
    --proto_path=$pp \
    $pp/my.proto \
    --proto_path=$pp \
    $pp/my_other.proto

Section 47.21: Show all gradle project tasks

gradlew tasks -- show all tasks

Android tasks
-------------
androidDependencies - Displays the Android dependencies of the project.
signingReport - Displays the signing info for each variant.
sourceSets - Prints out all the source sets defined in this project.

Build tasks
-----------
assemble - Assembles all variants of all applications and secondary packages.
assembleAndroidTest - Assembles all the Test applications.
assembleDebug - Assembles all Debug builds.
assembleRelease - Assembles all Release builds.
build - Assembles and tests this project.
buildDependents - Assembles and tests this project and all projects that depend on it.
buildNeeded - Assembles and tests this project and all projects it depends on.
classes - Assembles main classes.
clean - Deletes the build directory.
compileDebugAndroidTestSources
compileDebugSources
compileDebugUnitTestSources
compileReleaseSources
compileReleaseUnitTestSources
extractDebugAnnotations - Extracts Android annotations for the debug variant into the archive file
extractReleaseAnnotations - Extracts Android annotations for the release variant into the archive file
jar - Assembles a jar archive containing the main classes.
mockableAndroidJar - Creates a version of android.jar that is suitable for unit tests.
testClasses - Assembles test classes.

Build Setup tasks
-----------------
init - Initializes a new Gradle build. [incubating]
wrapper - Generates Gradle wrapper files. [incubating]

Documentation tasks
-------------------
javadoc - Generates Javadoc API documentation for the main source code.

Help tasks
----------
buildEnvironment - Displays all buildscript dependencies declared in root project 'LeitnerBoxPro'.
components - Displays the components produced by root project 'LeitnerBoxPro'. [incubating]
dependencies - Displays all dependencies declared in root project 'LeitnerBoxPro'.
dependencyInsight - Displays the insight into a specific dependency in root project 'LeitnerBoxPro'.
help - Displays a help message.
model - Displays the configuration model of root project 'LeitnerBoxPro'. [incubating]
projects - Displays the sub-projects of root project 'LeitnerBoxPro'.
properties - Displays the properties of root project 'LeitnerBoxPro'.
tasks - Displays the tasks runnable from root project 'LeitnerBoxPro' (some of the displayed tasks may belong to subprojects).

Install tasks
-------------
installDebug - Installs the Debug build.
installDebugAndroidTest - Installs the android (on device) tests for the Debug build.
uninstallAll - Uninstall all applications.
uninstallDebug - Uninstalls the Debug build.
uninstallDebugAndroidTest - Uninstalls the android (on device) tests for the Debug build.
uninstallRelease - Uninstalls the Release build.

Verification tasks
------------------
check - Runs all checks.
connectedAndroidTest - Installs and runs instrumentation tests for all flavors on connected devices.
connectedCheck - Runs all device checks on currently connected devices.
connectedDebugAndroidTest - Installs and runs the tests for debug on connected devices.
deviceAndroidTest - Installs and runs instrumentation tests using all Device Providers.
deviceCheck - Runs all device checks using Device Providers and Test Servers.
lint - Runs lint on all variants.
lintDebug - Runs lint on the Debug build.
lintRelease - Runs lint on the Release build.
test - Run unit tests for all variants.
testDebugUnitTest - Run unit tests for the debug build.
testReleaseUnitTest - Run unit tests for the release build.

Other tasks
-----------
assembleDefault
clean
jarDebugClasses
jarReleaseClasses
transformResourcesWithMergeJavaResForDebugUnitTest
transformResourcesWithMergeJavaResForReleaseUnitTest

Section 47.22: Debugging your Gradle errors

The following is an excerpt from Gradle - What is a non-zero exit value and how do I fix it?; see it for the full discussion.

Let's say you are developing an application and you get some Gradle error that generally will look like so:
:module:someTask FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':module:someTask'.
> some message here... finished with non-zero exit value X

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output.

BUILD FAILED

Total time: Y.ZZ secs

You search here on StackOverflow for your problem, and people say to clean and rebuild your project, or enable MultiDex, and when you try that, it just isn't fixing the problem.

There are ways to get more information, but the Gradle output itself should point at the actual error in the few lines above that message, between :module:someTask FAILED and the last :module:someOtherTask that passed. Therefore, if you ask a question about your error, please edit your questions to include more context for the error.

So, you get a "non-zero exit value". Well, that number is a good indicator of what you should try to fix. Here are a few that occur most frequently:

1 is just a general error code and the error is likely in the Gradle output
2 seems to be related to overlapping dependencies or project misconfiguration.
3 seems to be from including too many dependencies, or a memory issue.

The general solutions for the above (after attempting a Clean and Rebuild of the project) are:

1 - Address the error that is mentioned. Generally, this is a compile-time error, meaning some piece of code in your project is not valid. This includes both XML and Java for an Android project.
2 & 3 - Many answers here tell you to enable multidex. While it may fix the problem, it is most likely a workaround. If you don't understand why you are using it (see the link), you probably don't need it. General solutions involve cutting back your overuse of library dependencies (such as all of Google Play Services, when you only need to use one library, like Maps or Sign-In, for example).

Section 47.23: Use gradle.properties for central version number/build configurations

You can define central config info in a separate gradle include file (Centralizing dependencies via "dependencies.gradle" file), in a standalone properties file (Versioning your builds via "version.properties" file), or do it with the root gradle.properties file.

The project structure:

root
  +- module1/
  |     build.gradle
  +- module2/
  |     build.gradle
  +- build.gradle
  +- gradle.properties

Global settings for all submodules in gradle.properties:

# used for manifest
# todo increment for every release
appVersionCode=19
appVersionName=0.5.2.160726

# android tools settings
appCompileSdkVersion=23
appBuildToolsVersion=23.0.2

Usage in a submodule:

apply plugin: 'com.android.application'

android {
    // appXXX are defined in gradle.properties
    compileSdkVersion = Integer.valueOf(appCompileSdkVersion)
    buildToolsVersion = appBuildToolsVersion

    defaultConfig {
        // appXXX are defined in gradle.properties
        versionCode = Long.valueOf(appVersionCode)
        versionName = appVersionName
    }
}

dependencies {
    ...
}

Note: If you want to publish your app in the F-Droid app store you have to use literal numbers in the gradle file, because otherwise the F-Droid build robot cannot read the current version number to detect/verify version changes.
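As an aside (standard Gradle behaviour, not something shown in the original text): values in gradle.properties are ordinary project properties, so a single build can override them from the command line, which can be handy on a CI server:

# gradle.properties supplies the default (appVersionCode=19);
# override it for one build only:
./gradlew assembleRelease -PappVersionCode=20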
Section 47.24: Defining build types

You can create and configure build types in the module-level build.gradle file inside the android {} block.

android {
    ...
    defaultConfig {...}

    buildTypes {
        release {
            minifyEnabled true
            proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
        }

        debug {
            applicationIdSuffix ".debug"
        }
    }
}

Chapter 48: FileIO with Android

Reading and writing files in Android is not different from reading and writing files in standard Java: the same java.io package can be used. However, there are some specifics related to the folders where you are allowed to write, permissions in general, and MTP workarounds.

Section 48.1: Obtaining the working folder

You can get your working folder by calling the method getFilesDir() on your Activity (Activity is the central class in your application that inherits from Context; see here). Reading is not different. Only your application will have access to this folder.

Your activity could contain the following code, for instance:

File myFolder = getFilesDir();
File myFile = new File(myFolder, "myData.bin");

Section 48.2: Writing a raw array of bytes

File myFile = new File(getFilesDir(), "myData.bin");
FileOutputStream out = new FileOutputStream(myFile);
// Write four bytes: one two three four
out.write(new byte[] { 1, 2, 3, 4 });
out.close();

There is nothing Android-specific about this code. If you write lots of small values often, use BufferedOutputStream to reduce the wear of the device's internal flash storage.

Section 48.3: Serializing the object

The good old Java object serialization is available for you in Android. You can define Serializable classes like:

class Circle implements Serializable {
    final int radius;
    final String name;

    Circle(int radius, String name) {
        this.radius = radius;
        this.name = name;
    }
}

and then write them to an ObjectOutputStream:

File myFile = new File(getFilesDir(), "myObjects.bin");
FileOutputStream out = new FileOutputStream(myFile);
ObjectOutputStream oout = new ObjectOutputStream(new BufferedOutputStream(out));

oout.writeObject(new Circle(10, "One"));
oout.writeObject(new Circle(12, "Two"));

oout.close();

Java object serialization may be either a perfect or a really bad choice, depending on what you want to do with it; that discussion is outside the scope of this tutorial and sometimes opinion-based. Read about versioning first if you decide to use it.
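Reading the objects back (a minimal sketch, not part of the original example) mirrors the code above with ObjectInputStream:

File myFile = new File(getFilesDir(), "myObjects.bin");
ObjectInputStream oin = new ObjectInputStream(
        new BufferedInputStream(new FileInputStream(myFile)));

// readObject() may throw ClassNotFoundException in addition to IOException
Circle one = (Circle) oin.readObject();
Circle two = (Circle) oin.readObject();

oin.close();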
Section 48.4: Writing to external storage (SD card)

You can also read and write from/to a memory card (SD card) that is present in many Android devices. Files in this location can be accessed by other programs, and also directly by the user after connecting the device to a PC via USB cable and enabling the MTP protocol.

Finding the SD card location is somewhat more problematic. The Environment class contains static methods to get "external directories" that should normally be inside the SD card, and also information on whether the SD card exists at all and is writable. This question contains valuable answers on how to make sure the right location will be found.

Accessing external storage requires permissions in your Android manifest:

<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />

For older versions of Android it is enough to put these permissions into the manifest (the user must approve them during installation). However, starting from Android 6.0, Android asks the user for approval at the time of the first access, and you must support this new approach. Otherwise access is denied regardless of your manifest.

In Android 6.0, first you need to check for permission, then, if not granted, request it. The code examples can be found inside this SO question.
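As a minimal sketch of that check-then-request flow (REQUEST_WRITE_STORAGE is an arbitrary app-defined constant; the user's answer arrives in the onRequestPermissionsResult() callback, omitted here):

// inside an Activity, before touching external storage on API 23+
if (ContextCompat.checkSelfPermission(this, Manifest.permission.WRITE_EXTERNAL_STORAGE)
        != PackageManager.PERMISSION_GRANTED) {
    ActivityCompat.requestPermissions(this,
            new String[] { Manifest.permission.WRITE_EXTERNAL_STORAGE },
            REQUEST_WRITE_STORAGE); // app-defined int request code
}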
Section 48.5: Solving the "Invisible MTP files" problem

If you create files for exporting via USB cable to a desktop using the MTP protocol, there may be a problem that newly created files are not immediately visible in the file explorer running on the connected desktop PC. To make new files visible, you need to call MediaScannerConnection:

File file = new File(Environment.getExternalStoragePublicDirectory(
        Environment.DIRECTORY_DOCUMENTS), "theDocument.txt");
FileOutputStream out = new FileOutputStream(file);

... (write the document)

out.close();
MediaScannerConnection.scanFile(this, new String[] {file.getPath()}, null, null);
context.sendBroadcast(new Intent(Intent.ACTION_MEDIA_SCANNER_SCAN_FILE, Uri.fromFile(file)));

This MediaScannerConnection call works for files only, not for directories. The problem is described in this Android bug report. This may be fixed in some version in the future, or on some devices.

Section 48.6: Working with big files

Small files are processed in a fraction of a second, and you can read/write them in the place in the code where you need this. However, if the file is bigger or otherwise slower to process, you may need to use AsyncTask in Android to work with the file in the background:

class FileOperation extends AsyncTask<String, Void, File> {

    @Override
    protected File doInBackground(String... params) {
        try {
            File file = new File(Environment.getExternalStoragePublicDirectory(
                    Environment.DIRECTORY_DOCUMENTS), "bigAndComplexDocument.odf");
            FileOutputStream out = new FileOutputStream(file);

            ... (write the document)

            out.close();
            return file;
        } catch (IOException ex) {
            Log.e("FileOperation", "Unable to write", ex);
            return null;
        }
    }

    @Override
    protected void onPostExecute(File result) {
        // This is called when we finish
    }

    @Override
    protected void onPreExecute() {
        // This is called before we begin
    }

    @Override
    protected void onProgressUpdate(Void... values) {
        // Unlikely required for this example
    }
}

and then:

new FileOperation().execute("Some parameters");

This SO question contains the complete example on how to create and call the AsyncTask. Also see the question on error handling on how to handle IOExceptions and other errors.

Chapter 49: FileProvider

Section 49.1: Sharing a file

In this example you'll learn how to share a file with other apps. We'll use a pdf file in this example, although the code works with every other format as well.

The roadmap:

Specify the directories in which the files you want to share are placed

To share files we'll use a FileProvider, a class allowing secure file sharing between apps. A FileProvider can only share files in predefined directories, so let's define these.

1. Create a new XML file that will contain the paths, e.g. res/xml/filepaths.xml
2. Add the paths:

<paths xmlns:android="http://schemas.android.com/apk/res/android">
    <files-path name="pdf_folder" path="documents/"/>
</paths>

Define a FileProvider and link it with the file paths

This is done in the manifest:

<manifest>
    ...
    <application>
        ...
        <provider
            android:name="android.support.v4.content.FileProvider"
            android:authorities="com.mydomain.fileprovider"
            android:exported="false"
            android:grantUriPermissions="true">
            <meta-data
                android:name="android.support.FILE_PROVIDER_PATHS"
                android:resource="@xml/filepaths" />
        </provider>
        ...
    </application>
    ...
</manifest>

Generate the URI for the file

To share the file we must provide an identifier for the file. This is done by using a URI (Uniform Resource Identifier).

// We assume the file we want to load is in the documents/ subdirectory
// of the internal storage
File documentsPath = new File(getContext().getFilesDir(), "documents");
File file = new File(documentsPath, "sample.pdf");
// This can also be done in one line of course:
// File file = new File(getContext().getFilesDir(), "documents/sample.pdf");

Uri uri = FileProvider.getUriForFile(getContext(), "com.mydomain.fileprovider", file);

As you can see in the code, we first make a new File object representing the file. To get a URI we ask FileProvider to get us one. The second argument is important: it passes the authority of a FileProvider. It must be equal to the authority of the FileProvider defined in the manifest.

Share the file with other apps

We use ShareCompat to share the file with other apps:

Intent intent = ShareCompat.IntentBuilder.from(getContext())
    .setType("application/pdf")
    .setStream(uri)
    .setChooserTitle("Choose bar")
    .createChooserIntent()
    .addFlags(Intent.FLAG_GRANT_READ_URI_PERMISSION);

getContext().startActivity(intent);

A chooser is a menu from which the user can choose with which app he/she wants to share the file. The flag Intent.FLAG_GRANT_READ_URI_PERMISSION is needed to grant temporary read access permission to the URI.

Chapter 50: Storing Files in Internal & External Storage

Parameter   Details
name        The name of the file to open. NOTE: Cannot contain path separators
mode        Operating mode. Use MODE_PRIVATE for the default operation, and MODE_APPEND to append to an existing file. Other modes include MODE_WORLD_READABLE and MODE_WORLD_WRITEABLE, which were both deprecated in API 17.
dir         Directory of the file to create a new file in
path        Path to specify the location of the new file
type        Type of files directory to retrieve. Can be null, or any of the following: DIRECTORY_MUSIC, DIRECTORY_PODCASTS, DIRECTORY_RINGTONES, DIRECTORY_ALARMS, DIRECTORY_NOTIFICATIONS, DIRECTORY_PICTURES, or DIRECTORY_MOVIES

Section 50.1: Android: Internal and External Storage - Terminology Clarification

Android developers (mainly beginners) have been confused regarding Internal & External storage terminology. There are a lot of questions on Stack Overflow regarding the same. This is mainly because the terminology according to Google/official Android documentation is quite different to that of a normal Android OS user. Hence I thought documenting this would help.

What we think - Users' Terminology (UT)

Internal storage (UT): the phone's inbuilt internal memory. Example: Nexus 6P's 32 GB internal memory.
External storage (UT): removable Secure Digital (SD) card or micro SD storage. Example: storage space in removable SD cards provided by vendors like samsung, sandisk, strontium, transcend and others.

But, according to the Android Documentation/Guide - Google's Terminology (GT)

Internal storage (GT): By default, files saved to the internal storage are private to your application and other applications cannot access them (nor can the user).
External storage (GT): This can be a removable storage media (such as an SD card) or an internal (non-removable) storage.
External Storage(GT) can be categorized into two types:

Primary External Storage: This is the same as the phone's inbuilt internal memory (or) Internal storage (UT). Example: Nexus 6P's 32 GB internal memory. This type of storage can be accessed on a Windows PC by connecting your phone to the PC via USB cable and selecting "File transfer" in the USB options notification.
Secondary External Storage or Removable storage(GT): This is the same as removable micro SD card storage (or) External storage (UT). Example: storage space in removable SD cards provided by vendors like samsung, sandisk, strontium, transcend and others. This type of storage can be accessed on a Windows PC by connecting your phone to the PC via USB cable and selecting "Camera (PTP)" in the USB options notification.

In a nutshell,

External Storage(GT) = Internal Storage(UT) and External Storage(UT)
Removable Storage(GT) = External Storage(UT)
Internal Storage(GT) doesn't have a term in UT.

Let me explain clearly:

Internal Storage(GT): By default, files saved to the internal storage are private to your application and other applications cannot access them. Your app's user also can't access them using a file manager, even after enabling the "show hidden files" option in the file manager. To access files in Internal Storage(GT), you have to root your Android phone. Moreover, when the user uninstalls your application, these files are removed/deleted. So Internal Storage(GT) is NOT what we think of as Nexus 6P's 32/64 GB internal memory. Generally, the Internal Storage(GT) location would be: /data/data/your.application.package.appname/someDirectory/

External Storage(GT): Every Android-compatible device supports a shared "external storage" that you can use to save files. Files saved to the external storage are world-readable and can be modified by the user when they enable USB mass storage to transfer files on a computer.

External Storage(GT) location: It could be anywhere in your internal storage(UT) or in your removable storage(GT), i.e. the micro SD card. It depends on your phone's OEM and also on the Android OS version.

In order to read or write files on the External Storage(GT), your app must acquire the READ_EXTERNAL_STORAGE or WRITE_EXTERNAL_STORAGE system permissions. For example:

<manifest ...>
    <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
    ...
</manifest>

If you need to both read and write files, then you need to request only the WRITE_EXTERNAL_STORAGE permission, because it implicitly requires read access as well.

In External Storage(GT), you may also save files that are app-private. But, when the user uninstalls your application, this directory and all its contents are deleted.

When do you need to save files that are app-private in External Storage(GT)?

If you are handling files that are not intended for other apps to use (such as graphic textures or sound effects used by only your app), you should use a private storage directory on the external storage.

Beginning with Android 4.4, reading or writing files in your app's private directories does not require the READ_EXTERNAL_STORAGE or WRITE_EXTERNAL_STORAGE permissions. So you can declare that the permission should be requested only on the lower versions of Android by adding the maxSdkVersion attribute:

<manifest ...>
    <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"
                     android:maxSdkVersion="18" />
    ...
</manifest>

Methods to store in Internal Storage(GT):

Both these methods are present in the Context class:

File getDir(String name, int mode)
File getFilesDir()

Methods to store in Primary External Storage, i.e. Internal Storage(UT):

File getExternalStorageDirectory()
File getExternalFilesDir(String type)
File getExternalStoragePublicDirectory(String type)

In the beginning, everyone used Environment.getExternalStorageDirectory(), which pointed to the root of Primary External Storage. As a result, Primary External Storage was filled with random content.

Later, these two methods were added:

1. In the Context class, they added getExternalFilesDir(), pointing to an app-specific directory on Primary External Storage. This directory and its contents will be deleted when the app is uninstalled.
2. Environment.getExternalStoragePublicDirectory() for centralized places to store well-known file types, like photos and movies. This directory and its contents will NOT be deleted when the app is uninstalled.

Methods to store in Removable Storage(GT), i.e. micro SD card

Before API level 19, there was no official way to store on the SD card. But many could do it using unofficial libraries or APIs. Officially, one method was introduced in the Context class in API level 19 (Android version 4.4 - KitKat):

File[] getExternalFilesDirs(String type)

It returns absolute paths to application-specific directories on all shared/external storage devices where the application can place persistent files it owns. These files are internal to the application, and not typically visible to the user as media. That means it will return paths to both types of External Storage(GT) - internal memory and the micro SD card. Generally the second path would be the storage path of the micro SD card (but not always). So you need to check it out by executing the code with this method.
Example with code snippet:

I created a new android project with an empty activity and wrote the following code inside the protected void onCreate(Bundle savedInstanceState) method of MainActivity.java:

File internal_m1 = getDir("custom", 0);
File internal_m2 = getFilesDir();

File external_m1 = Environment.getExternalStorageDirectory();

File external_m2 = getExternalFilesDir(null);
File external_m2_Args = getExternalFilesDir(Environment.DIRECTORY_PICTURES);

File external_m3 = Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_PICTURES);

File[] external_AND_removable_storage_m1 = getExternalFilesDirs(null);
File[] external_AND_removable_storage_m1_Args = getExternalFilesDirs(Environment.DIRECTORY_PICTURES);

After executing the above code, the output is:

internal_m1: /data/data/your.application.package.appname/app_custom
internal_m2: /data/data/your.application.package.appname/files
external_m1: /storage/emulated/0
external_m2: /storage/emulated/0/Android/data/your.application.package.appname/files
external_m2_Args: /storage/emulated/0/Android/data/your.application.package.appname/files/Pictures
external_m3: /storage/emulated/0/Pictures
external_AND_removable_storage_m1 (first path): /storage/emulated/0/Android/data/your.application.package.appname/files
external_AND_removable_storage_m1 (second path): /storage/sdcard1/Android/data/your.application.package.appname/files
external_AND_removable_storage_m1_Args (first path): /storage/emulated/0/Android/data/your.application.package.appname/files/Pictures
external_AND_removable_storage_m1_Args (second path): /storage/sdcard1/Android/data/your.application.package.appname/files/Pictures

Note: I connected my phone to a Windows PC, enabled both developer options and USB debugging, and then ran this code. If you do not connect your phone, but instead run this on an Android emulator, your output may vary. My phone model is Coolpad Note 3, running on Android 5.1.

Storage locations on my phone:

Micro SD storage location: /storage/sdcard1
Internal Storage(UT) location: /storage/sdcard0

Note that /sdcard & /storage/emulated/0 also point to Internal Storage(UT), but these are symlinks to /storage/sdcard0. To clearly understand the different storage paths in Android, please go through this answer.

Disclaimer: All the storage paths mentioned above are paths on my phone. Your files may not be stored on the same storage paths, because the storage locations/paths may vary on other mobile phones depending on your vendor, manufacturer and different versions of the Android OS.

Section 50.2: Using External Storage

"External" Storage is another type of storage that we can use to save files to the user's device. It has some key differences from "Internal" Storage, namely:

It is not always available. In the case of a removable medium (SD card), the user can simply remove the storage.
It is not private. The user (and other applications) have access to these files.
If the user uninstalls the app, the files you save in the directory retrieved with getExternalFilesDir() will be removed.

To use External Storage, we need to first obtain the proper permissions.
You will need to use:

android.permission.WRITE_EXTERNAL_STORAGE for reading and writing
android.permission.READ_EXTERNAL_STORAGE for just reading

To grant these permissions, you will need to identify them in your AndroidManifest.xml as such:

<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />

NOTE: Since they are Dangerous permissions, if you are using API Level 23 or above you will need to request the permissions at runtime (see the sketch in Section 48.4 above).

Before attempting to write or read from External Storage, you should always check that the storage medium is available:

String state = Environment.getExternalStorageState();
if (state.equals(Environment.MEDIA_MOUNTED)) {
    // Available to read and write
}
if (state.equals(Environment.MEDIA_MOUNTED) ||
    state.equals(Environment.MEDIA_MOUNTED_READ_ONLY)) {
    // Available to at least read
}

When writing files to the External Storage, you should decide if the file should be recognized as Public or Private. While both of these types of files are still accessible to the user and other applications on the device, there is a key distinction between them.

Public files should remain on the device when the user uninstalls the app. An example of a file that should be saved as Public would be photos that are taken through your application.

Private files should all be removed when the user uninstalls the app. These types of files would be app-specific, and not of use to the user or other applications, e.g. temporary files downloaded/used by your application.

Here's how to get access to the Documents directory for both Public and Private files.

Public

// Access your app's directory in the device's Public documents directory
File myDocs = new File(Environment.getExternalStoragePublicDirectory(
        Environment.DIRECTORY_DOCUMENTS), "YourAppDirectory");
// Make the directory if it does not yet exist
myDocs.mkdirs();

Private

// Access your app's Private documents directory
File myDocs = new File(context.getExternalFilesDir(Environment.DIRECTORY_DOCUMENTS),
        "YourAppDirectory");
// Make the directory if it does not yet exist
myDocs.mkdirs();

Section 50.3: Using Internal Storage

By default, any files that you save to Internal Storage are private to your application. They cannot be accessed by other applications, nor by the user under normal circumstances. These files are deleted when the user uninstalls the application.

To Write Text to a File

String fileName = "helloworld";
String textToWrite = "Hello, World!";
FileOutputStream fileOutputStream;

try {
    fileOutputStream = openFileOutput(fileName, Context.MODE_PRIVATE);
    fileOutputStream.write(textToWrite.getBytes());
    fileOutputStream.close();
} catch (Exception e) {
    e.printStackTrace();
}

To Append Text to an Existing File

Use Context.MODE_APPEND for the mode parameter of openFileOutput:

fileOutputStream = openFileOutput(fileName, Context.MODE_APPEND);
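Reading the text back (a small sketch, not part of the original section) uses the matching openFileInput():

FileInputStream fileInputStream = openFileInput("helloworld");
ByteArrayOutputStream bytes = new ByteArrayOutputStream();
byte[] buffer = new byte[1024];
int count;
while ((count = fileInputStream.read(buffer)) != -1) {
    bytes.write(buffer, 0, count);
}
fileInputStream.close();
String text = bytes.toString(); // "Hello, World!"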
Section 50.4: Fetch Device Directory

First add the storage permissions to read/fetch the device directory:

<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />

Create a model class:

// create one directory model class
// to store directory title and type in a list
public class DirectoryModel {
    String dirName;
    int dirType; // set 1 or 0, where 0 is for a directory and 1 for a file

    public int getDirType() {
        return dirType;
    }

    public void setDirType(int dirType) {
        this.dirType = dirType;
    }

    public String getDirName() {
        return dirName;
    }

    public void setDirName(String dirName) {
        this.dirName = dirName;
    }
}

Create a list using the directory model to add directory data:

// define list to show directory
List<DirectoryModel> rootDir = new ArrayList<>();

Fetch the directory using the following method:

// to fetch device directory
private void getDirectory(String currDir) { // pass the device root directory
    File f = new File(currDir);
    File[] files = f.listFiles();
    if (files != null) {
        if (files.length > 0) {
            rootDir.clear();
            for (File inFile : files) {
                if (inFile.isDirectory()) { // returns true if it's a directory
                    // is directory
                    DirectoryModel dir = new DirectoryModel();
                    dir.setDirName(inFile.toString().replace("/storage/emulated/0", ""));
                    dir.setDirType(0); // set 0 for directory
                    rootDir.add(dir);
                } else if (inFile.isFile()) { // returns true if it's a file
                    // is file
                    DirectoryModel dir = new DirectoryModel();
                    dir.setDirName(inFile.toString().replace("/storage/emulated/0", ""));
                    dir.setDirType(1); // set 1 for file
                    rootDir.add(dir);
                }
            }
        }
        printDirectoryList();
    }
}

Print the directory list in the log:

// print directory list in logs
private void printDirectoryList() {
    for (int i = 0; i < rootDir.size(); i++) {
        Log.e(TAG, "printDirectoryLogs: " + rootDir.get(i).toString());
    }
}

Usage

// to fetch the directory, call the function with the root directory
String rootPath = Environment.getExternalStorageDirectory().toString(); // returns ==> /storage/emulated/0/
getDirectory(rootPath);

To fetch the inner files/folders of a specific directory, use the same method; just change the argument, pass the currently selected path as the argument, and handle the response for it in the same way.

To get a file extension:

private String getExtension(String filename) {
    String filenameArray[] = filename.split("\\.");
    String extension = filenameArray[filenameArray.length - 1];
    Log.d(TAG, "getExtension: " + extension);
    return extension;
}

Section 50.5: Save Database on SD Card (Backup DB on SD)

public static Boolean ExportDB(String DATABASE_NAME, String packageName, String folderName) {
    // DATABASE_NAME includes ".db" at the end, like "myApp.db"
    String DBName = DATABASE_NAME.substring(0, DATABASE_NAME.length() - 3);
    File data = Environment.getDataDirectory();
    FileChannel source = null;
    FileChannel destination = null;
    String currentDBPath = "/data/" + packageName + "/databases/" + DATABASE_NAME; // getting app db path
    File sd = Environment.getExternalStorageDirectory(); // getting phone SD card path
    String backupPath = sd.getAbsolutePath() + folderName; // if you want to put the backup in a specific folder
    /* be careful: the folder name must begin like this: "/myFolder".
       Don't forget the "/" at the beginning of the folder name.
       You could define the folder name like "/myOutterFolder/MyInnerFolder" and so on ... */
    File dir = new File(backupPath);
    if (!dir.exists()) { // if there was no folder at this path, create it
    {
        dir.mkdirs();
    }
    DateFormat dateFormat = new SimpleDateFormat("yyyy-MM-dd_HH-mm-ss");
    Date date = new Date();
    /* include the date in the file name to keep backups ordered
       and to prevent creating a file with the same name */
    File currentDB = new File(data, currentDBPath);
    File backupDB = new File(backupPath, DBName + "(" + dateFormat.format(date) + ").db");
    try {
        if (currentDB.exists() && !backupDB.exists()) {
            source = new FileInputStream(currentDB).getChannel();
            destination = new FileOutputStream(backupDB).getChannel();
            destination.transferFrom(source, 0, source.size());
            source.close();
            destination.close();
            return true;
        }
        return false;
    } catch (IOException e) {
        e.printStackTrace();
        return false;
    }
}

Call this method this way:

ExportDB("myDB.db", "com.example.exam", "/myFolder");

Chapter 51: Zip file in android

Section 51.1: Zip file on android

import android.util.Log;

import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class Compress {
    private static final int BUFFER = 2048;

    private String[] _files;
    private String _zipFile;

    public Compress(String[] files, String zipFile) {
        _files = files;
        _zipFile = zipFile;
    }

    public void zip() {
        try {
            BufferedInputStream origin = null;
            FileOutputStream dest = new FileOutputStream(_zipFile);
            ZipOutputStream out = new ZipOutputStream(new BufferedOutputStream(dest));
            byte data[] = new byte[BUFFER];

            for (int i = 0; i < _files.length; i++) {
                Log.v("Compress", "Adding: " + _files[i]);
                FileInputStream fi = new FileInputStream(_files[i]);
                origin = new BufferedInputStream(fi, BUFFER);
                ZipEntry entry = new ZipEntry(_files[i].substring(_files[i].lastIndexOf("/") + 1));
                out.putNextEntry(entry);
                int count;
                while ((count = origin.read(data, 0, BUFFER)) != -1) {
                    out.write(data, 0, count);
                }
                origin.close();
            }
            out.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

Chapter 52: Unzip File in Android

Section 52.1: Unzip file

private boolean unpackZip(String path, String zipname) {
    InputStream is;
    ZipInputStream zis;
    try {
        String filename;
        is = new FileInputStream(path + zipname);
        zis = new ZipInputStream(new BufferedInputStream(is));
        ZipEntry ze;
        byte[] buffer = new byte[1024];
        int count;

        while ((ze = zis.getNextEntry()) != null) {
            // write to the file
            filename = ze.getName();

            // Need to create directories if they don't exist, or
            // it will generate an Exception...
            if (ze.isDirectory()) {
                File fmd = new File(path + filename);
                fmd.mkdirs();
                continue;
            }

            FileOutputStream fout = new FileOutputStream(path + filename);

            // read the zip and write the output
            while ((count = zis.read(buffer)) != -1) {
                fout.write(buffer, 0, count);
            }

            fout.close();
            zis.closeEntry();
        }

        zis.close();
    } catch (IOException e) {
        e.printStackTrace();
        return false;
    }

    return true;
}

Chapter 53: Camera and Gallery

Section 53.1: Take photo

Add a permission to access the camera to the AndroidManifest file:

<uses-permission android:name="android.permission.CAMERA"></uses-permission>
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />

Xml file:

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="vertical"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent" >
    <SurfaceView
        android:id="@+id/surfaceView"
        android:layout_height="0dip"
        android:layout_width="0dip"></SurfaceView>
    <ImageView
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:id="@+id/imageView"></ImageView>
</LinearLayout>

Activity

import java.io.IOException;

import android.app.Activity;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.hardware.Camera;
import android.hardware.Camera.Parameters;
import android.os.Bundle;
import android.view.SurfaceHolder;
import android.view.SurfaceView;
import android.widget.ImageView;

public class TakePicture extends Activity implements SurfaceHolder.Callback {
    //a variable to store a reference to the Image View at the main.xml file
    private ImageView iv_image;
    //a variable to store a reference to the Surface View at the main.xml file
    private SurfaceView sv;

    //a bitmap to display the captured image
    private Bitmap bmp;

    //Camera variables
    //a surface holder
    private SurfaceHolder sHolder;
    //a variable to control the camera
    private Camera mCamera;
    //the camera parameters
    private Parameters parameters;

    /** Called when the activity is first created.
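     * (Added note, not in the original: the zero-sized SurfaceView in the layout
     * gives the camera a live preview surface, which the old Camera API requires
     * before it can take a picture, while the ImageView displays the captured frame.)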
*/ @Override public void onCreate(Bundle savedInstanceState) { GoalKicker.com Android Notes for Professionals 394 super.onCreate(savedInstanceState); setContentView(R.layout.main); //get the Image View at the main.xml file iv_image = (ImageView) findViewById(R.id.imageView); //get the Surface View at the main.xml file sv = (SurfaceView) findViewById(R.id.surfaceView); //Get a surface sHolder = sv.getHolder(); //add the callback interface methods defined below as the Surface View callbacks sHolder.addCallback(this); //tells Android that this surface will have its data constantly replaced sHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS); } @Override public void surfaceChanged(SurfaceHolder arg0, int arg1, int arg2, int arg3) { //get camera parameters parameters = mCamera.getParameters(); //set camera parameters mCamera.setParameters(parameters); mCamera.startPreview(); //sets what code should be executed after the picture is taken Camera.PictureCallback mCall = new Camera.PictureCallback() { @Override public void onPictureTaken(byte[] data, Camera camera) { //decode the data obtained by the camera into a Bitmap bmp = BitmapFactory.decodeByteArray(data, 0, data.length); String filename=Environment.getExternalStorageDirectory() + File.separator + "testimage.jpg"; FileOutputStream out = null; try { out = new FileOutputStream(filename); bmp.compress(Bitmap.CompressFormat.PNG, 100, out); // bmp is your Bitmap instance // PNG is a lossless format, the compression factor (100) is ignored } catch (Exception e) { e.printStackTrace(); } finally { try { if (out != null) { out.close(); } } catch (IOException e) { e.printStackTrace(); } } //set the iv_image iv_image.setImageBitmap(bmp); } }; mCamera.takePicture(null, null, mCall); GoalKicker.com Android Notes for Professionals 395 } @Override public void surfaceCreated(SurfaceHolder holder) { // The Surface has been created, acquire the camera and tell it where // to draw the preview. mCamera = Camera.open(); try { mCamera.setPreviewDisplay(holder); } catch (IOException exception) { mCamera.release(); mCamera = null; } } @Override public void surfaceDestroyed(SurfaceHolder holder) { //stop the preview mCamera.stopPreview(); //release the camera mCamera.release(); //unbind the camera from this object mCamera = null; } } Section 53.2: Taking full-sized photo from camera To take a photo, rst we need to declare required permissions in AndroidManifest.xml. We need two permissions: Camera - to open camera app. If attribute required is set to true you will not be able to install this app if you don't have hardware camera. WRITE_EXTERNAL_STORAGE - This permission is required to create new le, in which captured photo will be saved. AndroidManifest.xml <uses-feature android:name="android.hardware.camera" android:required="true" /> <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/> The main idea in taking full-sized photo from camera is that we need to create new le for photo, before we open camera app and capture photo. 
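Note that the snippet below (like the original example) hands the camera app a file:// Uri via Uri.fromFile(). On Android 7.0 (API 24) and above this throws a FileUriExposedException; if you target those versions you have to pass a content:// Uri through a FileProvider instead. A minimal sketch, assuming you have declared a <provider> in the manifest; the authority string "com.example.fileprovider" here is a placeholder, not part of the original example:

// Instead of Uri.fromFile(photoFile):
Uri photoUri = FileProvider.getUriForFile(
        this,                          // a Context
        "com.example.fileprovider",    // must match android:authorities in the manifest
        photoFile);
takePictureIntent.putExtra(MediaStore.EXTRA_OUTPUT, photoUri);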
private void dispatchTakePictureIntent() { Intent takePictureIntent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE); // Ensure that there's a camera activity to handle the intent if (takePictureIntent.resolveActivity(getPackageManager()) != null) { // Create the File where the photo should go File photoFile = null; try { photoFile = createImageFile(); } catch (IOException ex) { Log.e("DEBUG_TAG", "createFile", ex); } // Continue only if the File was successfully created if (photoFile != null) { GoalKicker.com Android Notes for Professionals 396 takePictureIntent.putExtra(MediaStore.EXTRA_OUTPUT, Uri.fromFile(photoFile)); startActivityForResult(takePictureIntent, REQUEST_IMAGE_CAPTURE); } } } private File createImageFile() throws IOException { // Create an image file name String timeStamp = new SimpleDateFormat("yyyyMMdd_HHmmss", Locale.getDefault()).format(new Date()); String imageFileName = "JPEG_" + timeStamp + "_"; File storageDir = getAlbumDir(); File image = File.createTempFile( imageFileName, /* prefix */ ".jpg", /* suffix */ storageDir /* directory */ ); // Save a file: path for use with ACTION_VIEW intents mCurrentPhotoPath = image.getAbsolutePath(); return image; } private File getAlbumDir() { File storageDir = null; if (Environment.MEDIA_MOUNTED.equals(Environment.getExternalStorageState())) { storageDir = new File(Environment.getExternalStorageDirectory() + "/dcim/" + "MyRecipes"); if (!storageDir.mkdirs()) { if (!storageDir.exists()) { Log.d("CameraSample", "failed to create directory"); return null; } } } else { Log.v(getString(R.string.app_name), "External storage is not mounted READ/WRITE."); } return storageDir; } private void setPic() { /* There isn't enough memory to open up more than a couple camera photos */ /* So pre-scale the target bitmap into which the file is decoded */ /* Get the size of the ImageView */ int targetW = recipeImage.getWidth(); int targetH = recipeImage.getHeight(); /* Get the size of the image */ BitmapFactory.Options bmOptions = new BitmapFactory.Options(); bmOptions.inJustDecodeBounds = true; BitmapFactory.decodeFile(mCurrentPhotoPath, bmOptions); int photoW = bmOptions.outWidth; int photoH = bmOptions.outHeight; GoalKicker.com Android Notes for Professionals 397 /* Figure out which way needs to be reduced less */ int scaleFactor = 2; if ((targetW > 0) && (targetH > 0)) { scaleFactor = Math.max(photoW / targetW, photoH / targetH); } /* Set bitmap options to scale the image decode target */ bmOptions.inJustDecodeBounds = false; bmOptions.inSampleSize = scaleFactor; bmOptions.inPurgeable = true; Matrix matrix = new Matrix(); matrix.postRotate(getRotation()); /* Decode the JPEG file into a Bitmap */ Bitmap bitmap = BitmapFactory.decodeFile(mCurrentPhotoPath, bmOptions); bitmap = Bitmap.createBitmap(bitmap, 0, 0, bitmap.getWidth(), bitmap.getHeight(), matrix, false); /* Associate the Bitmap to the ImageView */ recipeImage.setImageBitmap(bitmap); } private float getRotation() { try { ExifInterface ei = new ExifInterface(mCurrentPhotoPath); int orientation = ei.getAttributeInt(ExifInterface.TAG_ORIENTATION, ExifInterface.ORIENTATION_NORMAL); switch (orientation) { case ExifInterface.ORIENTATION_ROTATE_90: return 90f; case ExifInterface.ORIENTATION_ROTATE_180: return 180f; case ExifInterface.ORIENTATION_ROTATE_270: return 270f; default: return 0f; } } catch (Exception e) { Log.e("Add Recipe", "getRotation", e); return 0f; } } private void galleryAddPic() { Intent mediaScanIntent = new Intent(Intent.ACTION_MEDIA_SCANNER_SCAN_FILE); File f = new 
File(mCurrentPhotoPath);
    Uri contentUri = Uri.fromFile(f);
    mediaScanIntent.setData(contentUri);
    sendBroadcast(mediaScanIntent);
}

private void handleBigCameraPhoto() {
    if (mCurrentPhotoPath != null) {
        setPic();
        galleryAddPic();
    }
}

@Override
public void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == REQUEST_IMAGE_CAPTURE && resultCode == Activity.RESULT_OK) {
        handleBigCameraPhoto();
    }
}

Section 53.3: Decode bitmap correctly rotated from the uri fetched with the intent

private static final String TAG = "IntentBitmapFetch";
private static final String COLON_SEPARATOR = ":";
private static final String IMAGE = "image";

@Nullable
public Bitmap getBitmap(@NonNull Uri bitmapUri, int maxDimen) {
    InputStream is;
    try {
        is = context.getContentResolver().openInputStream(bitmapUri);
    } catch (FileNotFoundException e) {
        return null;
    }
    Bitmap bitmap = BitmapFactory.decodeStream(is, null, getBitmapOptions(bitmapUri, maxDimen));

    int imgRotation = getImageRotationDegrees(bitmapUri);

    int endRotation = (imgRotation < 0) ? -imgRotation : imgRotation;
    endRotation %= 360;
    endRotation = 90 * (endRotation / 90);
    if (endRotation > 0 && bitmap != null) {
        Matrix m = new Matrix();
        m.setRotate(endRotation);
        Bitmap tmp = Bitmap.createBitmap(bitmap, 0, 0, bitmap.getWidth(), bitmap.getHeight(), m, true);
        if (tmp != null) {
            bitmap.recycle();
            bitmap = tmp;
        }
    }

    return bitmap;
}

private BitmapFactory.Options getBitmapOptions(Uri uri, int imageMaxDimen) {
    BitmapFactory.Options options = new BitmapFactory.Options();
    if (imageMaxDimen > 0) {
        options.inJustDecodeBounds = true;
        decodeImage(null, uri, options); // measuring pass (helper not shown in this snippet)
        options.inSampleSize = calculateScaleFactor(options, imageMaxDimen);
        options.inJustDecodeBounds = false;
        options.inPreferredConfig = Bitmap.Config.RGB_565;
        addInBitmapOptions(options); // helper not shown in this snippet
    }
    return options;
}

private int calculateScaleFactor(@NonNull BitmapFactory.Options bitmapOptionsMeasureOnly, int imageMaxDimen) {
    int inSampleSize = 1;
    if (bitmapOptionsMeasureOnly.outHeight > imageMaxDimen || bitmapOptionsMeasureOnly.outWidth > imageMaxDimen) {
        final int halfHeight = bitmapOptionsMeasureOnly.outHeight / 2;
        final int halfWidth = bitmapOptionsMeasureOnly.outWidth / 2;
        while ((halfHeight / inSampleSize) > imageMaxDimen && (halfWidth / inSampleSize) > imageMaxDimen) {
            inSampleSize *= 2;
        }
    }
    return inSampleSize;
}

public int getImageRotationDegrees(@NonNull Uri imgUri) {
    int photoRotation = ExifInterface.ORIENTATION_UNDEFINED;
    try {
        boolean hasRotation = false;
        //If the image comes from the gallery and is not in the DCIM folder (scheme: content://)
        String[] projection = {MediaStore.Images.ImageColumns.ORIENTATION};
        Cursor cursor = context.getContentResolver().query(imgUri, projection, null, null, null);
        if (cursor != null) {
            if (cursor.getColumnCount() > 0 && cursor.moveToFirst()) {
                photoRotation = cursor.getInt(cursor.getColumnIndex(projection[0]));
                hasRotation = photoRotation != 0;
                Log.d(TAG, "Cursor orientation: " + photoRotation);
            }
            cursor.close();
        }

        //If the image comes from the camera (scheme: file://) or is from the DCIM folder (scheme: content://)
        if (!hasRotation) {
            ExifInterface exif = new ExifInterface(getAbsolutePath(imgUri));
            int exifRotation = exif.getAttributeInt(ExifInterface.TAG_ORIENTATION,
                    ExifInterface.ORIENTATION_NORMAL);
            switch (exifRotation) {
                case ExifInterface.ORIENTATION_ROTATE_90: {
                    photoRotation = 90;
                    break;
                }
                case ExifInterface.ORIENTATION_ROTATE_180: {
                    photoRotation = 180;
                    break;
                }
                case ExifInterface.ORIENTATION_ROTATE_270: {
                    photoRotation = 270;
                    break;
                }
            }
            Log.d(TAG, "Exif orientation: " + photoRotation);
        }
    } catch (IOException e) {
        Log.e(TAG, "Error determining rotation for image " + imgUri, e);
    }
    return photoRotation;
}

@TargetApi(Build.VERSION_CODES.KITKAT)
private String getAbsolutePath(Uri uri) {
    //Code snippet edited from: http://stackoverflow.com/a/20559418/2235133
    String filePath = uri.getPath();
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.KITKAT && DocumentsContract.isDocumentUri(context, uri)) {
        // Will return "image:x*"
        String[] wholeID = TextUtils.split(DocumentsContract.getDocumentId(uri), COLON_SEPARATOR);
        // Split at colon, use second item in the array
        String type = wholeID[0];
        if (IMAGE.equalsIgnoreCase(type)) { //if it is not of type image, it comes from a remote location, like Google Photos
            String id = wholeID[1];
            String[] column = {MediaStore.Images.Media.DATA};
            // where id is equal to
            String sel = MediaStore.Images.Media._ID + "=?";
            Cursor cursor = context.getContentResolver().
                    query(MediaStore.Images.Media.EXTERNAL_CONTENT_URI,
                            column, sel, new String[]{id}, null);
            if (cursor != null) {
                int columnIndex = cursor.getColumnIndex(column[0]);
                if (cursor.moveToFirst()) {
                    filePath = cursor.getString(columnIndex);
                }
                cursor.close();
            }
            Log.d(TAG, "Fetched absolute path for uri " + uri);
        }
    }
    return filePath;
}

Section 53.4: Set camera resolution

Set a high resolution programmatically:

Camera mCamera = Camera.open();
Camera.Parameters params = mCamera.getParameters();

// Check what resolutions are supported by your camera
List<Size> sizes = params.getSupportedPictureSizes();

// Iterate through all available resolutions and choose one.
// The chosen resolution will be stored in mSize.
// (This simple example just keeps the last size in the list;
// pick whichever entry suits your needs.)
Size mSize = sizes.get(0);
for (Size size : sizes) {
    Log.i(TAG, "Available resolution: " + size.width + " " + size.height);
    mSize = size;
}

Log.i(TAG, "Chosen resolution: " + mSize.width + " " + mSize.height);
params.setPictureSize(mSize.width, mSize.height);
mCamera.setParameters(params);

Section 53.5: How to start camera or gallery and save camera result to storage

First of all you need a Uri, temp folders and request codes:

public final int REQUEST_SELECT_PICTURE = 0x01;
public final int REQUEST_CODE_TAKE_PICTURE = 0x2;
public static String TEMP_PHOTO_FILE_NAME = "photo_";
Uri mImageCaptureUri;
File mFileTemp;

Then init mFileTemp:

public void initTempFile() {
    String state = Environment.getExternalStorageState();
    if (Environment.MEDIA_MOUNTED.equals(state)) {
        mFileTemp = new File(Environment.getExternalStorageDirectory()
                + File.separator + getResources().getString(R.string.app_foldername)
                + File.separator + getResources().getString(R.string.pictures_folder),
                TEMP_PHOTO_FILE_NAME + System.currentTimeMillis() + ".jpg");
        mFileTemp.getParentFile().mkdirs();
    } else {
        mFileTemp = new File(getFilesDir()
                + File.separator + getResources().getString(R.string.app_foldername)
                + File.separator + getResources().getString(R.string.pictures_folder),
                TEMP_PHOTO_FILE_NAME + System.currentTimeMillis() + ".jpg");
        mFileTemp.getParentFile().mkdirs();
    }
}

Opening Camera and Gallery intents:

public void openCamera() {
    Intent intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
    try {
        mImageCaptureUri = null;
        String state = Environment.getExternalStorageState();
        if (Environment.MEDIA_MOUNTED.equals(state)) {
            mImageCaptureUri = Uri.fromFile(mFileTemp);
        } else {
            mImageCaptureUri = InternalStorageContentProvider.CONTENT_URI;
        }
        intent.putExtra(MediaStore.EXTRA_OUTPUT, mImageCaptureUri);
        intent.putExtra("return-data", true);
        startActivityForResult(intent, REQUEST_CODE_TAKE_PICTURE);
    } catch (Exception e) {
        Log.d("error", "cannot take picture", e);
    }
}

public void openGallery() {
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.JELLY_BEAN && ActivityCompat.checkSelfPermission(this,
            Manifest.permission.READ_EXTERNAL_STORAGE) != PackageManager.PERMISSION_GRANTED) {
        requestPermission(Manifest.permission.READ_EXTERNAL_STORAGE,
                getString(R.string.permission_read_storage_rationale),
                REQUEST_STORAGE_READ_ACCESS_PERMISSION);
    } else {
        Intent intent = new Intent();
        intent.setType("image/*");
        intent.setAction(Intent.ACTION_GET_CONTENT);
        intent.addCategory(Intent.CATEGORY_OPENABLE);
        startActivityForResult(Intent.createChooser(intent, getString(R.string.select_image)),
                REQUEST_SELECT_PICTURE);
    }
}

Then in the onActivityResult method:

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    if (resultCode != RESULT_OK) {
        return;
    }
    Bitmap bitmap;
    switch (requestCode) {
        case REQUEST_SELECT_PICTURE:
            try {
                Uri uri = data.getData();
                try {
                    bitmap = MediaStore.Images.Media.getBitmap(getContentResolver(), uri);
                    Bitmap bitmapScaled = Bitmap.createScaledBitmap(bitmap, 800, 800, true);
                    Drawable drawable = new BitmapDrawable(bitmapScaled);
                    mImage.setImageDrawable(drawable);
                    mImage.setVisibility(View.VISIBLE);
                } catch (IOException e) {
                    Log.v("act result", "there is an error : " + e.getMessage());
                }
            } catch (Exception e) {
                Log.v("act result", "there is an error : " + e.getMessage());
            }
            break;
        case REQUEST_CODE_TAKE_PICTURE:
            try {
                Bitmap bitmappicture = MediaStore.Images.Media.getBitmap(getContentResolver(), mImageCaptureUri);
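                // (Added note, not in the original: getBitmap() decodes the photo at
                // full resolution; for large captures consider downsampling with
                // BitmapFactory.Options.inSampleSize, as shown in Section 53.3.)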
                mImage.setImageBitmap(bitmappicture);
                mImage.setVisibility(View.VISIBLE);
            } catch (IOException e) {
                Log.v("error camera", e.getMessage());
            }
            break;
    }
    super.onActivityResult(requestCode, resultCode, data);
}

You need these permissions in AndroidManifest.xml:

<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.CAMERA" />

And you need to handle runtime permissions such as Read/Write external storage etc. I am checking the READ_EXTERNAL_STORAGE permission in my openGallery method.

My requestPermission method:

protected void requestPermission(final String permission, String rationale, final int requestCode) {
    if (ActivityCompat.shouldShowRequestPermissionRationale(this, permission)) {
        showAlertDialog(getString(R.string.permission_title_rationale), rationale,
                new DialogInterface.OnClickListener() {
                    @Override
                    public void onClick(DialogInterface dialog, int which) {
                        ActivityCompat.requestPermissions(BasePermissionActivity.this,
                                new String[]{permission}, requestCode);
                    }
                }, getString(android.R.string.ok), null, getString(android.R.string.cancel));
    } else {
        ActivityCompat.requestPermissions(this, new String[]{permission}, requestCode);
    }
}

Then override the onRequestPermissionsResult method:

@Override
public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions, @NonNull int[] grantResults) {
    switch (requestCode) {
        case REQUEST_STORAGE_READ_ACCESS_PERMISSION:
            if (grantResults[0] == PackageManager.PERMISSION_GRANTED) {
                handleGallery();
            }
            break;
        default:
            super.onRequestPermissionsResult(requestCode, permissions, grantResults);
    }
}

showAlertDialog method:

protected void showAlertDialog(@Nullable String title, @Nullable String message,
                               @Nullable DialogInterface.OnClickListener onPositiveButtonClickListener,
                               @NonNull String positiveText,
                               @Nullable DialogInterface.OnClickListener onNegativeButtonClickListener,
                               @NonNull String negativeText) {
    AlertDialog.Builder builder = new AlertDialog.Builder(this);
    builder.setTitle(title);
    builder.setMessage(message);
    builder.setPositiveButton(positiveText, onPositiveButtonClickListener);
    builder.setNegativeButton(negativeText, onNegativeButtonClickListener);
    mAlertDialog = builder.show();
}

Chapter 54: Camera 2 API

Parameter / Details

CameraCaptureSession - A configured capture session for a CameraDevice, used for capturing images from the camera or reprocessing images captured from the camera in the same session previously.

CameraDevice - A representation of a single camera connected to an Android device.

CameraCharacteristics - The properties describing a CameraDevice. These properties are fixed for a given CameraDevice, and can be queried through the CameraManager interface with getCameraCharacteristics(String).

CameraManager - A system service manager for detecting, characterizing, and connecting to CameraDevices. You can get an instance of this class by calling Context.getSystemService().

CaptureRequest - An immutable package of settings and outputs needed to capture a single image from the camera device. Contains the configuration for the capture hardware (sensor, lens, flash), the processing pipeline, the control algorithms, and the output buffers. Also contains the list of target Surfaces to send image data to for this capture. Can be created by using a CaptureRequest.Builder instance, obtained by calling createCaptureRequest(int).

CaptureResult - The subset of the results of a single image capture from the image sensor. Contains a subset of the final configuration for the capture hardware (sensor, lens, flash), the processing pipeline, the control algorithms, and the output buffers. It is produced by a CameraDevice after processing a CaptureRequest.

Section 54.1: Preview the main camera in a TextureView

This example builds against API 23, so runtime permissions are handled too. You must add the following permission to the Manifest (whatever API level you're using):

<uses-permission android:name="android.permission.CAMERA"/>

We're about to create an activity (Camera2Activity.java) that fills a TextureView with the preview of the device's camera.

The Activity we're going to use is a typical AppCompatActivity:

public class Camera2Activity extends AppCompatActivity {

Attributes (you may need to read the entire example to understand some of them)

The MAX_PREVIEW_SIZE guaranteed by the Camera2 API is 1920x1080:

private static final int MAX_PREVIEW_WIDTH = 1920;
private static final int MAX_PREVIEW_HEIGHT = 1080;

TextureView.SurfaceTextureListener handles several lifecycle events of a TextureView. In this case, we're listening to those events: when the SurfaceTexture is ready, we initialize the camera; when its size changes, we set up the preview coming from the camera accordingly.

private final TextureView.SurfaceTextureListener mSurfaceTextureListener
        = new TextureView.SurfaceTextureListener() {

    @Override
    public void onSurfaceTextureAvailable(SurfaceTexture texture, int width, int height) {
        openCamera(width, height);
    }
mCameraOpenCloseLock.release(); mCameraDevice = cameraDevice; createCameraPreviewSession(); } @Override public void onDisconnected(@NonNull CameraDevice cameraDevice) { mCameraOpenCloseLock.release(); cameraDevice.close(); mCameraDevice = null; } @Override public void onError(@NonNull CameraDevice cameraDevice, int error) { GoalKicker.com Android Notes for Professionals 406 mCameraOpenCloseLock.release(); cameraDevice.close(); mCameraDevice = null; finish(); } }; An additional thread for running tasks that shouldn't block the UI private HandlerThread mBackgroundThread; A Handler for running tasks in the background private Handler mBackgroundHandler; An ImageReader that handles still image capture private ImageReader mImageReader; CaptureRequest.Builder for the camera preview private CaptureRequest.Builder mPreviewRequestBuilder; CaptureRequest generated by mPreviewRequestBuilder private CaptureRequest mPreviewRequest; A Semaphore to prevent the app from exiting before closing the camera. private Semaphore mCameraOpenCloseLock = new Semaphore(1); Constant ID of the permission request private static final int REQUEST_CAMERA_PERMISSION = 1; Android Lifecycle methods @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_camera2); mTextureView = (TextureView) findViewById(R.id.texture); } @Override public void onResume() { super.onResume(); startBackgroundThread(); // When the screen is turned off and turned back on, the SurfaceTexture is already // available, and "onSurfaceTextureAvailable" will not be called. In that case, we can open // a camera and start preview from here (otherwise, we wait until the surface is ready in // the SurfaceTextureListener). if (mTextureView.isAvailable()) { openCamera(mTextureView.getWidth(), mTextureView.getHeight()); GoalKicker.com Android Notes for Professionals 407 } else { mTextureView.setSurfaceTextureListener(mSurfaceTextureListener); } } @Override public void onPause() { closeCamera(); stopBackgroundThread(); super.onPause(); } Camera2 related methods Those are methods that uses the Camera2 APIs private void openCamera(int width, int height) { if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA) != PackageManager.PERMISSION_GRANTED) { requestCameraPermission(); return; } setUpCameraOutputs(width, height); configureTransform(width, height); CameraManager manager = (CameraManager) getSystemService(Context.CAMERA_SERVICE); try { if (!mCameraOpenCloseLock.tryAcquire(2500, TimeUnit.MILLISECONDS)) { throw new RuntimeException("Time out waiting to lock camera opening."); } manager.openCamera(mCameraId, mStateCallback, mBackgroundHandler); } catch (CameraAccessException e) { e.printStackTrace(); } catch (InterruptedException e) { throw new RuntimeException("Interrupted while trying to lock camera opening.", e); } } Closes the current camera private void closeCamera() { try { mCameraOpenCloseLock.acquire(); if (null != mCaptureSession) { mCaptureSession.close(); mCaptureSession = null; } if (null != mCameraDevice) { mCameraDevice.close(); mCameraDevice = null; } if (null != mImageReader) { mImageReader.close(); mImageReader = null; } } catch (InterruptedException e) { throw new RuntimeException("Interrupted while trying to lock camera closing.", e); } finally { mCameraOpenCloseLock.release(); } } GoalKicker.com Android Notes for Professionals 408 Sets up member variables related to camera private void setUpCameraOutputs(int width, int height) { CameraManager manager = 
(CameraManager) getSystemService(Context.CAMERA_SERVICE); try { for (String cameraId : manager.getCameraIdList()) { CameraCharacteristics characteristics = manager.getCameraCharacteristics(cameraId); // We don't use a front facing camera in this sample. Integer facing = characteristics.get(CameraCharacteristics.LENS_FACING); if (facing != null && facing == CameraCharacteristics.LENS_FACING_FRONT) { continue; } StreamConfigurationMap map = characteristics.get( CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP); if (map == null) { continue; } // For still image captures, we use the largest available size. Size largest = Collections.max( Arrays.asList(map.getOutputSizes(ImageFormat.JPEG)), new CompareSizesByArea()); mImageReader = ImageReader.newInstance(largest.getWidth(), largest.getHeight(), ImageFormat.JPEG, /*maxImages*/2); mImageReader.setOnImageAvailableListener( null, mBackgroundHandler); Point displaySize = new Point(); getWindowManager().getDefaultDisplay().getSize(displaySize); int rotatedPreviewWidth = width; int rotatedPreviewHeight = height; int maxPreviewWidth = displaySize.x; int maxPreviewHeight = displaySize.y; if (maxPreviewWidth > MAX_PREVIEW_WIDTH) { maxPreviewWidth = MAX_PREVIEW_WIDTH; } if (maxPreviewHeight > MAX_PREVIEW_HEIGHT) { maxPreviewHeight = MAX_PREVIEW_HEIGHT; } // Danger! Attempting to use too large a preview size could exceed the camera // bus' bandwidth limitation, resulting in gorgeous previews but the storage of // garbage capture data. mPreviewSize = chooseOptimalSize(map.getOutputSizes(SurfaceTexture.class), rotatedPreviewWidth, rotatedPreviewHeight, maxPreviewWidth, maxPreviewHeight, largest); mCameraId = cameraId; return; } } catch (CameraAccessException e) { e.printStackTrace(); } catch (NullPointerException e) { // Currently an NPE is thrown when the Camera2API is used but not supported on the // device this code runs. Toast.makeText(Camera2Activity.this, "Camera2 API not supported on this device", Toast.LENGTH_LONG).show(); GoalKicker.com Android Notes for Professionals 409 } } Creates a new CameraCaptureSession for camera preview private void createCameraPreviewSession() { try { SurfaceTexture texture = mTextureView.getSurfaceTexture(); assert texture != null; // We configure the size of default buffer to be the size of camera preview we want. texture.setDefaultBufferSize(mPreviewSize.getWidth(), mPreviewSize.getHeight()); // This is the output Surface we need to start preview. Surface surface = new Surface(texture); // We set up a CaptureRequest.Builder with the output Surface. mPreviewRequestBuilder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW); mPreviewRequestBuilder.addTarget(surface); // Here, we create a CameraCaptureSession for camera preview. mCameraDevice.createCaptureSession(Arrays.asList(surface, mImageReader.getSurface()), new CameraCaptureSession.StateCallback() { @Override public void onConfigured(@NonNull CameraCaptureSession cameraCaptureSession) { // The camera is already closed if (null == mCameraDevice) { return; } // When the session is ready, we start displaying the preview. mCaptureSession = cameraCaptureSession; try { // Auto focus should be continuous for camera preview. mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AF_MODE, CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE); // Finally, we start displaying the camera preview. 
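                        // (Added note, not in the original comments: build() freezes the
                        // builder's settings into an immutable CaptureRequest, and
                        // setRepeatingRequest() below re-submits that same request for
                        // every preview frame until the session is closed or replaced.)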
mPreviewRequest = mPreviewRequestBuilder.build(); mCaptureSession.setRepeatingRequest(mPreviewRequest, null, mBackgroundHandler); } catch (CameraAccessException e) { e.printStackTrace(); } } @Override public void onConfigureFailed( @NonNull CameraCaptureSession cameraCaptureSession) { showToast("Failed"); } }, null ); } catch (CameraAccessException e) { e.printStackTrace(); } } Permissions related methods For Android API 23+ GoalKicker.com Android Notes for Professionals 410 private void requestCameraPermission() { if (ActivityCompat.shouldShowRequestPermissionRationale(this, Manifest.permission.CAMERA)) { new AlertDialog.Builder(Camera2Activity.this) .setMessage("R string request permission") .setPositiveButton(android.R.string.ok, new DialogInterface.OnClickListener() { @Override public void onClick(DialogInterface dialog, int which) { ActivityCompat.requestPermissions(Camera2Activity.this, new String[]{Manifest.permission.CAMERA}, REQUEST_CAMERA_PERMISSION); } }) .setNegativeButton(android.R.string.cancel, new DialogInterface.OnClickListener() { @Override public void onClick(DialogInterface dialog, int which) { finish(); } }) .create(); } else { ActivityCompat.requestPermissions(this, new String[]{Manifest.permission.CAMERA}, REQUEST_CAMERA_PERMISSION); } } @Override public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions, @NonNull int[] grantResults) { if (requestCode == REQUEST_CAMERA_PERMISSION) { if (grantResults.length != 1 || grantResults[0] != PackageManager.PERMISSION_GRANTED) { Toast.makeText(Camera2Activity.this, "ERROR: Camera permissions not granted", Toast.LENGTH_LONG).show(); } } else { super.onRequestPermissionsResult(requestCode, permissions, grantResults); } } Background thread / handler methods private void startBackgroundThread() { mBackgroundThread = new HandlerThread("CameraBackground"); mBackgroundThread.start(); mBackgroundHandler = new Handler(mBackgroundThread.getLooper()); } private void stopBackgroundThread() { mBackgroundThread.quitSafely(); try { mBackgroundThread.join(); mBackgroundThread = null; mBackgroundHandler = null; } catch (InterruptedException e) { e.printStackTrace(); } } Utility methods GoalKicker.com Android Notes for Professionals 411 Given choices of Sizes supported by a camera, choose the smallest one that is at least at large as the respective texture view size, and that is as most as large as the respective max size, and whose aspect ratio matches with the specied value. If doesn't exist, choose the largest one that is at most as large as the respective max size, and whose aspect ratio matches with the specied value private static Size chooseOptimalSize(Size[] choices, int textureViewWidth, int textureViewHeight, int maxWidth, int maxHeight, Size aspectRatio) { // Collect the supported resolutions that are at least as big as the preview Surface List<Size> bigEnough = new ArrayList<>(); // Collect the supported resolutions that are smaller than the preview Surface List<Size> notBigEnough = new ArrayList<>(); int w = aspectRatio.getWidth(); int h = aspectRatio.getHeight(); for (Size option : choices) { if (option.getWidth() <= maxWidth && option.getHeight() <= maxHeight && option.getHeight() == option.getWidth() * h / w) { if (option.getWidth() >= textureViewWidth && option.getHeight() >= textureViewHeight) { bigEnough.add(option); } else { notBigEnough.add(option); } } } // Pick the smallest of those big enough. If there is no one big enough, pick the // largest of those not big enough. 
    if (bigEnough.size() > 0) {
        return Collections.min(bigEnough, new CompareSizesByArea());
    } else if (notBigEnough.size() > 0) {
        return Collections.max(notBigEnough, new CompareSizesByArea());
    } else {
        Log.e("Camera2", "Couldn't find any suitable preview size");
        return choices[0];
    }
}

This method configures the necessary Matrix transformation to mTextureView:

private void configureTransform(int viewWidth, int viewHeight) {
    if (null == mTextureView || null == mPreviewSize) {
        return;
    }
    int rotation = getWindowManager().getDefaultDisplay().getRotation();
    Matrix matrix = new Matrix();
    RectF viewRect = new RectF(0, 0, viewWidth, viewHeight);
    RectF bufferRect = new RectF(0, 0, mPreviewSize.getHeight(), mPreviewSize.getWidth());
    float centerX = viewRect.centerX();
    float centerY = viewRect.centerY();
    if (Surface.ROTATION_90 == rotation || Surface.ROTATION_270 == rotation) {
        bufferRect.offset(centerX - bufferRect.centerX(), centerY - bufferRect.centerY());
        matrix.setRectToRect(viewRect, bufferRect, Matrix.ScaleToFit.FILL);
        float scale = Math.max(
                (float) viewHeight / mPreviewSize.getHeight(),
                (float) viewWidth / mPreviewSize.getWidth());
        matrix.postScale(scale, scale, centerX, centerY);
        matrix.postRotate(90 * (rotation - 2), centerX, centerY);
    } else if (Surface.ROTATION_180 == rotation) {
        matrix.postRotate(180, centerX, centerY);
    }
    mTextureView.setTransform(matrix);
}

This method compares two Sizes based on their areas:

static class CompareSizesByArea implements Comparator<Size> {
    @Override
    public int compare(Size lhs, Size rhs) {
        // We cast here to ensure the multiplications won't overflow
        return Long.signum((long) lhs.getWidth() * lhs.getHeight() -
                (long) rhs.getWidth() * rhs.getHeight());
    }
}

Not much to see here.

/**
 * Shows a {@link Toast} on the UI thread.
 *
 * @param text The message to show
 */
private void showToast(final String text) {
    runOnUiThread(new Runnable() {
        @Override
        public void run() {
            Toast.makeText(Camera2Activity.this, text, Toast.LENGTH_SHORT).show();
        }
    });
}

Chapter 55: Fingerprint API in android

Section 55.1: How to use Android Fingerprint API to save user passwords

This example helper class interacts with the fingerprint manager and performs encryption and decryption of a password. Please note that the method used for encryption in this example is AES. This is not the only way to encrypt, and other examples exist. In this example the data is encrypted and decrypted in the following manner:

Encryption:

1. User gives the helper the desired non-encrypted password.
2. User is required to provide a fingerprint.
3. Once authenticated, the helper obtains a key from the KeyStore and encrypts the password using a Cipher.
4. Password and IV salt (the IV is recreated for every encryption and is not reused) are saved to shared preferences to be used later in the decryption process.

Decryption:

1. User requests to decrypt the password.
2. User is required to provide a fingerprint.
3. The helper builds a Cipher using the IV and, once the user is authenticated, obtains a key from the KeyStore and deciphers the password.
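To make the flow above concrete, here is a minimal, self-contained sketch of the AES/CBC round trip the helper performs, written against plain javax.crypto. This sketch is an addition to the original example; it uses a throwaway KeyGenerator key instead of an Android KeyStore key, so it runs without any fingerprint prompt:

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;
import java.nio.charset.StandardCharsets;

public class AesRoundTripSketch {
    public static void main(String[] args) throws Exception {
        // Stand-in for the KeyStore-backed key the helper uses
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey key = kg.generateKey();

        // Encrypt: a fresh IV is produced by the cipher; the helper persists it
        Cipher enc = Cipher.getInstance("AES/CBC/PKCS5Padding");
        enc.init(Cipher.ENCRYPT_MODE, key);
        byte[] iv = enc.getIV();
        byte[] cipherText = enc.doFinal("secret".getBytes(StandardCharsets.UTF_8));

        // Decrypt: the saved IV must be supplied back via IvParameterSpec
        Cipher dec = Cipher.getInstance("AES/CBC/PKCS5Padding");
        dec.init(Cipher.DECRYPT_MODE, key, new IvParameterSpec(iv));
        System.out.println(new String(dec.doFinal(cipherText), StandardCharsets.UTF_8));
    }
}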
public class FingerPrintAuthHelper {

    private static final String FINGER_PRINT_HELPER = "FingerPrintAuthHelper";
    private static final String ENCRYPTED_PASS_SHARED_PREF_KEY = "ENCRYPTED_PASS_SHARED_PREF_KEY";
    private static final String LAST_USED_IV_SHARED_PREF_KEY = "LAST_USED_IV_SHARED_PREF_KEY";
    private static final String MY_APP_ALIAS = "MY_APP_ALIAS";

    private KeyguardManager keyguardManager;
    private FingerprintManager fingerprintManager;

    private final Context context;
    private KeyStore keyStore;
    private KeyGenerator keyGenerator;

    private String lastError;

    public interface Callback {
        void onSuccess(String savedPass);
        void onFailure(String message);
        void onHelp(int helpCode, String helpString);
    }

    public FingerPrintAuthHelper(Context context) {
        this.context = context;
    }

    public String getLastError() {
        return lastError;
    }

    @TargetApi(Build.VERSION_CODES.M)
    public boolean init() {
        if (Build.VERSION.SDK_INT < Build.VERSION_CODES.M) {
            setError("This Android version does not support fingerprint authentication");
            return false;
        }

        keyguardManager = (KeyguardManager) context.getSystemService(KEYGUARD_SERVICE);
        fingerprintManager = (FingerprintManager) context.getSystemService(FINGERPRINT_SERVICE);

        if (!keyguardManager.isKeyguardSecure()) {
            setError("User hasn't enabled Lock Screen");
            return false;
        }

        if (!hasPermission()) {
            setError("User hasn't granted permission to use Fingerprint");
            return false;
        }

        if (!fingerprintManager.hasEnrolledFingerprints()) {
            setError("User hasn't registered any fingerprints");
            return false;
        }

        if (!initKeyStore()) {
            return false;
        }
        return true; // all checks passed
    }

    @Nullable
    @RequiresApi(api = Build.VERSION_CODES.M)
    private Cipher createCipher(int mode) throws NoSuchPaddingException, NoSuchAlgorithmException,
            UnrecoverableKeyException, KeyStoreException, InvalidKeyException, InvalidAlgorithmParameterException {
        Cipher cipher = Cipher.getInstance(KeyProperties.KEY_ALGORITHM_AES + "/"
                + KeyProperties.BLOCK_MODE_CBC + "/" + KeyProperties.ENCRYPTION_PADDING_PKCS7);
        Key key = keyStore.getKey(MY_APP_ALIAS, null);
        if (key == null) {
            return null;
        }
        if (mode == Cipher.ENCRYPT_MODE) {
            cipher.init(mode, key);
            byte[] iv = cipher.getIV();
            saveIv(iv);
        } else {
            byte[] lastIv = getLastIv();
            cipher.init(mode, key, new IvParameterSpec(lastIv));
        }
        return cipher;
    }

    @NonNull
    @RequiresApi(api = Build.VERSION_CODES.M)
    private KeyGenParameterSpec createKeyGenParameterSpec() {
        return new KeyGenParameterSpec.Builder(MY_APP_ALIAS,
                KeyProperties.PURPOSE_ENCRYPT | KeyProperties.PURPOSE_DECRYPT)
                .setBlockModes(KeyProperties.BLOCK_MODE_CBC)
                .setUserAuthenticationRequired(true)
                .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_PKCS7)
                .build();
    }

    @RequiresApi(api = Build.VERSION_CODES.M)
    private boolean initKeyStore() {
        try {
            keyStore = KeyStore.getInstance("AndroidKeyStore");
            keyGenerator = KeyGenerator.getInstance(KeyProperties.KEY_ALGORITHM_AES, "AndroidKeyStore");
            keyStore.load(null);
            if (getLastIv() == null) {
                KeyGenParameterSpec keyGeneratorSpec = createKeyGenParameterSpec();
                keyGenerator.init(keyGeneratorSpec);
                keyGenerator.generateKey();
            }
        } catch (Throwable t) {
            setError("Failed init of keyStore & keyGenerator: " + t.getMessage());
            return false;
        }
        return true;
    }

    @RequiresApi(api = Build.VERSION_CODES.M)
    private void authenticate(CancellationSignal cancellationSignal, FingerPrintAuthenticationListener authListener, int mode) {
        try {
            if (hasPermission()) {
                Cipher cipher = createCipher(mode);
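                // (Added note, not in the original comments: wrapping the Cipher in a
                // CryptoObject ties the fingerprint result to this exact crypto operation;
                // because the key was created with setUserAuthenticationRequired(true),
                // it only becomes usable once the sensor has authenticated the user.)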
FingerprintManager.CryptoObject crypto = new FingerprintManager.CryptoObject(cipher); fingerprintManager.authenticate(crypto, cancellationSignal, 0, authListener, null); } else { authListener.getCallback().onFailure("User hasn't granted permission to use Fingerprint"); } } catch (Throwable t) { authListener.getCallback().onFailure("An error occurred: " + t.getMessage()); } } private String getSavedEncryptedPassword() { SharedPreferences sharedPreferences = getSharedPreferences(); if (sharedPreferences != null) { return sharedPreferences.getString(ENCRYPTED_PASS_SHARED_PREF_KEY, null); } return null; } private void saveEncryptedPassword(String encryptedPassword) { SharedPreferences.Editor edit = getSharedPreferences().edit(); edit.putString(ENCRYPTED_PASS_SHARED_PREF_KEY, encryptedPassword); edit.commit(); } private byte[] getLastIv() { SharedPreferences sharedPreferences = getSharedPreferences(); if (sharedPreferences != null) { String ivString = sharedPreferences.getString(LAST_USED_IV_SHARED_PREF_KEY, null); if (ivString != null) { return decodeBytes(ivString); GoalKicker.com Android Notes for Professionals 416 } } return null; } private void saveIv(byte[] iv) { SharedPreferences.Editor edit = getSharedPreferences().edit(); String string = encodeBytes(iv); edit.putString(LAST_USED_IV_SHARED_PREF_KEY, string); edit.commit(); } private SharedPreferences getSharedPreferences() { return context.getSharedPreferences(FINGER_PRINT_HELPER, 0); } @RequiresApi(api = Build.VERSION_CODES.M) private boolean hasPermission() { return ActivityCompat.checkSelfPermission(context, Manifest.permission.USE_FINGERPRINT) == PackageManager.PERMISSION_GRANTED; } @RequiresApi(api = Build.VERSION_CODES.M) public void savePassword(@NonNull String password, CancellationSignal cancellationSignal, Callback callback) { authenticate(cancellationSignal, new FingerPrintEncryptPasswordListener(callback, password), Cipher.ENCRYPT_MODE); } @RequiresApi(api = Build.VERSION_CODES.M) public void getPassword(CancellationSignal cancellationSignal, Callback callback) { authenticate(cancellationSignal, new FingerPrintDecryptPasswordListener(callback), Cipher.DECRYPT_MODE); } @RequiresApi(api = Build.VERSION_CODES.M) public boolean encryptPassword(Cipher cipher, String password) { try { // Encrypt the text if(password.isEmpty()) { setError("Password is empty"); return false; } if (cipher == null) { setError("Could not create cipher"); return false; } ByteArrayOutputStream outputStream = new ByteArrayOutputStream(); CipherOutputStream cipherOutputStream = new CipherOutputStream(outputStream, cipher); byte[] bytes = password.getBytes(Charset.defaultCharset()); cipherOutputStream.write(bytes); cipherOutputStream.flush(); cipherOutputStream.close(); saveEncryptedPassword(encodeBytes(outputStream.toByteArray())); } catch (Throwable t) { setError("Encryption failed " + t.getMessage()); return false; } return true; GoalKicker.com Android Notes for Professionals 417 } private byte[] decodeBytes(String s) { final int len = s.length(); // "111" is not a valid hex encoding. 
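        // (Added note: encodeBytes() below writes each byte as two hex characters,
        // so any valid input must have an even length.)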
if( len%2 != 0 ) throw new IllegalArgumentException("hexBinary needs to be even-length: "+s); byte[] out = new byte[len/2]; for( int i=0; i<len; i+=2 ) { int h = hexToBin(s.charAt(i )); int l = hexToBin(s.charAt(i+1)); if( h==-1 || l==-1 ) throw new IllegalArgumentException("contains illegal character for hexBinary: "+s); out[i/2] = (byte)(h*16+l); } return out; } private static int hexToBin( char ch ) { if( '0'<=ch && ch<='9' ) return ch-'0'; if( 'A'<=ch && ch<='F' ) return ch-'A'+10; if( 'a'<=ch && ch<='f' ) return ch-'a'+10; return -1; } private static final char[] hexCode = "0123456789ABCDEF".toCharArray(); public String encodeBytes(byte[] data) { StringBuilder r = new StringBuilder(data.length*2); for ( byte b : data) { r.append(hexCode[(b >> 4) & 0xF]); r.append(hexCode[(b & 0xF)]); } return r.toString(); } @NonNull private String decipher(Cipher cipher) throws IOException, IllegalBlockSizeException, BadPaddingException { String retVal = null; String savedEncryptedPassword = getSavedEncryptedPassword(); if (savedEncryptedPassword != null) { byte[] decodedPassword = decodeBytes(savedEncryptedPassword); CipherInputStream cipherInputStream = new CipherInputStream(new ByteArrayInputStream(decodedPassword), cipher); ArrayList<Byte> values = new ArrayList<>(); int nextByte; while ((nextByte = cipherInputStream.read()) != -1) { values.add((byte) nextByte); } cipherInputStream.close(); byte[] bytes = new byte[values.size()]; for (int i = 0; i < values.size(); i++) { bytes[i] = values.get(i).byteValue(); } GoalKicker.com Android Notes for Professionals 418 retVal = new String(bytes, Charset.defaultCharset()); } return retVal; } private void setError(String error) { lastError = error; Log.w(FINGER_PRINT_HELPER, lastError); } @RequiresApi(Build.VERSION_CODES.M) protected class FingerPrintAuthenticationListener extends FingerprintManager.AuthenticationCallback { protected final Callback callback; public FingerPrintAuthenticationListener(@NonNull Callback callback) { this.callback = callback; } public void onAuthenticationError(int errorCode, CharSequence errString) { callback.onFailure("Authentication error [" + errorCode + "] " + errString); } /** * Called when a recoverable error has been encountered during authentication. The help * string is provided to give the user guidance for what went wrong, such as * "Sensor dirty, please clean it." * @param helpCode An integer identifying the error message * @param helpString A human-readable string that can be shown in UI */ public void onAuthenticationHelp(int helpCode, CharSequence helpString) { callback.onHelp(helpCode, helpString.toString()); } /** * Called when a fingerprint is recognized. * @param result An object containing authentication-related data */ public void onAuthenticationSucceeded(FingerprintManager.AuthenticationResult result) { } /** * Called when a fingerprint is valid but not recognized. 
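     * (For example, the sensor read a finger successfully, but it did not match
     * any fingerprint enrolled on the device.)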
*/ public void onAuthenticationFailed() { callback.onFailure("Authentication failed"); } public @NonNull Callback getCallback() { return callback; } } @RequiresApi(api = Build.VERSION_CODES.M) private class FingerPrintEncryptPasswordListener extends FingerPrintAuthenticationListener { private final String password; public FingerPrintEncryptPasswordListener(Callback callback, String password) { GoalKicker.com Android Notes for Professionals 419 super(callback); this.password = password; } public void onAuthenticationSucceeded(FingerprintManager.AuthenticationResult result) { Cipher cipher = result.getCryptoObject().getCipher(); try { if (encryptPassword(cipher, password)) { callback.onSuccess("Encrypted"); } else { callback.onFailure("Encryption failed"); } } catch (Exception e) { callback.onFailure("Encryption failed " + e.getMessage()); } } } @RequiresApi(Build.VERSION_CODES.M) protected class FingerPrintDecryptPasswordListener extends FingerPrintAuthenticationListener { public FingerPrintDecryptPasswordListener(@NonNull Callback callback) { super(callback); } public void onAuthenticationSucceeded(FingerprintManager.AuthenticationResult result) { Cipher cipher = result.getCryptoObject().getCipher(); try { String savedPass = decipher(cipher); if (savedPass != null) { callback.onSuccess(savedPass); } else { callback.onFailure("Failed deciphering"); } } catch (Exception e) { callback.onFailure("Deciphering failed " + e.getMessage()); } } } } This activity below is a very basic example of how to get a user saved password and interact with the helper. public class MainActivity extends AppCompatActivity { private TextView passwordTextView; private FingerPrintAuthHelper fingerPrintAuthHelper; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); passwordTextView = (TextView) findViewById(R.id.password); errorTextView = (TextView) findViewById(R.id.error); View savePasswordButton = findViewById(R.id.set_password_button); savePasswordButton.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) { GoalKicker.com Android Notes for Professionals 420 fingerPrintAuthHelper.savePassword(passwordTextView.getText().toString(), new CancellationSignal(), getAuthListener(false)); } } }); View getPasswordButton = findViewById(R.id.get_password_button); getPasswordButton.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) { fingerPrintAuthHelper.getPassword(new CancellationSignal(), getAuthListener(true)); } } }); } // Start the finger print helper. In case this fails show error to user private void startFingerPrintAuthHelper() { fingerPrintAuthHelper = new FingerPrintAuthHelper(this); if (!fingerPrintAuthHelper.init()) { errorTextView.setText(fingerPrintAuthHelper.getLastError()); } } @NonNull private FingerPrintAuthHelper.Callback getAuthListener(final boolean isGetPass) { return new FingerPrintAuthHelper.Callback() { @Override public void onSuccess(String result) { if (isGetPass) { errorTextView.setText("Success!!! 
Pass = " + result); } else { errorTextView.setText("Encrypted pass = " + result); } } @Override public void onFailure(String message) { errorTextView.setText("Failed - " + message); } @Override public void onHelp(int helpCode, String helpString) { errorTextView.setText("Help needed - " + helpString); } }; } } Section 55.2: Adding the Fingerprint Scanner in Android application Android supports ngerprint api from Android 6.0 (Marshmallow) SDK 23 To use this feature in your app, rst add the USE_FINGERPRINT permission in your manifest. <uses-permission GoalKicker.com Android Notes for Professionals 421 android:name="android.permission.USE_FINGERPRINT" /> Here the procedure to follow First you need to create a symmetric key in the Android Key Store using KeyGenerator which can be only be used after the user has authenticated with ngerprint and pass a KeyGenParameterSpec. KeyPairGenerator.getInstance(KeyProperties.KEY_ALGORITHM_EC, "AndroidKeyStore"); keyPairGenerator.initialize( new KeyGenParameterSpec.Builder(KEY_NAME, KeyProperties.PURPOSE_SIGN) .setDigests(KeyProperties.DIGEST_SHA256) .setAlgorithmParameterSpec(new ECGenParameterSpec("secp256r1")) .setUserAuthenticationRequired(true) .build()); keyPairGenerator.generateKeyPair(); By setting KeyGenParameterSpec.Builder.setUserAuthenticationRequired to true, you can permit the use of the key only after the user authenticate it including when authenticated with the user's ngerprint. KeyStore keyStore = KeyStore.getInstance("AndroidKeyStore"); keyStore.load(null); PublicKey publicKey = keyStore.getCertificate(MainActivity.KEY_NAME).getPublicKey(); KeyStore keyStore = KeyStore.getInstance("AndroidKeyStore"); keyStore.load(null); PrivateKey key = (PrivateKey) keyStore.getKey(KEY_NAME, null); Then start listening to a ngerprint on the ngerprint sensor by calling FingerprintManager.authenticate with a Cipher initialized with the symmetric key created. Or alternatively you can fall back to server-side veried password as an authenticator. Create and initialise the FingerprintManger from fingerprintManger.class getContext().getSystemService(FingerprintManager.class) To authenticate use FingerprintManger api and create subclass using FingerprintManager.AuthenticationCallback and override the methods onAuthenticationError onAuthenticationHelp onAuthenticationSucceeded onAuthenticationFailed To Start To startListening the ngerPrint event call authenticate method with crypto fingerprintManager .authenticate(cryptoObject, mCancellationSignal, 0 , this, null); GoalKicker.com Android Notes for Professionals 422 Cancel to stop listenting the scanner call android.os.CancellationSignal; Once the ngerprint (or password) is veried, the FingerprintManager.AuthenticationCallback#onAuthenticationSucceeded() callback is called. 
@Override
public void onAuthenticationSucceeded(AuthenticationResult result) {
}

Chapter 56: Bluetooth and Bluetooth LE API

Section 56.1: Permissions

Add this permission to the manifest file to use Bluetooth features in your application:

<uses-permission android:name="android.permission.BLUETOOTH" />

If you need to initiate device discovery or manipulate Bluetooth settings, you also need to add this permission:

<uses-permission android:name="android.permission.BLUETOOTH_ADMIN" />

Targeting Android API level 23 and above additionally requires location access:

<uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" />
<!-- OR -->
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />

Also see the Permissions topic for more details on how to use permissions appropriately.

Section 56.2: Check if bluetooth is enabled

private static final int REQUEST_ENABLE_BT = 1; // Unique request code
BluetoothAdapter mBluetoothAdapter;

// ...

if (!mBluetoothAdapter.isEnabled()) {
    Intent enableBtIntent = new Intent(BluetoothAdapter.ACTION_REQUEST_ENABLE);
    startActivityForResult(enableBtIntent, REQUEST_ENABLE_BT);
}

// ...

@Override
protected void onActivityResult(final int requestCode, final int resultCode, final Intent data) {
    super.onActivityResult(requestCode, resultCode, data);

    if (requestCode == REQUEST_ENABLE_BT) {
        if (resultCode == RESULT_OK) {
            // Bluetooth was enabled
        } else if (resultCode == RESULT_CANCELED) {
            // Bluetooth was not enabled
        }
    }
}

Section 56.3: Find nearby Bluetooth Low Energy devices

The BluetoothLE API was introduced in API 18. However, the way of scanning for devices changed in API 21. Scanning must start by defining the service UUIDs to scan for (either officially adopted 16-bit UUIDs or proprietary ones). This example illustrates how to search for BLE devices in an API-independent way:

1. Create a bluetooth device model:

public class BTDevice {
    String address;
    String name;

    public String getAddress() {
        return address;
    }

    public void setAddress(String address) {
        this.address = address;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}

2. Define the Bluetooth scanning interface:

public interface ScanningAdapter {
    void startScanning(String[] uuids);
    void stopScanning();
    List<BTDevice> getFoundDeviceList();
}

3. Create the scanning factory class:

public class BluetoothScanningFactory implements ScanningAdapter {

    private ScanningAdapter mScanningAdapter;

    public BluetoothScanningFactory() {
        if (isNewerAPI()) {
            mScanningAdapter = new LollipopBluetoothLEScanAdapter();
        } else {
            mScanningAdapter = new JellyBeanBluetoothLEScanAdapter();
        }
    }

    private boolean isNewerAPI() {
        return Build.VERSION.SDK_INT >= Build.VERSION_CODES.LOLLIPOP;
    }

    @Override
    public void startScanning(String[] uuids) {
        mScanningAdapter.startScanning(uuids);
    }

    @Override
    public void stopScanning() {
        mScanningAdapter.stopScanning();
    }

    @Override
    public List<BTDevice> getFoundDeviceList() {
        return mScanningAdapter.getFoundDeviceList();
    }
}

4.
4. Create a factory implementation for each API:

API 18:

import android.annotation.TargetApi;
import android.bluetooth.BluetoothAdapter;
import android.bluetooth.BluetoothDevice;
import android.os.Build;
import android.os.Parcelable;
import android.util.Log;
import bluetooth.model.BTDevice;
import java.util.ArrayList;
import java.util.List;
import java.util.UUID;

@TargetApi(Build.VERSION_CODES.JELLY_BEAN_MR2)
public class JellyBeanBluetoothLEScanAdapter implements ScanningAdapter {

    BluetoothAdapter bluetoothAdapter;
    ScanCallback mCallback;
    List<BTDevice> mBluetoothDeviceList;

    public JellyBeanBluetoothLEScanAdapter() {
        bluetoothAdapter = BluetoothAdapter.getDefaultAdapter();
        mCallback = new ScanCallback();
        mBluetoothDeviceList = new ArrayList<>();
    }

    @Override
    public void startScanning(String[] uuids) {
        if (uuids == null || uuids.length == 0) {
            return;
        }
        UUID[] uuidList = createUUIDList(uuids);
        bluetoothAdapter.startLeScan(uuidList, mCallback);
    }

    private UUID[] createUUIDList(String[] uuids) {
        UUID[] uuidList = new UUID[uuids.length];
        for (int i = 0; i < uuids.length; ++i) {
            String uuid = uuids[i];
            if (uuid == null) {
                continue;
            }
            uuidList[i] = UUID.fromString(uuid);
        }
        return uuidList;
    }

    @Override
    public void stopScanning() {
        bluetoothAdapter.stopLeScan(mCallback);
    }

    @Override
    public List<BTDevice> getFoundDeviceList() {
        return mBluetoothDeviceList;
    }

    private class ScanCallback implements BluetoothAdapter.LeScanCallback {

        @Override
        public void onLeScan(BluetoothDevice device, int rssi, byte[] scanRecord) {
            if (isAlreadyAdded(device)) {
                return;
            }
            BTDevice btDevice = new BTDevice();
            btDevice.setName(device.getName());
            btDevice.setAddress(device.getAddress());
            mBluetoothDeviceList.add(btDevice);
            Log.d("Bluetooth discovery", device.getName() + " " + device.getAddress());

            Parcelable[] uuids = device.getUuids();
            String uuid = "";
            if (uuids != null) {
                for (Parcelable ep : uuids) {
                    uuid += ep + " ";
                }
                Log.d("Bluetooth discovery", device.getName() + " " + device.getAddress() + " " + uuid);
            }
        }

        private boolean isAlreadyAdded(BluetoothDevice bluetoothDevice) {
            for (BTDevice device : mBluetoothDeviceList) {
                String alreadyAddedDeviceMACAddress = device.getAddress();
                String newDeviceMACAddress = bluetoothDevice.getAddress();
                if (alreadyAddedDeviceMACAddress.equals(newDeviceMACAddress)) {
                    return true;
                }
            }
            return false;
        }
    }
}

API 21:

import android.annotation.TargetApi;
import android.bluetooth.BluetoothAdapter;
import android.bluetooth.le.BluetoothLeScanner;
import android.bluetooth.le.ScanFilter;
import android.bluetooth.le.ScanResult;
import android.bluetooth.le.ScanSettings;
import android.os.Build;
import android.os.ParcelUuid;
import bluetooth.model.BTDevice;
import java.util.ArrayList;
import java.util.List;

@TargetApi(Build.VERSION_CODES.LOLLIPOP)
public class LollipopBluetoothLEScanAdapter implements ScanningAdapter {

    BluetoothLeScanner bluetoothLeScanner;
    ScanCallback mCallback;
    List<BTDevice> mBluetoothDeviceList;

    public LollipopBluetoothLEScanAdapter() {
        bluetoothLeScanner = BluetoothAdapter.getDefaultAdapter().getBluetoothLeScanner();
        mCallback = new ScanCallback();
        mBluetoothDeviceList = new ArrayList<>();
    }

    @Override
    public void startScanning(String[] uuids) {
        if (uuids == null || uuids.length == 0) {
            return;
        }
        List<ScanFilter> filterList = createScanFilterList(uuids);
        ScanSettings scanSettings = createScanSettings();
        bluetoothLeScanner.startScan(filterList, scanSettings, mCallback);
    }

    private List<ScanFilter> createScanFilterList(String[] uuids) {
        List<ScanFilter> filterList = new ArrayList<>();
        for (String uuid : uuids) {
            ScanFilter filter = new ScanFilter.Builder()
                    .setServiceUuid(ParcelUuid.fromString(uuid))
                    .build();
            filterList.add(filter);
        }
        return filterList;
    }

    private ScanSettings createScanSettings() {
        ScanSettings settings = new ScanSettings.Builder()
                .setScanMode(ScanSettings.SCAN_MODE_BALANCED)
                .build();
        return settings;
    }

    @Override
    public void stopScanning() {
        bluetoothLeScanner.stopScan(mCallback);
    }

    @Override
    public List<BTDevice> getFoundDeviceList() {
        return mBluetoothDeviceList;
    }

    public class ScanCallback extends android.bluetooth.le.ScanCallback {

        @Override
        public void onScanResult(int callbackType, ScanResult result) {
            super.onScanResult(callbackType, result);
            if (result == null || result.getScanRecord() == null) {
                return;
            }
            BTDevice device = new BTDevice();
            device.setAddress(result.getDevice().getAddress());
            device.setName(result.getScanRecord().getDeviceName());
            if (device.getAddress() == null) {
                return;
            }
            if (isAlreadyAdded(device)) {
                return;
            }
            mBluetoothDeviceList.add(device);
        }

        private boolean isAlreadyAdded(BTDevice bluetoothDevice) {
            for (BTDevice device : mBluetoothDeviceList) {
                String alreadyAddedDeviceMACAddress = device.getAddress();
                String newDeviceMACAddress = bluetoothDevice.getAddress();
                if (alreadyAddedDeviceMACAddress.equals(newDeviceMACAddress)) {
                    return true;
                }
            }
            return false;
        }
    }
}

5. Get the found device list by calling:

scanningFactory.startScanning(uuidList); // uuidList: String[] of service UUIDs
// wait a few seconds...
List<BTDevice> bluetoothDeviceList = scanningFactory.getFoundDeviceList();
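For concreteness, a valid-Java version of that call could look like the following sketch; the Heart Rate service UUID used here is only an illustrative choice:

final BluetoothScanningFactory scanningFactory = new BluetoothScanningFactory();

// 16-bit UUID 0x180D (Heart Rate) expanded to its 128-bit Bluetooth base form
scanningFactory.startScanning(new String[] {"0000180d-0000-1000-8000-00805f9b34fb"});

// Collect the results after a few seconds instead of blocking the UI thread
new Handler().postDelayed(new Runnable() {
    @Override
    public void run() {
        scanningFactory.stopScanning();
        List<BTDevice> bluetoothDeviceList = scanningFactory.getFoundDeviceList();
        // use the devices found so far
    }
}, 5000);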
Section 56.4: Make device discoverable

private static final int REQUEST_DISCOVERABLE_BT = 2; // Unique request code
private static final int DISCOVERABLE_DURATION = 120; // Discoverable duration time in seconds
                                                      // 0 means always discoverable
                                                      // maximum value is 3600
// ...

Intent discoverableIntent = new Intent(BluetoothAdapter.ACTION_REQUEST_DISCOVERABLE);
discoverableIntent.putExtra(BluetoothAdapter.EXTRA_DISCOVERABLE_DURATION, DISCOVERABLE_DURATION);
startActivityForResult(discoverableIntent, REQUEST_DISCOVERABLE_BT);

// ...

@Override
protected void onActivityResult(final int requestCode, final int resultCode, final Intent data) {
    super.onActivityResult(requestCode, resultCode, data);

    if (requestCode == REQUEST_DISCOVERABLE_BT) {
        if (resultCode == RESULT_OK) {
            // Device is discoverable
        } else if (resultCode == RESULT_CANCELED) {
            // Device is not discoverable
        }
    }
}

Section 56.5: Connect to Bluetooth device

After you have obtained a BluetoothDevice, you can communicate with it. This kind of communication is performed using socket input/output streams.

These are the basic steps for establishing a Bluetooth communication:

1) Initialize the socket:

private BluetoothSocket _socket;
//...

public InitializeSocket(BluetoothDevice device) {
    try {
        _socket = device.createRfcommSocketToServiceRecord(<Your app UUID>);
    } catch (IOException e) {
        //Error
    }
}

2) Connect to the socket:

try {
    _socket.connect();
} catch (IOException connEx) {
    try {
        _socket.close();
    } catch (IOException closeException) {
        //Error
    }
}

if (_socket != null && _socket.isConnected()) {
    //Socket is connected, now we can obtain our IO streams
}

3) Obtain the socket input/output streams:

private InputStream _inStream;
private OutputStream _outStream;
//....
try {
    _inStream = _socket.getInputStream();
    _outStream = _socket.getOutputStream();
} catch (IOException e) {
    //Error
}

Input stream - used as the incoming data channel (receive data from the connected device)
Output stream - used as the outgoing data channel (send data to the connected device)

After finishing the 3rd step, we can receive and send data between both devices using the previously initialized streams:

1) Receiving data (reading from the socket input stream):

byte[] buffer = new byte[1024]; // buffer (our data)
int bytesCount; // amount of read bytes

while (true) {
    try {
        //reading data from input stream
        bytesCount = _inStream.read(buffer);
        if (buffer != null && bytesCount > 0) {
            //Parse received bytes
        }
    } catch (IOException e) {
        //Error
    }
}

2) Sending data (writing to the output stream):

public void write(byte[] bytes) {
    try {
        _outStream.write(bytes);
    } catch (IOException e) {
        //Error
    }
}

Of course, the connection, reading and writing functionality should be done in a dedicated thread. Socket and stream objects also need to be closed when the communication is finished.
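A minimal sketch of that dedicated-thread advice, reusing the _socket and _inStream pieces from above (parseBytes() is a hypothetical handler for the received data):

private class ConnectedThread extends Thread {
    @Override
    public void run() {
        byte[] buffer = new byte[1024];
        int bytesCount;
        while (!isInterrupted()) {
            try {
                // blocks until data arrives on the socket
                bytesCount = _inStream.read(buffer);
                if (bytesCount > 0) {
                    parseBytes(buffer, bytesCount); // hypothetical handler
                }
            } catch (IOException e) {
                break; // connection was lost, leave the loop
            }
        }
    }

    void cancel() {
        try {
            _socket.close(); // also unblocks the read() above
        } catch (IOException e) {
            //Error
        }
    }
}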
Section 56.6: Find nearby bluetooth devices

Declare a BluetoothAdapter first:

BluetoothAdapter mBluetoothAdapter;

Now create a BroadcastReceiver for ACTION_FOUND:

private final BroadcastReceiver mReceiver = new BroadcastReceiver() {
    public void onReceive(Context context, Intent intent) {
        String action = intent.getAction();

        //Device found
        if (BluetoothDevice.ACTION_FOUND.equals(action)) {
            // Get the BluetoothDevice object from the Intent
            BluetoothDevice device = intent.getParcelableExtra(BluetoothDevice.EXTRA_DEVICE);
            // Add the name and address to an array adapter to show in a list
            mArrayAdapter.add(device.getName() + "\n" + device.getAddress());
        }
    }
};

Register the BroadcastReceiver:

IntentFilter filter = new IntentFilter(BluetoothDevice.ACTION_FOUND);
registerReceiver(mReceiver, filter);

Then start discovering the nearby bluetooth devices by calling startDiscovery:

mBluetoothAdapter.startDiscovery();

Don't forget to unregister the BroadcastReceiver inside onDestroy:

unregisterReceiver(mReceiver);

Chapter 57: Runtime Permissions in API-23+

Android Marshmallow introduced the runtime permission model. Permissions are categorized into two groups, normal and dangerous, where dangerous permissions are now granted by the user at run time.

Section 57.1: Android 6.0 multiple permissions

This example shows how to check permissions at runtime in Android 6 and later.

public static final int MULTIPLE_PERMISSIONS = 10; // code you want.

String[] permissions = new String[] {
    Manifest.permission.WRITE_EXTERNAL_STORAGE,
    Manifest.permission.CAMERA,
    Manifest.permission.ACCESS_COARSE_LOCATION,
    Manifest.permission.ACCESS_FINE_LOCATION
};

@Override
protected void onStart() {
    super.onStart();
    if (checkPermissions()) {
        // permissions granted.
    } else {
        // show dialog informing them that we lack certain permissions
    }
}

private boolean checkPermissions() {
    int result;
    List<String> listPermissionsNeeded = new ArrayList<>();
    for (String p : permissions) {
        result = ContextCompat.checkSelfPermission(this, p);
        if (result != PackageManager.PERMISSION_GRANTED) {
            listPermissionsNeeded.add(p);
        }
    }
    if (!listPermissionsNeeded.isEmpty()) {
        ActivityCompat.requestPermissions(this,
                listPermissionsNeeded.toArray(new String[listPermissionsNeeded.size()]),
                MULTIPLE_PERMISSIONS);
        return false;
    }
    return true;
}

@Override
public void onRequestPermissionsResult(int requestCode, String permissions[], int[] grantResults) {
    switch (requestCode) {
        case MULTIPLE_PERMISSIONS: {
            if (grantResults.length > 0 && grantResults[0] == PackageManager.PERMISSION_GRANTED) {
                // permissions granted.
            } else {
                // no permissions granted.
            }
            return;
        }
    }
}

Section 57.2: Multiple Runtime Permissions From Same Permission Groups

In the manifest we have four dangerous runtime permissions from two groups.

<!-- Required to read and write to the shared preferences file. -->
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE"/>
<!-- Required to get the location of the device. -->
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION"/>
<uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION"/>

In the activity where the permissions are required: note it is important to check for permissions in any activity that requires them, as the permissions can be revoked while the app is in the background, and the app would then crash.

final private int REQUEST_CODE_ASK_MULTIPLE_PERMISSIONS = 124;

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.act_layout);

    // A simple check of whether runtime permissions need to be managed
    if (Build.VERSION.SDK_INT >= 23) {
        checkMultiplePermissions();
    }
}

We only need to ask for permission for one of these from each group; all other permissions from the same group are then granted, unless the permission is revoked by the user.

private void checkMultiplePermissions() {
    if (Build.VERSION.SDK_INT >= 23) {
        List<String> permissionsNeeded = new ArrayList<String>();
        List<String> permissionsList = new ArrayList<String>();

        if (!addPermission(permissionsList, android.Manifest.permission.ACCESS_FINE_LOCATION)) {
            permissionsNeeded.add("GPS");
        }
        if (!addPermission(permissionsList, android.Manifest.permission.READ_EXTERNAL_STORAGE)) {
            permissionsNeeded.add("Read Storage");
        }

        if (permissionsList.size() > 0) {
            requestPermissions(permissionsList.toArray(new String[permissionsList.size()]),
                    REQUEST_CODE_ASK_MULTIPLE_PERMISSIONS);
            return;
        }
    }
}

private boolean addPermission(List<String> permissionsList, String permission) {
    if (Build.VERSION.SDK_INT >= 23) {
        if (checkSelfPermission(permission) != PackageManager.PERMISSION_GRANTED) {
            permissionsList.add(permission);
            // Check for Rationale Option
            if (!shouldShowRequestPermissionRationale(permission))
                return false;
        }
    }
    return true;
}

This deals with the result of the user allowing or not allowing permissions. In this example, if the permissions are not allowed, the app is killed.
@Override
public void onRequestPermissionsResult(int requestCode, String[] permissions, int[] grantResults) {
    switch (requestCode) {
        case REQUEST_CODE_ASK_MULTIPLE_PERMISSIONS: {
            Map<String, Integer> perms = new HashMap<String, Integer>();
            // Initial
            perms.put(android.Manifest.permission.ACCESS_FINE_LOCATION, PackageManager.PERMISSION_GRANTED);
            perms.put(android.Manifest.permission.READ_EXTERNAL_STORAGE, PackageManager.PERMISSION_GRANTED);

            // Fill with results
            for (int i = 0; i < permissions.length; i++)
                perms.put(permissions[i], grantResults[i]);

            if (perms.get(android.Manifest.permission.ACCESS_FINE_LOCATION) == PackageManager.PERMISSION_GRANTED
                    && perms.get(android.Manifest.permission.READ_EXTERNAL_STORAGE) == PackageManager.PERMISSION_GRANTED) {
                // All Permissions Granted
                return;
            } else {
                // Permission Denied
                if (Build.VERSION.SDK_INT >= 23) {
                    Toast.makeText(
                            getApplicationContext(),
                            "My App cannot run without Location and Storage " +
                                    "Permissions.\nRelaunch My App or allow permissions" +
                                    " in Applications Settings",
                            Toast.LENGTH_LONG).show();
                    finish();
                }
            }
            break;
        }
        default:
            super.onRequestPermissionsResult(requestCode, permissions, grantResults);
    }
}

More Information

https://inthecheesefactory.com/blog/things-you-need-to-know-about-android-m-permission-developer-edition/en

Section 57.3: Using PermissionUtil

PermissionUtil is a simple and convenient way of asking for permissions in context. You can easily provide what should happen in case all requested permissions are granted (onAllGranted()), any request was denied (onAnyDenied()), or a rationale is needed (onRational()).

Anywhere in your AppCompatActivity or Fragment where you want to ask for the user's permission:

mRequestObject = PermissionUtil.with(this).request(Manifest.permission.WRITE_EXTERNAL_STORAGE).onAllGranted(
        new Func() {
            @Override
            protected void call() {
                //Happy Path
            }
        }).onAnyDenied(
        new Func() {
            @Override
            protected void call() {
                //Sad Path
            }
        }).ask(REQUEST_CODE_STORAGE);

And add this to onRequestPermissionsResult:

if (mRequestObject != null) {
    mRequestObject.onRequestPermissionsResult(requestCode, permissions, grantResults);
}

Add the requested permission to your AndroidManifest.xml as well:

<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />

Section 57.4: Include all permission-related code in an abstract base class and extend this base class in your activities to achieve cleaner/reusable code

public abstract class BaseActivity extends AppCompatActivity {

    private Map<Integer, PermissionCallback> permissionCallbackMap = new HashMap<>();

    @Override
    protected void onStart() {
        super.onStart();
        ...
    }

    @Override
    public void setContentView(int layoutResId) {
        super.setContentView(layoutResId);
        bindViews();
    }

    ...

    @Override
    public void onRequestPermissionsResult(
            int requestCode, @NonNull String[] permissions, @NonNull int[] grantResults) {
        super.onRequestPermissionsResult(requestCode, permissions, grantResults);

        PermissionCallback callback = permissionCallbackMap.get(requestCode);
        if (callback == null) return;

        // Check whether the permission request was rejected (cancelled).
        if (grantResults.length == 0 && permissions.length > 0) {
            callback.onPermissionDenied(permissions);
            return;
        }

        List<String> grantedPermissions = new ArrayList<>();
        List<String> blockedPermissions = new ArrayList<>();
        List<String> deniedPermissions = new ArrayList<>();
        int index = 0;

        for (String permission : permissions) {
            List<String> permissionList = grantResults[index] == PackageManager.PERMISSION_GRANTED
                    ? grantedPermissions
                    : !ActivityCompat.shouldShowRequestPermissionRationale(this, permission)
                            ? blockedPermissions
                            : deniedPermissions;
            permissionList.add(permission);
            index++;
        }

        if (grantedPermissions.size() > 0) {
            callback.onPermissionGranted(
                    grantedPermissions.toArray(new String[grantedPermissions.size()]));
        }

        if (deniedPermissions.size() > 0) {
            callback.onPermissionDenied(
                    deniedPermissions.toArray(new String[deniedPermissions.size()]));
        }

        if (blockedPermissions.size() > 0) {
            callback.onPermissionBlocked(
                    blockedPermissions.toArray(new String[blockedPermissions.size()]));
        }

        permissionCallbackMap.remove(requestCode);
    }

    /**
     * Check whether a permission is granted or not.
     *
     * @param permission
     * @return
     */
    public boolean hasPermission(String permission) {
        return ContextCompat.checkSelfPermission(this, permission) == PackageManager.PERMISSION_GRANTED;
    }

    /**
     * Request permissions and get the result on the callback.
     *
     * @param permissions
     * @param callback
     */
    public void requestPermission(String[] permissions, @NonNull PermissionCallback callback) {
        int requestCode = permissionCallbackMap.size() + 1;
        permissionCallbackMap.put(requestCode, callback);
        ActivityCompat.requestPermissions(this, permissions, requestCode);
    }

    /**
     * Request a permission and get the result on the callback.
     *
     * @param permission
     * @param callback
     */
    public void requestPermission(String permission, @NonNull PermissionCallback callback) {
        int requestCode = permissionCallbackMap.size() + 1;
        permissionCallbackMap.put(requestCode, callback);
        ActivityCompat.requestPermissions(this, new String[] { permission }, requestCode);
    }
}

Example usage in the activity

The activity should extend the abstract base class defined above, as follows:

private void requestLocationAfterPermissionCheck() {
    if (hasPermission(Manifest.permission.ACCESS_FINE_LOCATION)) {
        requestLocation();
        return;
    }
    // Call the base class method.
    requestPermission(Manifest.permission.ACCESS_FINE_LOCATION, new PermissionCallback() {
        @Override
        public void onPermissionGranted(String[] grantedPermissions) {
            requestLocation();
        }

        @Override
        public void onPermissionDenied(String[] deniedPermissions) {
            // Do something.
        }

        @Override
        public void onPermissionBlocked(String[] blockedPermissions) {
            // Do something.
        }
    });
}
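The PermissionCallback type used throughout this example is not shown in the original snippet; a matching definition, derived from how it is invoked above, would be:

public interface PermissionCallback {
    void onPermissionGranted(String[] grantedPermissions);
    void onPermissionDenied(String[] deniedPermissions);
    void onPermissionBlocked(String[] blockedPermissions);
}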
Section 57.5: Enforcing Permissions in Broadcasts, URI

You can do a permissions check when sending an Intent to a registered broadcast receiver. The permissions you send are cross-checked with the ones registered in the manifest under the <receiver> tag. They restrict who can send broadcasts to the associated receiver.

To send a broadcast request with permissions, specify the permission as a string in the Context.sendBroadcast(Intent intent, String permission) call, but keep in mind that the receiver's app MUST hold that permission in order to receive your broadcast. The receiver should be installed before the sender.

The method signature is:

void sendBroadcast(Intent intent, String receiverPermission)

//for example, to send a broadcast to the Bcastreceiver receiver
Intent broadcast = new Intent(this, Bcastreceiver.class);
sendBroadcast(broadcast, "org.quadcore.mypermission");

and you can specify in your manifest that the broadcast sender is required to include the requested permission sent through sendBroadcast:

<!-- Your special permission -->
<permission
    android:name="org.quadcore.mypermission"
    android:label="my_permission"
    android:protectionLevel="dangerous"></permission>

Also declare the permission in the manifest of the application that is supposed to receive this broadcast:

<!-- I use the permission! -->
<uses-permission android:name="org.quadcore.mypermission"/>
<!-- along with the receiver -->
<receiver android:name="Bcastreceiver" android:exported="true" />

Note: Both a receiver and a broadcaster can require a permission, and when this happens, both permission checks must pass for the Intent to be delivered to the associated target. The app that defines the permission should be installed first.

Find the full documentation on permissions in the Android documentation.

Chapter 58: Android Places API

Section 58.1: Getting Current Places by Using Places API

You can get the current location and local places of the user by using the Google Places API.

First, you should call the PlaceDetectionApi.getCurrentPlace() method in order to retrieve local businesses or other places. This method returns a PlaceLikelihoodBuffer object which contains a list of PlaceLikelihood objects. Then, you can get a Place object by calling the PlaceLikelihood.getPlace() method.

Important: You must request and obtain the ACCESS_FINE_LOCATION permission in order to allow your app to access precise location information.
private static final int PERMISSION_REQUEST_TO_ACCESS_LOCATION = 1;

private TextView txtLocation;
private GoogleApiClient googleApiClient;

@Override
protected void onCreate(@Nullable Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_location);

    txtLocation = (TextView) this.findViewById(R.id.txtLocation);

    googleApiClient = new GoogleApiClient.Builder(this)
            .addApi(Places.GEO_DATA_API)
            .addApi(Places.PLACE_DETECTION_API)
            .enableAutoManage(this, this)
            .build();

    getCurrentLocation();
}

private void getCurrentLocation() {
    if (ActivityCompat.checkSelfPermission(this, Manifest.permission.ACCESS_FINE_LOCATION)
            != PackageManager.PERMISSION_GRANTED) {
        Log.e(LOG_TAG, "Permission is not granted");
        ActivityCompat.requestPermissions(this,
                new String[]{Manifest.permission.ACCESS_FINE_LOCATION},
                PERMISSION_REQUEST_TO_ACCESS_LOCATION);
        return;
    }
    Log.i(LOG_TAG, "Permission is granted");

    PendingResult<PlaceLikelihoodBuffer> result =
            Places.PlaceDetectionApi.getCurrentPlace(googleApiClient, null);
    result.setResultCallback(new ResultCallback<PlaceLikelihoodBuffer>() {
        @Override
        public void onResult(PlaceLikelihoodBuffer likelyPlaces) {
            Log.i(LOG_TAG, String.format("Result received : %d ", likelyPlaces.getCount()));
            StringBuilder stringBuilder = new StringBuilder();
            for (PlaceLikelihood placeLikelihood : likelyPlaces) {
                stringBuilder.append(String.format("Place : '%s' %n",
                        placeLikelihood.getPlace().getName()));
            }
            likelyPlaces.release();
            txtLocation.setText(stringBuilder.toString());
        }
    });
}

@Override
public void onRequestPermissionsResult(int requestCode, String permissions[], int[] grantResults) {
    switch (requestCode) {
        case PERMISSION_REQUEST_TO_ACCESS_LOCATION: {
            // If the request is cancelled, the result arrays are empty.
            if (grantResults.length > 0 && grantResults[0] == PackageManager.PERMISSION_GRANTED) {
                getCurrentLocation();
            } else {
                // Permission denied, boo!
                // Disable the functionality that depends on this permission.
            }
            return;
        }
        // Add further 'case' lines to check for other permissions this app might request.
    }
}

@Override
public void onConnectionFailed(@NonNull ConnectionResult connectionResult) {
    Log.e(LOG_TAG, "GoogleApiClient connection failed: " + connectionResult.getErrorMessage());
}

Section 58.2: Place Autocomplete Integration

The autocomplete feature in the Google Places API for Android provides place predictions to the user: while the user types in the search box, autocomplete shows places matching the query.
AutoCompleteActivity.java

private TextView txtSelectedPlaceName;

@Override
protected void onCreate(@Nullable Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_autocomplete);

    txtSelectedPlaceName = (TextView) this.findViewById(R.id.txtSelectedPlaceName);

    PlaceAutocompleteFragment autocompleteFragment = (PlaceAutocompleteFragment)
            getFragmentManager().findFragmentById(R.id.fragment_autocomplete);

    autocompleteFragment.setOnPlaceSelectedListener(new PlaceSelectionListener() {
        @Override
        public void onPlaceSelected(Place place) {
            Log.i(LOG_TAG, "Place: " + place.getName());
            txtSelectedPlaceName.setText(String.format("Selected places : %s - %s",
                    place.getName(), place.getAddress()));
        }

        @Override
        public void onError(Status status) {
            Log.i(LOG_TAG, "An error occurred: " + status);
            Toast.makeText(AutoCompleteActivity.this, "Place cannot be selected!!",
                    Toast.LENGTH_SHORT).show();
        }
    });
}

activity_autocomplete.xml

<fragment
    android:id="@+id/fragment_autocomplete"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:name="com.google.android.gms.location.places.ui.PlaceAutocompleteFragment" />

<TextView
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:id="@+id/txtSelectedPlaceName"
    android:layout_margin="20dp"
    android:padding="15dp"
    android:hint="@string/txt_select_place_hint"
    android:textSize="@dimen/place_autocomplete_prediction_primary_text"/>

Section 58.3: Place Picker Usage Example

Place Picker is a really simple UI widget provided by the Places API. It provides a built-in map, current location, nearby places, search abilities and autocomplete.

This is a sample usage of the Place Picker UI widget:

private static int PLACE_PICKER_REQUEST = 1;
private TextView txtPlaceName;

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_place_picker_sample);

    txtPlaceName = (TextView) this.findViewById(R.id.txtPlaceName);
    Button btnSelectPlace = (Button) this.findViewById(R.id.btnSelectPlace);
    btnSelectPlace.setOnClickListener(new View.OnClickListener() {
        @Override
        public void onClick(View view) {
            openPlacePickerView();
        }
    });
}

private void openPlacePickerView() {
    PlacePicker.IntentBuilder builder = new PlacePicker.IntentBuilder();
    try {
        startActivityForResult(builder.build(this), PLACE_PICKER_REQUEST);
    } catch (GooglePlayServicesRepairableException e) {
        e.printStackTrace();
    } catch (GooglePlayServicesNotAvailableException e) {
        e.printStackTrace();
    }
}

protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    if (requestCode == PLACE_PICKER_REQUEST) {
        if (resultCode == RESULT_OK) {
            Place place = PlacePicker.getPlace(this, data);
            Log.i(LOG_TAG, String.format("Place Name : %s", place.getName()));
            Log.i(LOG_TAG, String.format("Place Address : %s", place.getAddress()));
            Log.i(LOG_TAG, String.format("Place Id : %s", place.getId()));

            txtPlaceName.setText(String.format("Place : %s - %s", place.getName(), place.getAddress()));
        }
    }
}

Section 58.4: Setting place type filters for PlaceAutocomplete

In some scenarios, we might want to narrow down the results shown by PlaceAutocomplete to a specific country, or to show only regions. This can be achieved by setting an AutocompleteFilter on the intent.
For example, if I want to look only for places of type REGION belonging to India, I would do the following:

MainActivity.java

public class MainActivity extends AppCompatActivity {

    private static final int PLACE_AUTOCOMPLETE_REQUEST_CODE = 1;
    private TextView selectedPlace;

    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        selectedPlace = (TextView) findViewById(R.id.selected_place);
        try {
            AutocompleteFilter typeFilter = new AutocompleteFilter.Builder()
                    .setTypeFilter(AutocompleteFilter.TYPE_FILTER_REGIONS)
                    .setCountry("IN")
                    .build();
            Intent intent = new PlaceAutocomplete.IntentBuilder(PlaceAutocomplete.MODE_FULLSCREEN)
                    .setFilter(typeFilter)
                    .build(this);
            startActivityForResult(intent, PLACE_AUTOCOMPLETE_REQUEST_CODE);
        } catch (GooglePlayServicesRepairableException | GooglePlayServicesNotAvailableException e) {
            e.printStackTrace();
        }
    }

    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        super.onActivityResult(requestCode, resultCode, data);
        if (requestCode == PLACE_AUTOCOMPLETE_REQUEST_CODE && resultCode == Activity.RESULT_OK) {
            final Place place = PlaceAutocomplete.getPlace(this, data);
            selectedPlace.setText(place.getName().toString().toUpperCase());
        } else {
            Toast.makeText(MainActivity.this, "Could not get location.", Toast.LENGTH_SHORT).show();
        }
    }
}

activity_main.xml

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="vertical"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <TextView
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:id="@+id/selected_place"/>

</LinearLayout>

The PlaceAutocomplete will launch automatically, and you can then select a place from the results, which will only be of the type REGION and will only belong to the specified country. The intent can also be launched at the click of a button.

Section 58.5: Adding more than one google auto complete activity

public static final int PLACE_AUTOCOMPLETE_FROM_PLACE_REQUEST_CODE = 1;
public static final int PLACE_AUTOCOMPLETE_TO_PLACE_REQUEST_CODE = 2;

fromPlaceEdit.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View v) {
        try {
            //Do your stuff for the "from" place
            startActivityForResult(intent, PLACE_AUTOCOMPLETE_FROM_PLACE_REQUEST_CODE);
        } catch (GooglePlayServicesRepairableException e) {
            // TODO: Handle the error.
        } catch (GooglePlayServicesNotAvailableException e) {
            // TODO: Handle the error.
        }
    }
});

toPlaceEdit.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View v) {
        try {
            //Do your stuff for the "to" place
            startActivityForResult(intent, PLACE_AUTOCOMPLETE_TO_PLACE_REQUEST_CODE);
        } catch (GooglePlayServicesRepairableException e) {
            // TODO: Handle the error.
        } catch (GooglePlayServicesNotAvailableException e) {
            // TODO: Handle the error.
        }
    }
});

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    if (requestCode == PLACE_AUTOCOMPLETE_FROM_PLACE_REQUEST_CODE) {
        if (resultCode == RESULT_OK) {
            //Do your ok >from place< stuff here
        } else if (resultCode == PlaceAutocomplete.RESULT_ERROR) {
            //Handle your error >from place<
        } else if (resultCode == RESULT_CANCELED) {
            // The user canceled the operation.
        }
    } else if (requestCode == PLACE_AUTOCOMPLETE_TO_PLACE_REQUEST_CODE) {
        if (resultCode == RESULT_OK) {
            //Do your ok >to place< stuff here
        } else if (resultCode == PlaceAutocomplete.RESULT_ERROR) {
            //Handle your error >to place<
        } else if (resultCode == RESULT_CANCELED) {
            // The user canceled the operation.
        }
    }
}
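The intent variable used in both click listeners above is not defined in the original snippet; presumably it is built the same way as in Section 58.4, for example:

// Assumed construction of the intent used in the click listeners above
Intent intent = new PlaceAutocomplete.IntentBuilder(PlaceAutocomplete.MODE_FULLSCREEN)
        .build(this);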
Chapter 59: Android NDK

Section 59.1: How to log in ndk

First make sure you link against the logging library in your Android.mk file:

LOCAL_LDLIBS := -llog

Then use one of the following __android_log_print() calls:

#include <android/log.h>

#define TAG "MY LOG"

__android_log_print(ANDROID_LOG_VERBOSE, TAG, "The value of 1 + 1 is %d", 1 + 1);
__android_log_print(ANDROID_LOG_WARN,    TAG, "The value of 1 + 1 is %d", 1 + 1);
__android_log_print(ANDROID_LOG_DEBUG,   TAG, "The value of 1 + 1 is %d", 1 + 1);
__android_log_print(ANDROID_LOG_INFO,    TAG, "The value of 1 + 1 is %d", 1 + 1);
__android_log_print(ANDROID_LOG_ERROR,   TAG, "The value of 1 + 1 is %d", 1 + 1);

Or use them in a more convenient way by defining corresponding macros:

#define LOGV(...) __android_log_print(ANDROID_LOG_VERBOSE, TAG, __VA_ARGS__)
#define LOGW(...) __android_log_print(ANDROID_LOG_WARN,    TAG, __VA_ARGS__)
#define LOGD(...) __android_log_print(ANDROID_LOG_DEBUG,   TAG, __VA_ARGS__)
#define LOGI(...) __android_log_print(ANDROID_LOG_INFO,    TAG, __VA_ARGS__)
#define LOGE(...) __android_log_print(ANDROID_LOG_ERROR,   TAG, __VA_ARGS__)

Example:

int x = 42;
LOGD("The value of x is %d", x);

Section 59.2: Building native executables for Android

project/jni/main.c

#include <stdio.h>
#include <unistd.h>

int main(void) {
    printf("Hello world!\n");
    return 0;
}

project/jni/Android.mk

LOCAL_PATH := $(call my-dir)

include $(CLEAR_VARS)
LOCAL_MODULE := hello_world
LOCAL_SRC_FILES := main.c
include $(BUILD_EXECUTABLE)

project/jni/Application.mk

APP_ABI := all
APP_PLATFORM := android-21

If you want to support devices running Android versions lower than 5.0 (API 21), you need to compile your binary with APP_PLATFORM set to an older API, e.g. android-8. This is a consequence of Android 5.0 enforcing Position Independent Executables (PIE), whereas older devices do not necessarily support PIEs. Therefore, you need to use either the PIE or the non-PIE binary, depending on the device version. If you want to use the binary from within your Android application, you need to check the API level and extract the correct binary.

APP_ABI can be changed to specific platforms such as armeabi to build the binary for those architectures only. In the worst case, you will have both a PIE and a non-PIE binary for each architecture (about 14 different binaries using ndk-r10e).

To build the executable:

cd project
ndk-build

You will find the binaries at project/libs/<architecture>/hello_world. You can use them via ADB (push and chmod with executable permission) or from your application (extract and chmod with executable permission).

To determine the architecture of the CPU, retrieve the build property ro.product.cpu.abi for the primary architecture or ro.product.cpu.abilist (on newer devices) for a complete list of supported architectures. You can do this using the android.os.Build class from within your application or using getprop <name> via ADB.
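As a minimal Java sketch of that check (Build.SUPPORTED_ABIS wraps ro.product.cpu.abilist on API 21+; Build.CPU_ABI and Build.CPU_ABI2 are the older single-value properties):

String[] abis;
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.LOLLIPOP) {
    abis = Build.SUPPORTED_ABIS; // ordered by preference
} else {
    abis = new String[] { Build.CPU_ABI, Build.CPU_ABI2 };
}
// e.g. extract the bundled binary matching abis[0] and chmod it executable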
Section 59.3: How to clean the build

If you need to clean the build:

ndk-build clean

Section 59.4: How to use a makefile other than Android.mk

ndk-build NDK_PROJECT_PATH=PROJECT_PATH APP_BUILD_SCRIPT=MyAndroid.mk

Chapter 60: DayNight Theme (AppCompat v23.2 / API 14+)

Section 60.1: Adding the DayNight theme to an app

The DayNight theme gives an app the cool capability of switching color schemes based on the time of day and the device's last known location.

Add the following to your styles.xml:

<style name="AppTheme" parent="Theme.AppCompat.DayNight">
    <!-- Customize your theme here. -->
    <item name="colorPrimary">@color/colorPrimary</item>
    <item name="colorPrimaryDark">@color/colorPrimaryDark</item>
    <item name="colorAccent">@color/colorAccent</item>
</style>

The themes you can extend from to add day/night theme switching capability are the following:

"Theme.AppCompat.DayNight"
"Theme.AppCompat.DayNight.NoActionBar"
"Theme.AppCompat.DayNight.DarkActionBar"

Apart from colorPrimary, colorPrimaryDark and colorAccent, you can also add any other colors that you would like to be switched, e.g. textColorPrimary or textColorSecondary. You can add your app's custom colors to this style as well.

For theme switching to work, you need to define a default colors.xml in the res/values directory and another colors.xml in the res/values-night directory, and define day/night colors appropriately.

To switch the theme, call the AppCompatDelegate.setDefaultNightMode(int) method from your Java code. (This will change the color scheme for the whole app, not just any one activity or fragment.) For example:

AppCompatDelegate.setDefaultNightMode(AppCompatDelegate.MODE_NIGHT_NO);

You can pass any of the following three values according to your choice:

AppCompatDelegate.MODE_NIGHT_NO: this sets the default theme for your app and takes the colors defined in the res/values directory. It is recommended to use light colors for this theme.
AppCompatDelegate.MODE_NIGHT_YES: this sets a night theme for your app and takes the colors defined in the res/values-night directory. It is recommended to use dark colors for this theme.
AppCompatDelegate.MODE_NIGHT_AUTO: this auto-switches the colors of the app based on the time of day and the colors you have defined in the values and values-night directories.

It is also possible to get the current night mode status using the getDefaultNightMode() method. For example:

int modeType = AppCompatDelegate.getDefaultNightMode();

Please note, however, that the theme switch will not persist if you kill the app and reopen it. If you do that, the theme will switch back to AppCompatDelegate.MODE_NIGHT_AUTO, which is the default value. If you want the theme switch to persist, make sure you store the value in shared preferences and load the stored value each time the app is opened after it has been destroyed.
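A minimal sketch of that persistence idea (the preference file and key names are illustrative); calling the restore method early, e.g. in Application.onCreate(), ensures activities are created with the stored mode:

private static final String PREFS = "settings";            // illustrative name
private static final String KEY_NIGHT_MODE = "nightMode";  // illustrative name

void saveNightMode(Context context, int mode) {
    context.getSharedPreferences(PREFS, Context.MODE_PRIVATE)
            .edit().putInt(KEY_NIGHT_MODE, mode).apply();
}

void restoreNightMode(Context context) {
    int mode = context.getSharedPreferences(PREFS, Context.MODE_PRIVATE)
            .getInt(KEY_NIGHT_MODE, AppCompatDelegate.MODE_NIGHT_AUTO);
    AppCompatDelegate.setDefaultNightMode(mode);
}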
Chapter 61: Glide

**** WARNING: This documentation is unmaintained and frequently inaccurate ****

Glide's official documentation is a much better source:
For Glide v4, see http://bumptech.github.io/glide/.
For Glide v3, see https://github.com/bumptech/glide/wiki.

Section 61.1: Loading an image

ImageView

To load an image from a specified URL, Uri, resource id, or any other model into an ImageView:

ImageView imageView = (ImageView) findViewById(R.id.imageView);
String yourUrl = "http://www.yoururl.com/image.png";

Glide.with(context)
    .load(yourUrl)
    .into(imageView);

For Uris, replace yourUrl with your Uri (content://media/external/images/1). For Drawables, replace yourUrl with your resource id (R.drawable.image).

RecyclerView and ListView

In ListView or RecyclerView, you can use exactly the same lines:

@Override
public void onBindViewHolder(RecyclerView.ViewHolder viewHolder, int position) {
    MyViewHolder myViewHolder = (MyViewHolder) viewHolder;
    String currentUrl = myUrls.get(position);

    Glide.with(context)
        .load(currentUrl)
        .into(myViewHolder.imageView);
}

If you don't want to start a load in onBindViewHolder, make sure you clear() any ImageView Glide may be managing before modifying the ImageView:

@Override
public void onBindViewHolder(RecyclerView.ViewHolder viewHolder, int position) {
    MyViewHolder myViewHolder = (MyViewHolder) viewHolder;
    String currentUrl = myUrls.get(position);

    if (TextUtils.isEmpty(currentUrl)) {
        Glide.clear(viewHolder.imageView);
        // Now that the view has been cleared, you can safely set your own resource
        viewHolder.imageView.setImageResource(R.drawable.missing_image);
    } else {
        Glide.with(context)
            .load(currentUrl)
            .into(myViewHolder.imageView);
    }
}

Section 61.2: Add Glide to your project

From the official documentation:

With Gradle:

repositories {
    mavenCentral() // jcenter() works as well because it pulls from Maven Central
}

dependencies {
    compile 'com.github.bumptech.glide:glide:4.0.0'
    compile 'com.android.support:support-v4:25.3.1'
    annotationProcessor 'com.github.bumptech.glide:compiler:4.0.0'
}

With Maven:

<dependency>
    <groupId>com.github.bumptech.glide</groupId>
    <artifactId>glide</artifactId>
    <version>4.0.0</version>
</dependency>
<dependency>
    <groupId>com.google.android</groupId>
    <artifactId>support-v4</artifactId>
    <version>r7</version>
</dependency>
<dependency>
    <groupId>com.github.bumptech.glide</groupId>
    <artifactId>compiler</artifactId>
    <version>4.0.0</version>
    <optional>true</optional>
</dependency>

Depending on your ProGuard (DexGuard) config and usage, you may also need to include the following lines in your proguard.cfg (see Glide's wiki for more info):

-keep public class * implements com.bumptech.glide.module.GlideModule
-keep public class * extends com.bumptech.glide.module.AppGlideModule
-keep public enum com.bumptech.glide.load.resource.bitmap.ImageHeaderParser$** {
    **[] $VALUES;
    public *;
}

# for DexGuard only
-keepresourcexmlelements manifest/application/meta-data@value=GlideModule

Section 61.3: Glide circle transformation (load an image into a circular ImageView)

Create a circular image with Glide:
public class CircleTransform extends BitmapTransformation {

    public CircleTransform(Context context) {
        super(context);
    }

    @Override
    protected Bitmap transform(BitmapPool pool, Bitmap toTransform, int outWidth, int outHeight) {
        return circleCrop(pool, toTransform);
    }

    private static Bitmap circleCrop(BitmapPool pool, Bitmap source) {
        if (source == null) return null;

        int size = Math.min(source.getWidth(), source.getHeight());
        int x = (source.getWidth() - size) / 2;
        int y = (source.getHeight() - size) / 2;

        Bitmap squared = Bitmap.createBitmap(source, x, y, size, size);
        Bitmap result = pool.get(size, size, Bitmap.Config.ARGB_8888);
        if (result == null) {
            result = Bitmap.createBitmap(size, size, Bitmap.Config.ARGB_8888);
        }

        Canvas canvas = new Canvas(result);
        Paint paint = new Paint();
        paint.setShader(new BitmapShader(squared, BitmapShader.TileMode.CLAMP, BitmapShader.TileMode.CLAMP));
        paint.setAntiAlias(true);
        float r = size / 2f;
        canvas.drawCircle(r, r, r, paint);
        return result;
    }

    @Override
    public String getId() {
        return getClass().getName();
    }
}

Usage:

Glide.with(context)
    .load(yourimageurl)
    .transform(new CircleTransform(context))
    .into(userImageView);

Section 61.4: Default transformations

Glide includes two default transformations: fit center and center crop.

Fit center:

Glide.with(context)
    .load(yourUrl)
    .fitCenter()
    .into(yourView);

Fit center performs the same transformation as Android's ScaleType.FIT_CENTER.

Center crop:

Glide.with(context)
    .load(yourUrl)
    .centerCrop()
    .into(yourView);

Center crop performs the same transformation as Android's ScaleType.CENTER_CROP.

For more information, see Glide's wiki.

Section 61.5: Glide rounded corners image with custom Glide target

First create a utility class (or use this method in the class where it is needed):

public class UIUtils {
    public static BitmapImageViewTarget getRoundedImageTarget(@NonNull final Context context,
            @NonNull final ImageView imageView, final float radius) {
        return new BitmapImageViewTarget(imageView) {
            @Override
            protected void setResource(final Bitmap resource) {
                RoundedBitmapDrawable circularBitmapDrawable =
                        RoundedBitmapDrawableFactory.create(context.getResources(), resource);
                circularBitmapDrawable.setCornerRadius(radius);
                imageView.setImageDrawable(circularBitmapDrawable);
            }
        };
    }
}

Loading the image:

Glide.with(context)
    .load(imageUrl)
    .asBitmap()
    .into(UIUtils.getRoundedImageTarget(context, imageView, radius));

Because you use asBitmap(), the animations will be removed though. You can use your own animation in this place using the animate() method.

Example with a fade-in similar to the default Glide animation:

Glide.with(context)
    .load(imageUrl)
    .asBitmap()
    .animate(R.anim.abc_fade_in)
    .into(UIUtils.getRoundedImageTarget(context, imageView, radius));

Please note this animation is a private resource of the support library - it is not recommended to use it, as it can change or even be removed.
Note that you also need the support library to use RoundedBitmapDrawableFactory.

Section 61.6: Placeholder and Error handling

If you want a Drawable to be shown during the load, you can add a placeholder:

Glide.with(context)
    .load(yourUrl)
    .placeholder(R.drawable.placeholder)
    .into(imageView);

If you want a Drawable to be shown if the load fails for any reason:

Glide.with(context)
    .load(yourUrl)
    .error(R.drawable.error)
    .into(imageView);

If you want a Drawable to be shown if you provide a null model (URL, Uri, file path etc.):

Glide.with(context)
    .load(maybeNullUrl)
    .fallback(R.drawable.fallback)
    .into(imageView);

Section 61.7: Preloading images

To preload remote images and ensure that the image is only downloaded once:

Glide.with(context)
    .load(yourUrl)
    .diskCacheStrategy(DiskCacheStrategy.SOURCE)
    .preload();

Then:

Glide.with(context)
    .load(yourUrl)
    .diskCacheStrategy(DiskCacheStrategy.SOURCE) // ALL works here too
    .into(imageView);

To preload local images and make sure a transformed copy is in the disk cache (and maybe the memory cache):

Glide.with(context)
    .load(yourFilePathOrUri)
    .fitCenter() // Or whatever transformation you want
    .preload(200, 200); // Or whatever width and height you want

Then:

Glide.with(context)
    .load(yourFilePathOrUri)
    .fitCenter() // You must use the same transformation as above
    .override(200, 200) // You must use the same width and height as above
    .into(imageView);

Section 61.8: Handling Glide image load failed

Glide
    .with(context)
    .load(currentUrl)
    .into(new BitmapImageViewTarget(profilePicture) {
        @Override
        protected void setResource(Bitmap resource) {
            RoundedBitmapDrawable circularBitmapDrawable =
                    RoundedBitmapDrawableFactory.create(context.getResources(), resource);
            circularBitmapDrawable.setCornerRadius(radius);
            imageView.setImageDrawable(circularBitmapDrawable);
        }

        @Override
        public void onLoadFailed(@NonNull Exception e, Drawable errorDrawable) {
            super.onLoadFailed(e, SET_YOUR_DEFAULT_IMAGE);
            Log.e(TAG, e.getMessage(), e);
        }
    });

In place of SET_YOUR_DEFAULT_IMAGE you can set any default Drawable. This image will be shown if the image load fails.

Section 61.9: Load image in a circular ImageView without custom transformations

Create a custom BitmapImageViewTarget to load the image into:

public class CircularBitmapImageViewTarget extends BitmapImageViewTarget {

    private Context context;
    private ImageView imageView;

    public CircularBitmapImageViewTarget(Context context, ImageView imageView) {
        super(imageView);
        this.context = context;
        this.imageView = imageView;
    }

    @Override
    protected void setResource(Bitmap resource) {
        RoundedBitmapDrawable bitmapDrawable =
                RoundedBitmapDrawableFactory.create(context.getResources(), resource);
        bitmapDrawable.setCircular(true);
        imageView.setImageDrawable(bitmapDrawable);
    }
}

Usage:

Glide
    .with(context)
    .load(yourimageidentifier)
    .asBitmap()
    .into(new CircularBitmapImageViewTarget(context, imageView));

Chapter 62: Dialog

Line                                     Description
show();                                  Shows the dialog
setContentView(R.layout.yourlayout);     Sets the ContentView of the dialog to your custom layout
dismiss()                                Closes the dialog

Section 62.1: Adding Material Design AlertDialog to your app using Appcompat

AlertDialog is a subclass of Dialog that can display one, two or three buttons. If you only want to display a String in this dialog box, use the setMessage() method.
The AlertDialog from the android.app package displays differently on different Android OS versions.

The Android v7 appcompat library provides an AlertDialog implementation which will display with Material Design on all supported Android OS versions, as shown below:

First you need to add the v7 appcompat library to your project. You can do this in the app-level build.gradle file:

dependencies {
    compile 'com.android.support:appcompat-v7:24.2.1'
    //........
}

Be sure to import the correct class:

import android.support.v7.app.AlertDialog;

Then create an AlertDialog like this:

AlertDialog.Builder builder = new AlertDialog.Builder(this);
builder.setTitle("Are you sure?");
builder.setMessage("You'll lose all photos and media!");
builder.setPositiveButton("ERASE", null);
builder.setNegativeButton("CANCEL", null);
builder.show();

Section 62.2: A Basic Alert Dialog

AlertDialog.Builder builder = new AlertDialog.Builder(context);

//Set Title
builder.setTitle("Reset...")
        //Set Message
        .setMessage("Are you sure?")
        //Set the icon of the dialog
        .setIcon(drawable)
        //Set the positive button, in this case, OK, which will dismiss the dialog and do everything in the onClick method
        .setPositiveButton(android.R.string.ok, new DialogInterface.OnClickListener() {
            @Override
            public void onClick(DialogInterface dialogInterface, int i) {
                // Reset
            }
        });

AlertDialog dialog = builder.create();
//Now, at any time you can call:
dialog.show();
//to show the dialog.

This code produces a standard confirmation dialog. (The original page showed a screenshot here; image source: WikiHow.)

Section 62.3: ListView in AlertDialog

We can always use ListView or RecyclerView for selection from a list of items, but if we have a small number of choices and among those choices we want the user to select one, we can use AlertDialog.Builder's setAdapter method.
private void showDialog() {
    AlertDialog.Builder builder = new AlertDialog.Builder(this);
    builder.setTitle("Choose any item");

    final List<String> labels = new ArrayList<>();
    labels.add("Item 1");
    labels.add("Item 2");
    labels.add("Item 3");
    labels.add("Item 4");

    ArrayAdapter<String> dataAdapter = new ArrayAdapter<String>(this,
            android.R.layout.simple_dropdown_item_1line, labels);
    builder.setAdapter(dataAdapter, new DialogInterface.OnClickListener() {
        @Override
        public void onClick(DialogInterface dialog, int which) {
            Toast.makeText(MainActivity.this, "You have selected " + labels.get(which),
                    Toast.LENGTH_LONG).show();
        }
    });

    AlertDialog dialog = builder.create();
    dialog.show();
}

Alternatively, if we don't need any particular ListView, we can use a basic way:

AlertDialog.Builder builder = new AlertDialog.Builder(this);
builder.setTitle("Select an item")
        .setItems(R.array.your_array, new DialogInterface.OnClickListener() {
            public void onClick(DialogInterface dialog, int which) {
                // The 'which' argument contains the index position of the selected item
                Log.v(TAG, "Selected item on position " + which);
            }
        });
builder.create().show();

Section 62.4: Custom Alert Dialog with EditText

void alertDialogDemo() {
    // get alert_dialog.xml view
    LayoutInflater li = LayoutInflater.from(this);
    View promptsView = li.inflate(R.layout.alert_dialog, null);

    // use an Activity context here; the application context would crash when showing the dialog
    AlertDialog.Builder alertDialogBuilder = new AlertDialog.Builder(this);

    // set alert_dialog.xml to alertdialog builder
    alertDialogBuilder.setView(promptsView);

    final EditText userInput = (EditText) promptsView.findViewById(R.id.etUserInput);

    // set dialog message
    alertDialogBuilder
            .setCancelable(false)
            .setPositiveButton("OK", new DialogInterface.OnClickListener() {
                public void onClick(DialogInterface dialog, int id) {
                    // get user input and set it to result edit text
                    Toast.makeText(getApplicationContext(),
                            "Entered: " + userInput.getText().toString(),
                            Toast.LENGTH_LONG).show();
                }
            })
            .setNegativeButton("Cancel", new DialogInterface.OnClickListener() {
                public void onClick(DialogInterface dialog, int id) {
                    dialog.cancel();
                }
            });

    // create alert dialog
    AlertDialog alertDialog = alertDialogBuilder.create();

    // show it
    alertDialog.show();
}

Xml file: res/layout/alert_dialog.xml

<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:orientation="vertical">

    <TextView
        android:id="@+id/textView1"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Type Your Message : "
        android:textAppearance="?android:attr/textAppearanceLarge" />

    <EditText
        android:id="@+id/etUserInput"
        android:layout_width="match_parent"
        android:layout_height="wrap_content" >
        <requestFocus />
    </EditText>

</LinearLayout>

Section 62.5: DatePickerDialog

DatePickerDialog is the simplest way to use DatePicker, because you can show the dialog anywhere in your app. You don't have to implement your own layout with a DatePicker widget.

How to show the dialog:

DatePickerDialog datePickerDialog = new DatePickerDialog(context, listener, year, month, day);
datePickerDialog.show();

You can get the DatePicker widget from the dialog above to get access to more functions, and for example set a minimum date in milliseconds:

DatePicker datePicker = datePickerDialog.getDatePicker();
datePicker.setMinDate(System.currentTimeMillis());

Section 62.6: DatePicker

DatePicker allows the user to pick a date. When we create a new instance of DatePicker, we can set an initial date. If we don't set an initial date, the current date will be set by default.
We can show the DatePicker to the user by using a DatePickerDialog or by creating our own layout with a DatePicker widget.

We can also limit the range of dates which the user can pick:

By setting the minimum date in milliseconds:

//In this case the user can pick a date only from the future
datePicker.setMinDate(System.currentTimeMillis());

By setting the maximum date in milliseconds:

//In this case the user can pick a date only before the following week
datePicker.setMaxDate(System.currentTimeMillis() + TimeUnit.DAYS.toMillis(7));

To receive information about which date was picked by the user, we have to use a listener.

If we are using a DatePickerDialog, we can set an OnDateSetListener in the constructor when we are creating a new instance of the DatePickerDialog:

Sample use of DatePickerDialog

public class SampleActivity extends AppCompatActivity implements DatePickerDialog.OnDateSetListener {

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        ...
    }

    private void showDatePicker() {
        //We need a calendar to set the current date as the initial date in the DatePickerDialog.
        Calendar calendar = new GregorianCalendar(Locale.getDefault());
        int year = calendar.get(Calendar.YEAR);
        int month = calendar.get(Calendar.MONTH);
        int day = calendar.get(Calendar.DAY_OF_MONTH);
        DatePickerDialog datePickerDialog = new DatePickerDialog(this, this, year, month, day);
        datePickerDialog.show();
    }

    @Override
    public void onDateSet(DatePicker datePicker, int year, int month, int day) {
    }
}

Otherwise, if we are creating our own layout with a DatePicker widget, we also have to create our own listener, as shown in the other example.

Section 62.7: Alert Dialog

AlertDialog.Builder alertDialogBuilder = new AlertDialog.Builder(MainActivity.this);

alertDialogBuilder.setTitle("Title Dialog");
alertDialogBuilder
        .setMessage("Message Dialog")
        .setCancelable(true)
        .setPositiveButton("Yes", new DialogInterface.OnClickListener() {
            public void onClick(DialogInterface dialog, int arg1) {
                // Handle Positive Button
            }
        })
        .setNegativeButton("No", new DialogInterface.OnClickListener() {
            public void onClick(DialogInterface dialog, int arg1) {
                // Handle Negative Button
                dialog.cancel();
            }
        });

AlertDialog alertDialog = alertDialogBuilder.create();
alertDialog.show();

Section 62.8: Alert Dialog with Multi-line Title

The setCustomTitle() method of AlertDialog.Builder lets you specify an arbitrary view to be used for the dialog title. One common use for this method is to build an alert dialog that has a long title.

AlertDialog.Builder builder = new AlertDialog.Builder(context, Theme_Material_Light_Dialog);
builder.setCustomTitle(inflate(context, R.layout.my_dialog_title, null))
        .setView(inflate(context, R.layout.my_dialog, null))
        .setPositiveButton("OK", null);

Dialog dialog = builder.create();
dialog.show();

my_dialog_title.xml:

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:padding="16dp">

    <TextView
        style="@android:style/TextAppearance.Small"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:text="Lorem ipsum dolor sit amet, consectetur adipiscing elit. Curabitur tincidunt condimentum tristique. Vestibulum ante ante, pretium porttitor iaculis vitae, congue ut sem. Curabitur ac feugiat ligula. Nulla tincidunt est eu sapien iaculis rhoncus. Mauris eu risus sed justo pharetra semper faucibus vel velit."
        android:textStyle="bold"/>

</LinearLayout>

my_dialog.xml:

<?xml version="1.0" encoding="utf-8"?>
<ScrollView
    xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <LinearLayout
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:orientation="vertical"
        android:padding="16dp"
        android:scrollbars="vertical">

        <TextView
            style="@android:style/TextAppearance.Small"
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:paddingBottom="10dp"
            android:text="Hello world!"/>

        <TextView
            style="@android:style/TextAppearance.Small"
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:paddingBottom="10dp"
            android:text="Hello world again!"/>

        <TextView
            style="@android:style/TextAppearance.Small"
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:paddingBottom="10dp"
            android:text="Hello world again!"/>

        <TextView
            style="@android:style/TextAppearance.Small"
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:paddingBottom="10dp"
            android:text="Hello world again!"/>

    </LinearLayout>
</ScrollView>

Section 62.9: Date Picker within DialogFragment

xml of the Dialog:

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="vertical"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <DatePicker
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:id="@+id/datePicker"
        android:layout_gravity="center_horizontal"
        android:calendarViewShown="false"/>

    <Button
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:text="ACCEPT"
        android:id="@+id/buttonAccept" />

</LinearLayout>

Dialog Class:

public class ChooseDate extends DialogFragment implements View.OnClickListener {

    private DatePicker datePicker;
    private Button acceptButton;

    private boolean isDateSetted = false;
    private int year;
    private int month;
    private int day;

    private DateListener listener;

    public interface DateListener {
        void onDateSelected(int year, int month, int day);
    }

    public ChooseDate() {}

    @Override
    public View onCreateView(LayoutInflater inflater, ViewGroup container,
            Bundle savedInstanceState) {
        View rootView = inflater.inflate(R.layout.dialog_year_picker, container);

        getDialog().setTitle(getResources().getString(R.string.title));

        datePicker = (DatePicker) rootView.findViewById(R.id.datePicker);
        acceptButton = (Button) rootView.findViewById(R.id.buttonAccept);
        acceptButton.setOnClickListener(this);

        if (isDateSetted) {
            datePicker.updateDate(year, month, day);
        }

        return rootView;
    }

    @Override
    public void onClick(View v) {
        switch (v.getId()) {
            case R.id.buttonAccept:
                int year = datePicker.getYear();
                int month = datePicker.getMonth() + 1; // months start at 0
                int day = datePicker.getDayOfMonth();

                listener.onDateSelected(year, month, day);
                break;
        }
        this.dismiss();
    }

    @Override
    public void onAttach(Context context) {
        super.onAttach(context);
        listener = (DateListener) context;
    }

    public void setDate(int year, int month, int day) {
        this.year = year;
        this.month = month;
        this.day = day;
        this.isDateSetted = true;
    }
}

Activity calling the dialog:

public class MainActivity extends AppCompatActivity implements
public class MainActivity extends AppCompatActivity implements ChooseDate.DateListener {

    private int year;
    private int month;
    private int day;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        showDateDialog();
    }

    private void showDateDialog(){
        ChooseDate pickDialog = new ChooseDate();
        // We could set an initial date:
        // pickDialog.setDate(2016, 10, 23);
        pickDialog.show(getFragmentManager(), "");
    }

    @Override
    public void onDateSelected(int year, int month, int day){
        this.day = day;
        this.month = month;
        this.year = year;
    }
}

Section 62.10: Fullscreen Custom Dialog with no background and no title

In styles.xml add your custom style:

<?xml version="1.0" encoding="utf-8"?>
<resources>
    <style name="AppBaseTheme" parent="@android:style/Theme.Light.NoTitleBar.Fullscreen">
    </style>
</resources>

Create your custom layout for the dialog:

fullscreen.xml:

<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent" >
</RelativeLayout>

Then in a Java file you can use it for an Activity, Dialog, etc.:

import android.app.Activity;
import android.app.Dialog;
import android.os.Bundle;

public class FullscreenActivity extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // You can set no content for the activity.
        Dialog mDialog = new Dialog(this, R.style.AppBaseTheme);
        mDialog.setContentView(R.layout.fullscreen);
        mDialog.show();
    }
}

Chapter 63: Enhancing Alert Dialogs

This topic is about enhancing an AlertDialog with additional features.

Section 63.1: Alert dialog containing a clickable link

In order to show an alert dialog containing a link which can be opened by clicking it, you can use the following code:

AlertDialog.Builder builder1 = new AlertDialog.Builder(youractivity.this);
builder1.setMessage(Html.fromHtml("your message,<a href=\"http://www.google.com\">link</a>"));
builder1.setCancelable(false);
builder1.setPositiveButton("ok", new DialogInterface.OnClickListener() {
    @Override
    public void onClick(DialogInterface dialog, int which) {
    }
});

AlertDialog alert1 = builder1.create();
alert1.show();
((TextView) alert1.findViewById(android.R.id.message)).setMovementMethod(LinkMovementMethod.getInstance());
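Note that Html.fromHtml(String) was deprecated in API level 24 in favor of an overload that takes a flags argument. A minimal version-safe variant of the call above could look like this:

Spanned message;
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.N) {
    message = Html.fromHtml("your message,<a href=\"http://www.google.com\">link</a>", Html.FROM_HTML_MODE_LEGACY);
} else {
    message = Html.fromHtml("your message,<a href=\"http://www.google.com\">link</a>");
}
builder1.setMessage(message);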
Chapter 64: Animated AlertDialog Box

An animated alert dialog is displayed with some animation effects. You can get animations for the dialog box such as Fadein, Slideleft, Slidetop, SlideBottom, Slideright, Fall, Newspager, Fliph, Flipv, RotateBottom, RotateLeft, Slit, Shake and Sidefill to make your application more attractive.

Section 64.1: Put the below code for an animated dialog

animated_android_dialog_box.xml

<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical"
    android:padding="16dp">

    <Button
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:background="#1184be"
        android:onClick="animatedDialog1"
        android:text="Animated Fall Dialog"
        android:textColor="#fff" />

    <Button
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_marginBottom="16dp"
        android:layout_marginTop="16dp"
        android:background="#1184be"
        android:onClick="animatedDialog2"
        android:text="Animated Material Flip Dialog"
        android:textColor="#fff" />

    <Button
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:background="#1184be"
        android:onClick="animatedDialog3"
        android:text="Animated Material Shake Dialog"
        android:textColor="#fff" />

    <Button
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_marginBottom="16dp"
        android:layout_marginTop="16dp"
        android:background="#1184be"
        android:onClick="animatedDialog4"
        android:text="Animated Slide Top Dialog"
        android:textColor="#fff" />

</LinearLayout>

AnimatedAndroidDialogExample.java

public class AnimatedAndroidDialogExample extends AppCompatActivity {

    NiftyDialogBuilder materialDesignAnimatedDialog;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.animated_android_dialog_box);

        materialDesignAnimatedDialog = NiftyDialogBuilder.getInstance(this);
    }

    public void animatedDialog1(View view) {
        materialDesignAnimatedDialog
                .withTitle("Animated Fall Dialog Title")
                .withMessage("Add your dialog message here. Animated dialog description place.")
                .withDialogColor("#FFFFFF")
                .withButton1Text("OK")
                .withButton2Text("Cancel")
                .withDuration(700)
                .withEffect(Effectstype.Fall)
                .show();
    }

    public void animatedDialog2(View view) {
        materialDesignAnimatedDialog
                .withTitle("Animated Flip Dialog Title")
                .withMessage("Add your dialog message here. Animated dialog description place.")
                .withDialogColor("#1c90ec")
                .withButton1Text("OK")
                .withButton2Text("Cancel")
                .withDuration(700)
                .withEffect(Effectstype.Fliph)
                .show();
    }

    public void animatedDialog3(View view) {
        materialDesignAnimatedDialog
                .withTitle("Animated Shake Dialog Title")
                .withMessage("Add your dialog message here. Animated dialog description place.")
                .withDialogColor("#1c90ec")
                .withButton1Text("OK")
                .withButton2Text("Cancel")
                .withDuration(700)
                .withEffect(Effectstype.Shake)
                .show();
    }

    public void animatedDialog4(View view) {
        materialDesignAnimatedDialog
                .withTitle("Animated Slide Top Dialog Title")
                .withMessage("Add your dialog message here. Animated dialog description place.")
                .withDialogColor("#1c90ec")
                .withButton1Text("OK")
                .withButton2Text("Cancel")
                .withDuration(700)
                .withEffect(Effectstype.Slidetop)
                .show();
    }
}

Add the lines below to your build.gradle to include the NiftyDialogBuilder (custom view):

build.gradle

dependencies {
    compile 'com.nineoldandroids:library:2.4.0'
    compile 'com.github.sd6352051.niftydialogeffects:niftydialogeffects:1.0.0@aar'
}

Reference link: https://github.com/sd6352051/NiftyDialogEffects

Chapter 65: GreenDAO

GreenDAO is an Object-Relational Mapping library to help developers use SQLite databases for persistent local storage.
Section 65.1: Helper methods for SELECT, INSERT, DELETE, UPDATE queries

This example shows a helper class containing methods that are useful when executing queries for data. Every method here uses Java generics in order to be very flexible.

public <T> List<T> selectElements(AbstractDao<T, ?> dao) {
    if (dao == null) {
        return null;
    }
    QueryBuilder<T> qb = dao.queryBuilder();
    return qb.list();
}

public <T> void insertElements(AbstractDao<T, ?> absDao, List<T> items) {
    if (items == null || items.size() == 0 || absDao == null) {
        return;
    }
    absDao.insertOrReplaceInTx(items);
}

public <T> T insertElement(AbstractDao<T, ?> absDao, T item) {
    if (item == null || absDao == null) {
        return null;
    }
    absDao.insertOrReplaceInTx(item);
    return item;
}

public <T> void updateElements(AbstractDao<T, ?> absDao, List<T> items) {
    if (items == null || items.size() == 0 || absDao == null) {
        return;
    }
    absDao.updateInTx(items);
}

public <T> T selectElementByCondition(AbstractDao<T, ?> absDao,
                                      WhereCondition... conditions) {
    if (absDao == null) {
        return null;
    }
    QueryBuilder<T> qb = absDao.queryBuilder();
    for (WhereCondition condition : conditions) {
        qb = qb.where(condition);
    }
    List<T> items = qb.list();
    return items != null && items.size() > 0 ? items.get(0) : null;
}

public <T> List<T> selectElementsByCondition(AbstractDao<T, ?> absDao,
                                             WhereCondition... conditions) {
    if (absDao == null) {
        return null;
    }
    QueryBuilder<T> qb = absDao.queryBuilder();
    for (WhereCondition condition : conditions) {
        qb = qb.where(condition);
    }
    List<T> items = qb.list();
    return items != null ? items : null;
}

public <T> List<T> selectElementsByConditionAndSort(AbstractDao<T, ?> absDao,
                                                    Property sortProperty,
                                                    String sortStrategy,
                                                    WhereCondition... conditions) {
    if (absDao == null) {
        return null;
    }
    QueryBuilder<T> qb = absDao.queryBuilder();
    for (WhereCondition condition : conditions) {
        qb = qb.where(condition);
    }
    qb.orderCustom(sortProperty, sortStrategy);
    List<T> items = qb.list();
    return items != null ? items : null;
}

public <T> List<T> selectElementsByConditionAndSortWithNullHandling(AbstractDao<T, ?> absDao,
                                                                    Property sortProperty,
                                                                    boolean handleNulls,
                                                                    String sortStrategy,
                                                                    WhereCondition... conditions) {
    if (!handleNulls) {
        return selectElementsByConditionAndSort(absDao, sortProperty, sortStrategy, conditions);
    }
    if (absDao == null) {
        return null;
    }
    QueryBuilder<T> qb = absDao.queryBuilder();
    for (WhereCondition condition : conditions) {
        qb = qb.where(condition);
    }
    qb.orderRaw("(CASE WHEN " + "T." + sortProperty.columnName + " IS NULL then 1 ELSE 0 END)," +
            "T." + sortProperty.columnName + " " + sortStrategy);
    List<T> items = qb.list();
    return items != null ? items : null;
}

public <T, V extends Class> List<T> selectByJoin(AbstractDao<T, ?> absDao,
                                                 V className,
                                                 Property property,
                                                 WhereCondition whereCondition) {
    QueryBuilder<T> qb = absDao.queryBuilder();
    qb.join(className, property).where(whereCondition);
    return qb.list();
}
public <T> void deleteElementsByCondition(AbstractDao<T, ?> absDao,
                                          WhereCondition... conditions) {
    if (absDao == null) {
        return;
    }
    QueryBuilder<T> qb = absDao.queryBuilder();
    for (WhereCondition condition : conditions) {
        qb = qb.where(condition);
    }
    List<T> list = qb.list();
    absDao.deleteInTx(list);
}

public <T> T deleteElement(DaoSession session, AbstractDao<T, ?> absDao, T object) {
    if (absDao == null) {
        return null;
    }
    absDao.delete(object);
    session.clear();
    return object;
}

public <T, V extends Class> void deleteByJoin(AbstractDao<T, ?> absDao,
                                              V className,
                                              Property property,
                                              WhereCondition whereCondition) {
    QueryBuilder<T> qb = absDao.queryBuilder();
    qb.join(className, property).where(whereCondition);
    qb.buildDelete().executeDeleteWithoutDetachingEntities();
}

public <T> void deleteAllFromTable(AbstractDao<T, ?> absDao) {
    if (absDao == null) {
        return;
    }
    absDao.deleteAll();
}

public <T> long countElements(AbstractDao<T, ?> absDao) {
    if (absDao == null) {
        return 0;
    }
    return absDao.count();
}

Section 65.2: Creating an Entity with GreenDAO 3.X that has a Composite Primary Key

When creating a model for a table that has a composite primary key, additional work is required on the model Entity to respect those constraints.

The following example SQL table and Entity demonstrate the structure to store a review left by a customer for an item in an online store. In this example, we want the customer_id and item_id columns to be a composite primary key, allowing only one review to exist between a specific customer and item.

SQL Table

CREATE TABLE review (
    customer_id STRING NOT NULL,
    item_id STRING NOT NULL,
    star_rating INTEGER NOT NULL,
    content STRING,
    PRIMARY KEY (customer_id, item_id)
);

Usually we would use the @Id and @Unique annotations above the respective fields in the entity class; however, for a composite primary key we do the following:

1. Add the @Index annotation inside the class-level @Entity annotation. The value property contains a comma-delimited list of the fields that make up the key. Use the unique property as shown to enforce uniqueness on the key.

2. GreenDAO requires every Entity to have a long or Long object as a primary key. We still need to add this to the Entity class, however we do not need to use it or worry about it affecting our implementation. In the example below it is called localID.

Entity

@Entity(indexes = { @Index(value = "customer_id,item_id", unique = true)})
public class Review {

    @Id(autoincrement = true)
    private Long localID;

    private String customer_id;
    private String item_id;

    @NotNull
    private Integer star_rating;

    private String content;

    public Review() {}
}
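As a brief sketch of how the pieces above fit together (assuming greenDAO has generated a ReviewDao and that a DaoSession is available; the generated property names shown here are assumptions derived from the field names), a single review could be fetched using the helper from Section 65.1:

Review review = selectElementByCondition(daoSession.getReviewDao(),
        ReviewDao.Properties.Customer_id.eq("customer42"),
        ReviewDao.Properties.Item_id.eq("item7"));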
Section 65.3: Getting started with GreenDao v3.X

After adding the GreenDao library dependency and Gradle plugin, we first need to create an entity object.

Entity

An entity is a Plain Old Java Object (POJO) that models some data in the database. GreenDao will use this class to create a table in the SQLite database and automatically generate helper classes we can use to access and store data without having to write SQL statements.

@Entity
public class Users {

    @Id(autoincrement = true)
    private Long id;

    private String firstname;
    private String lastname;

    @Unique
    private String email;

    // Getters and setters for the fields...
}

One-time GreenDao setup

Each time an application is launched, GreenDao needs to be initialized. GreenDao suggests keeping this code in an Application class or somewhere it will only be run once.

DaoMaster.DevOpenHelper helper = new DaoMaster.DevOpenHelper(this, "mydatabase", null);
db = helper.getWritableDatabase();
DaoMaster daoMaster = new DaoMaster(db);
DaoSession daoSession = daoMaster.newSession();

GreenDao helper classes

After the entity object is created, GreenDao automatically creates the helper classes used to interact with the database. These are named similarly to the name of the entity object that was created, followed by Dao, and are retrieved from the daoSession object.

UsersDao usersDao = daoSession.getUsersDao();

Many typical database actions can now be performed using this Dao object with the entity object.

Query

String email = "<EMAIL>";
String firstname = "John";

// Single user query WHERE email matches "<EMAIL>"
Users user = userDao.queryBuilder()
        .where(UsersDao.Properties.Email.eq(email)).build().unique();

// Multiple user query WHERE firstname = "John"
List<Users> user = userDao.queryBuilder()
        .where(UsersDao.Properties.Firstname.eq(firstname)).build().list();

Insert

Users newUser = new User("John","Doe","<EMAIL>");
usersDao.insert(newUser);

Update

// Modify a previously retrieved user object and update
user.setLastname("Dole");
usersDao.update(user);

Delete

// Delete a previously retrieved user object
usersDao.delete(user);

Chapter 66: Tools Attributes

Section 66.1: Designtime Layout Attributes

These attributes are used when the layout is rendered in Android Studio, but they have no impact at runtime.

In general you can use any Android framework attribute, just using the tools: namespace rather than the android: namespace for the layout preview. You can add both the android: namespace attribute (which is used at runtime) and the matching tools: attribute (which overrides the runtime attribute in the layout preview only).

Just define the tools namespace as described in the remarks section.

For example, the text attribute:

<EditText
    tools:text="My Text"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content" />

Or the visibility attribute to unset a view for preview:

<LinearLayout
    android:id="@+id/ll1"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    tools:visibility="gone" />

Or the context attribute to associate the layout with an activity or fragment:

<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    tools:context=".MainActivity" >

Or the showIn attribute to see an included layout's preview in another layout:

<EditText xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:text="@string/text"
    tools:showIn="@layout/activity_main" />

Chapter 67: Formatting Strings

Section 67.1: Format a string resource

You can add wildcards in string resources and populate them at runtime:

1. Edit strings.xml

<string name="my_string">This is %1$s</string>

2. Format the string as needed

String fun = "fun";
context.getString(R.string.my_string, fun);
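The same mechanism works with several positional placeholders of different types. A small sketch (the resource name and values here are made up for illustration):

<string name="inbox_status">Hello %1$s, you have %2$d new messages</string>

String text = context.getString(R.string.inbox_status, "John", 3);
// text -> "Hello John, you have 3 new messages"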
Section 67.2: Formatting data types to String and vice versa

Data types to string formatting

Data types like int, float, double, long, and boolean can be formatted to String using String.valueOf():

String.valueOf(1);      // Output -> "1"
String.valueOf(1.0);    // Output -> "1.0"
String.valueOf(1.2345); // Output -> "1.2345"
String.valueOf(true);   // Output -> "true"

Vice versa, to format a String into another data type:

Integer.parseInt("1");        // Output -> 1
Float.parseFloat("1.2");      // Output -> 1.2
Boolean.parseBoolean("true"); // Output -> true

Section 67.3: Format a timestamp to string

For a full description of patterns, see the SimpleDateFormat reference.

Date now = new Date();
long timestamp = now.getTime();
SimpleDateFormat sdf = new SimpleDateFormat("MM/dd/yyyy", Locale.US);
String dateStr = sdf.format(timestamp);

Chapter 68: SpannableString

Section 68.1: Add styles to a TextView

In the following example, we create an Activity to display a single TextView. The TextView will use a SpannableString as its content, which will illustrate some of the available styles.

Here's what we're going to do with the text:

- Make it larger
- Bold
- Underline
- Italicize
- Strike-through
- Colored
- Highlighted
- Show as superscript
- Show as subscript
- Show as a link
- Make it clickable

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);

    SpannableString styledString
            = new SpannableString("Large\n\n"    // index 0 - 5
            + "Bold\n\n"                         // index 7 - 11
            + "Underlined\n\n"                   // index 13 - 23
            + "Italic\n\n"                       // index 25 - 31
            + "Strikethrough\n\n"                // index 33 - 46
            + "Colored\n\n"                      // index 48 - 55
            + "Highlighted\n\n"                  // index 57 - 68
            + "K Superscript\n\n"                // "Superscript" index 72 - 83
            + "K Subscript\n\n"                  // "Subscript" index 87 - 96
            + "Url\n\n"                          // index 98 - 101
            + "Clickable\n\n");                  // index 103 - 112

    // make the text twice as large
    styledString.setSpan(new RelativeSizeSpan(2f), 0, 5, 0);

    // make text bold
    styledString.setSpan(new StyleSpan(Typeface.BOLD), 7, 11, 0);

    // underline text
    styledString.setSpan(new UnderlineSpan(), 13, 23, 0);

    // make text italic
    styledString.setSpan(new StyleSpan(Typeface.ITALIC), 25, 31, 0);

    // strike through text
    styledString.setSpan(new StrikethroughSpan(), 33, 46, 0);

    // change text color
    styledString.setSpan(new ForegroundColorSpan(Color.GREEN), 48, 55, 0);

    // highlight text
    styledString.setSpan(new BackgroundColorSpan(Color.CYAN), 57, 68, 0);

    // superscript
    styledString.setSpan(new SuperscriptSpan(), 72, 83, 0);
    // make the superscript text smaller
    styledString.setSpan(new RelativeSizeSpan(0.5f), 72, 83, 0);

    // subscript
    styledString.setSpan(new SubscriptSpan(), 87, 96, 0);
    // make the subscript text smaller
    styledString.setSpan(new RelativeSizeSpan(0.5f), 87, 96, 0);

    // url
    styledString.setSpan(new URLSpan("http://www.google.com"), 98, 101, 0);

    // clickable text
    ClickableSpan clickableSpan = new ClickableSpan() {
        @Override
        public void onClick(View widget) {
            // We display a Toast. You could do anything you want here.
            Toast.makeText(SpanExample.this, "Clicked", Toast.LENGTH_SHORT).show();
        }
    };
    styledString.setSpan(clickableSpan, 103, 112, 0);

    // Give the styled string to a TextView
    TextView textView = new TextView(this);

    // this step is mandatory for the url and clickable styles
    textView.setMovementMethod(LinkMovementMethod.getInstance());

    // make it neat
    textView.setGravity(Gravity.CENTER);
    textView.setBackgroundColor(Color.WHITE);

    textView.setText(styledString);
    setContentView(textView);
}

And the result will look like this:

Section 68.2: Multiple strings with multiple colors

Method: setSpanColor

public Spanned setSpanColor(String string, int color) {
    SpannableString ss = new SpannableString(string);
    ss.setSpan(new ForegroundColorSpan(color), 0, string.length(), 0);
    return ss;
}

Usage:

String a = getString(R.string.string1);
String b = getString(R.string.string2);

Spanned color1 = setSpanColor(a, Color.CYAN);
Spanned color2 = setSpanColor(b, Color.RED);
CharSequence mixedColor = TextUtils.concat(color1, " ", color2);
// Now we use `mixedColor`

Chapter 69: Notifications

Section 69.1: Heads Up Notification with Ticker for older devices

Here is how to make a Heads Up Notification for capable devices, and use a ticker for older devices.

// Tapping the Notification will open up MainActivity
Intent i = new Intent(this, MainActivity.class);

// an action to use later
// defined as an app constant:
// public static final String MESSAGE_CONSTANT = "com.example.myapp.notification";
i.setAction(MainActivity.MESSAGE_CONSTANT);
// you can use extras as well
i.putExtra("some_extra", "testValue");

i.setFlags(Intent.FLAG_ACTIVITY_REORDER_TO_FRONT | Intent.FLAG_ACTIVITY_SINGLE_TOP);
PendingIntent notificationIntent = PendingIntent.getActivity(this, 999, i, PendingIntent.FLAG_UPDATE_CURRENT);

NotificationCompat.Builder builder = new NotificationCompat.Builder(this.getApplicationContext());
builder.setContentIntent(notificationIntent);
builder.setAutoCancel(true);
builder.setLargeIcon(BitmapFactory.decodeResource(this.getResources(), android.R.drawable.ic_menu_view));
builder.setSmallIcon(android.R.drawable.ic_dialog_map);
builder.setContentText("Test Message Text");
builder.setTicker("Test Ticker Text");
builder.setContentTitle("Test Message Title");

// set high priority for Heads Up Notification
builder.setPriority(NotificationCompat.PRIORITY_HIGH);
builder.setVisibility(NotificationCompat.VISIBILITY_PUBLIC);

// It won't show "Heads Up" unless it plays a sound
if (Build.VERSION.SDK_INT >= 21) builder.setVibrate(new long[0]);

NotificationManager mNotificationManager = (NotificationManager) getSystemService(Context.NOTIFICATION_SERVICE);
mNotificationManager.notify(999, builder.build());

Here is what it looks like on Android Marshmallow with the Heads Up Notification:

Here is what it looks like on Android KitKat with the ticker:

On all Android versions, the notification is shown in the notification drawer.

Android 6.0 Marshmallow:

Android 4.4.x KitKat:
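Since the Intent above sets an action and an extra and uses FLAG_ACTIVITY_SINGLE_TOP, a MainActivity that is already running at the top of its task would receive the tap in onNewIntent(). A minimal sketch of handling it (what you do with the extra is up to your app):

@Override
protected void onNewIntent(Intent intent) {
    super.onNewIntent(intent);
    if (MESSAGE_CONSTANT.equals(intent.getAction())) {
        String extra = intent.getStringExtra("some_extra");
        // react to the notification tap here
    }
}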
Section 69.2: Creating a simple Notification

This example shows how to create a simple notification that starts an application when the user clicks it.

Specify the notification's content:

NotificationCompat.Builder mBuilder = new NotificationCompat.Builder(this)
        .setSmallIcon(R.drawable.ic_launcher)    // notification icon
        .setContentTitle("Simple notification")  // title
        .setContentText("<NAME>")                // body message
        .setAutoCancel(true);                    // clear notification when clicked

Create the intent to fire on click:

Intent intent = new Intent(this, MainActivity.class);
PendingIntent pi = PendingIntent.getActivity(this, 0, intent, PendingIntent.FLAG_UPDATE_CURRENT);
mBuilder.setContentIntent(pi);

Finally, build the notification and show it:

NotificationManager mNotificationManager = (NotificationManager) getSystemService(Context.NOTIFICATION_SERVICE);
mNotificationManager.notify(0, mBuilder.build());

Section 69.3: Set custom notification - show full content text

If you want a long text to be fully displayed in the notification content, you need to set a custom style.

For example, you have this:

But you wish your text to be fully shown:

All you need to do is add a style to your content as below:

private void generateNotification(Context context) {
    String message = "This is a custom notification with a very very very very very very very very very very long text";
    Bitmap largeIcon = BitmapFactory.decodeResource(getResources(), android.R.drawable.ic_dialog_alert);

    NotificationCompat.Builder builder = new NotificationCompat.Builder(context);
    builder.setContentTitle("Title").setContentText(message)
            .setSmallIcon(android.R.drawable.ic_dialog_alert)
            .setLargeIcon(largeIcon)
            .setAutoCancel(true)
            .setWhen(System.currentTimeMillis())
            .setStyle(new NotificationCompat.BigTextStyle().bigText(message));

    Notification notification = builder.build();
    NotificationManagerCompat notificationManager = NotificationManagerCompat.from(context);
    notificationManager.notify(101, notification);
}

Section 69.4: Dynamically getting the correct pixel size for the large icon

If you're creating an image, decoding an image, or resizing an image to fit the large notification image area, you can get the correct pixel dimensions like so:

Resources resources = context.getResources();
int width = resources.getDimensionPixelSize(android.R.dimen.notification_large_icon_width);
int height = resources.getDimensionPixelSize(android.R.dimen.notification_large_icon_height);

Section 69.5: Ongoing notification with Action button

// Cancel older notification with the same id
NotificationManager notificationMgr = (NotificationManager) context.getSystemService(Context.NOTIFICATION_SERVICE);
notificationMgr.cancel(CALL_NOTIFY_ID); // any constant value

// Create the Pending Intent
Intent notificationIntent = new Intent(context, YourActivityName.class);
PendingIntent contentIntent = PendingIntent.getActivity(context, 0, notificationIntent, PendingIntent.FLAG_UPDATE_CURRENT);

// Notification builder
NotificationCompat.Builder builder = new NotificationCompat.Builder(context);
builder.setContentText("Ongoing Notification..");
builder.setContentTitle("ongoing notification sample");
builder.setSmallIcon(R.drawable.notification_icon);
builder.setUsesChronometer(true);
builder.setDefaults(Notification.DEFAULT_LIGHTS);
builder.setContentIntent(contentIntent);
builder.setOngoing(true);

// Add an action button to the notification
Intent intent = new Intent("action.name");
PendingIntent pIntent = PendingIntent.getBroadcast(context, 1, intent, 0);
builder.addAction(R.drawable.action_button_icon, "Action button name", pIntent);
// Notify using notificationMgr
Notification finalNotification = builder.build();
notificationMgr.notify(CALL_NOTIFY_ID, finalNotification);

Register a broadcast receiver for the same action to handle the action button click event.

Section 69.6: Setting different priorities in notification

NotificationCompat.Builder mBuilder = (NotificationCompat.Builder) new NotificationCompat.Builder(context)
        .setSmallIcon(R.drawable.some_small_icon)
        .setContentTitle("Title")
        .setContentText("This is a test notification with MAX priority")
        .setPriority(Notification.PRIORITY_MAX);

When the notification contains an image and you want the image to auto-expand when the notification is received, use "PRIORITY_MAX"; you can use other priority levels as per your requirements.

Different priority levels:

PRIORITY_MAX -- Use for critical and urgent notifications that alert the user to a condition that is time-critical or needs to be resolved before they can continue with a particular task.

PRIORITY_HIGH -- Use primarily for important communication, such as message or chat events with content that is particularly interesting for the user. High-priority notifications trigger the heads-up notification display.

PRIORITY_DEFAULT -- Use for all notifications that don't fall into any of the other priorities described here.

PRIORITY_LOW -- Use for notifications that you want the user to be informed about, but that are less urgent. Low-priority notifications tend to show up at the bottom of the list, which makes them a good choice for things like public or undirected social updates: the user has asked to be notified about them, but these notifications should never take precedence over urgent or direct communication.

PRIORITY_MIN -- Use for contextual or background information such as weather information or contextual location information. Minimum-priority notifications do not appear in the status bar. The user discovers them on expanding the notification shade.

References: Material Design Guidelines - notifications

Section 69.7: Set custom notification icon using `Picasso` library

PendingIntent pendingIntent = PendingIntent.getActivity(context,
        uniqueIntentId, intent, PendingIntent.FLAG_CANCEL_CURRENT);

final RemoteViews remoteViews = new RemoteViews(context.getPackageName(), R.layout.remote_view_notification);
remoteViews.setImageViewResource(R.id.remoteview_notification_icon, R.mipmap.ic_navigation_favorites);

Uri defaultSoundUri = RingtoneManager.getDefaultUri(RingtoneManager.TYPE_NOTIFICATION);
NotificationCompat.Builder notificationBuilder = new NotificationCompat.Builder(context)
        .setSmallIcon(R.mipmap.ic_navigation_favorites) // just a dummy icon
        .setContent(remoteViews) // here we apply our view
        .setAutoCancel(true)
        .setContentIntent(pendingIntent)
        .setPriority(NotificationCompat.PRIORITY_DEFAULT);

final Notification notification = notificationBuilder.build();
if (android.os.Build.VERSION.SDK_INT >= 16) {
    notification.bigContentView = remoteViews;
}

NotificationManager notificationManager = (NotificationManager) context.getSystemService(Context.NOTIFICATION_SERVICE);
notificationManager.notify(uniqueIntentId, notification);

// don't forget to include Picasso in your build.gradle file.
Picasso.with(context)
        .load(avatar)
        .into(remoteViews, R.id.remoteview_notification_icon, uniqueIntentId, notification);

And then define a layout inside your layouts folder:

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:background="@android:color/white"
    android:orientation="vertical">

    <ImageView
        android:id="@+id/remoteview_notification_icon"
        android:layout_width="60dp"
        android:layout_height="60dp"
        android:layout_marginRight="2dp"
        android:layout_weight="0"
        android:scaleType="centerCrop"/>

</LinearLayout>

Section 69.8: Scheduling notifications

Sometimes it is required to display a notification at a specific time, a task that unfortunately is not trivial on the Android system, as there is no method setTime() or similar for notifications. This example outlines the steps needed to schedule notifications using the AlarmManager:

1. Add a BroadcastReceiver that listens to Intents broadcast by the Android AlarmManager. This is the place where you build your notification based on the extras provided with the Intent:

public class NotificationReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        // Build notification based on Intent
        Notification notification = new NotificationCompat.Builder(context)
                .setSmallIcon(R.drawable.ic_notification_small_icon)
                .setContentTitle(intent.getStringExtra("title"))
                .setContentText(intent.getStringExtra("text"))
                .build();

        // Show notification
        NotificationManager manager = (NotificationManager) context.getSystemService(Context.NOTIFICATION_SERVICE);
        manager.notify(42, notification);
    }
}

2. Register the BroadcastReceiver in your AndroidManifest.xml file (otherwise the receiver won't receive any Intents from the AlarmManager):

<receiver android:name=".NotificationReceiver" android:enabled="true" />

3. Schedule a notification by passing a PendingIntent for your BroadcastReceiver with the needed Intent extras to the system AlarmManager. Your BroadcastReceiver will receive the Intent once the given time has arrived and display the notification. The following method schedules a notification:

public static void scheduleNotification(Context context, long time, String title, String text) {
    Intent intent = new Intent(context, NotificationReceiver.class);
    intent.putExtra("title", title);
    intent.putExtra("text", text);
    PendingIntent pending = PendingIntent.getBroadcast(context, 42, intent, PendingIntent.FLAG_UPDATE_CURRENT);

    // Schedule notification
    AlarmManager manager = (AlarmManager) context.getSystemService(Context.ALARM_SERVICE);
    manager.setExactAndAllowWhileIdle(AlarmManager.RTC_WAKEUP, time, pending);
}

Please note that the 42 above needs to be unique for each scheduled notification, otherwise the PendingIntents will replace each other, causing undesired effects!

4. Cancel a notification by rebuilding the associated PendingIntent and canceling it on the system AlarmManager. The following method cancels a notification:

public static void cancelNotification(Context context, String title, String text) {
    Intent intent = new Intent(context, NotificationReceiver.class);
    intent.putExtra("title", title);
    intent.putExtra("text", text);
    PendingIntent pending = PendingIntent.getBroadcast(context, 42, intent, PendingIntent.FLAG_UPDATE_CURRENT);

    // Cancel notification
    AlarmManager manager = (AlarmManager) context.getSystemService(Context.ALARM_SERVICE);
    manager.cancel(pending);
}

Note that the 42 above needs to match the number from step 3!
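A short usage sketch of scheduleNotification() and cancelNotification() from steps 3 and 4 (the title and text are placeholders):

// schedule a notification one minute from now
scheduleNotification(context, System.currentTimeMillis() + 60 * 1000, "Reminder", "One minute has passed");

// later, cancel it again by rebuilding the same title and text
cancelNotification(context, "Reminder", "One minute has passed");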
Chapter 70: AlarmManager

Section 70.1: How to Cancel an Alarm

If you want to cancel an alarm and you don't have a reference to the original PendingIntent used to set the alarm, you need to recreate a PendingIntent exactly as it was when it was originally created.

Two Intents are considered equal by the AlarmManager if their action, data, type, class, and categories are the same. This does not compare any extra data included in the Intents.

Usually the request code for each alarm is defined as a constant:

public static final int requestCode = 9999;

So, for a simple alarm set up like this:

Intent intent = new Intent(this, AlarmReceiver.class);
intent.setAction("SomeAction");
PendingIntent pendingIntent = PendingIntent.getBroadcast(this, requestCode, intent, PendingIntent.FLAG_UPDATE_CURRENT);
AlarmManager alarmManager = (AlarmManager) getSystemService(Context.ALARM_SERVICE);
alarmManager.setExact(AlarmManager.RTC_WAKEUP, targetTimeInMillis, pendingIntent);

Here is how you would create a new PendingIntent reference that you can use to cancel the alarm with a new AlarmManager reference:

Intent intent = new Intent(this, AlarmReceiver.class);
intent.setAction("SomeAction");
PendingIntent pendingIntent = PendingIntent.getBroadcast(this, requestCode, intent, PendingIntent.FLAG_NO_CREATE);
AlarmManager alarmManager = (AlarmManager) getSystemService(Context.ALARM_SERVICE);
if (pendingIntent != null) {
    alarmManager.cancel(pendingIntent);
}

Section 70.2: Creating exact alarms on all Android versions

With more and more battery optimizations being put into the Android system over time, the methods of the AlarmManager have also significantly changed (to allow for more lenient timing). However, for some applications it is still required to be as exact as possible on all Android versions. The following helper uses the most accurate method available on all platforms to schedule a PendingIntent:

public static void setExactAndAllowWhileIdle(AlarmManager alarmManager, int type, long triggerAtMillis, PendingIntent operation) {
    if (android.os.Build.VERSION.SDK_INT >= android.os.Build.VERSION_CODES.M) {
        alarmManager.setExactAndAllowWhileIdle(type, triggerAtMillis, operation);
    } else if (android.os.Build.VERSION.SDK_INT >= Build.VERSION_CODES.KITKAT) {
        alarmManager.setExact(type, triggerAtMillis, operation);
    } else {
        alarmManager.set(type, triggerAtMillis, operation);
    }
}
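A minimal usage sketch of this helper, assuming a PendingIntent named pendingIntent has already been created:

AlarmManager alarmManager = (AlarmManager) context.getSystemService(Context.ALARM_SERVICE);
// fire the alarm one minute from now, as exactly as the platform allows
setExactAndAllowWhileIdle(alarmManager, AlarmManager.RTC_WAKEUP,
        System.currentTimeMillis() + 60 * 1000, pendingIntent);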
Section 70.3: API23+ Doze mode interferes with AlarmManager

Android 6 (API23) introduced Doze mode, which interferes with the AlarmManager. It uses certain maintenance windows to handle alarms, so even if you used setExactAndAllowWhileIdle() you cannot make sure that your alarm fires at the desired point of time.

You can turn this behavior off for your app using your phone's settings (Settings/General/Battery & power saving/Battery usage/Ignore optimizations or similar).

Inside your app you can check this setting...

String packageName = getPackageName();
PowerManager pm = (PowerManager) getSystemService(Context.POWER_SERVICE);
if (pm.isIgnoringBatteryOptimizations(packageName)) {
    // your app is ignoring Doze battery optimization
}

...and, if needed, show the respective settings dialog:

Intent intent = new Intent();
String packageName = getPackageName();
PowerManager pm = (PowerManager) getSystemService(Context.POWER_SERVICE);
intent.setAction(Settings.ACTION_REQUEST_IGNORE_BATTERY_OPTIMIZATIONS);
intent.setData(Uri.parse("package:" + packageName));
startActivity(intent);

Section 70.4: Run an intent at a later time

1. Create a receiver. This class will receive the intent and handle it how you wish.

public class AlarmReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        // Handle intent
        int reqCode = intent.getExtras().getInt("requestCode");
        ...
    }
}

2. Give an intent to the AlarmManager. This example will trigger the intent to be sent to AlarmReceiver after 1 minute.

final int requestCode = 1337;
AlarmManager am = (AlarmManager) context.getSystemService(Context.ALARM_SERVICE);
Intent intent = new Intent(context, AlarmReceiver.class);
intent.putExtra("requestCode", requestCode); // make the extra read in the receiver available
PendingIntent pendingIntent = PendingIntent.getBroadcast(context, requestCode, intent, PendingIntent.FLAG_UPDATE_CURRENT);
am.set(AlarmManager.RTC_WAKEUP, System.currentTimeMillis() + 60000, pendingIntent);

Chapter 71: Handler

Section 71.1: HandlerThreads and communication between Threads

As Handlers are used to send Messages and Runnables to a Thread's message queue, it is easy to implement event-based communication between multiple Threads. Every Thread that has a Looper is able to receive and process messages. A HandlerThread is a Thread that implements such a Looper; for example, the main Thread (UI Thread) implements the features of a HandlerThread.

Creating a Handler for the current Thread

Handler handler = new Handler();

Creating a Handler for the main Thread (UI Thread)

Handler handler = new Handler(Looper.getMainLooper());

Send a Runnable from another Thread to the main Thread

new Thread(new Runnable() {
    public void run() {
        // this is executed on another Thread

        // create a Handler associated with the main Thread
        Handler handler = new Handler(Looper.getMainLooper());

        // post a Runnable to the main Thread
        handler.post(new Runnable() {
            public void run() {
                // this is executed on the main Thread
            }
        });
    }
}).start();

Creating a Handler for another HandlerThread and sending events to it

// create another Thread
HandlerThread otherThread = new HandlerThread("name");
otherThread.start(); // the HandlerThread must be started before its Looper can be obtained

// create a Handler associated with the other Thread
Handler handler = new Handler(otherThread.getLooper());

// post an event to the other Thread
handler.post(new Runnable() {
    public void run() {
        // this is executed on the other Thread
    }
});

Section 71.2: Use Handler to create a Timer (similar to javax.swing.Timer)

This can be useful if you're writing a game or something that needs to execute a piece of code every few seconds.
import android.os.Handler;

public class Timer {
    private Handler handler;
    private boolean paused;

    private int interval;

    private Runnable task = new Runnable() {
        @Override
        public void run() {
            if (!paused) {
                runnable.run();
                Timer.this.handler.postDelayed(this, interval);
            }
        }
    };

    private Runnable runnable;

    public int getInterval() {
        return interval;
    }

    public void setInterval(int interval) {
        this.interval = interval;
    }

    public void startTimer() {
        paused = false;
        handler.postDelayed(task, interval);
    }

    public void stopTimer() {
        paused = true;
    }

    public Timer(Runnable runnable, int interval, boolean started) {
        handler = new Handler();
        this.runnable = runnable;
        this.interval = interval;
        if (started) {
            startTimer();
        }
    }
}

Example usage:

Timer timer = new Timer(new Runnable() {
    public void run() {
        System.out.println("Hello");
    }
}, 1000, true);

This code will print "Hello" every second.

Section 71.3: Using a Handler to execute code after a delayed amount of time

Executing code after 1.5 seconds:

Handler handler = new Handler();
handler.postDelayed(new Runnable() {
    @Override
    public void run() {
        // The code you want to run after the time is up
    }
}, 1500); // the delay in milliseconds

Executing code repeatedly every 1 second:

final Handler handler = new Handler();
handler.postDelayed(new Runnable() {
    @Override
    public void run() {
        // the code you want to repeat goes here
        handler.postDelayed(this, 1000);
    }
}, 1000); // the delay in milliseconds

Section 71.4: Stop handler from execution

To stop the Handler from execution, remove the callback attached to it using the Runnable running inside it:

Runnable my_runnable = new Runnable() {
    @Override
    public void run() {
        // your code here
    }
};

public Handler handler = new Handler(); // use 'new Handler(Looper.getMainLooper());' if you want this handler to control something in the UI

// to start the handler
public void start() {
    handler.postDelayed(my_runnable, 10000);
}

// to stop the handler
public void stop() {
    handler.removeCallbacks(my_runnable);
}

// to reset the handler
public void restart() {
    handler.removeCallbacks(my_runnable);
    handler.postDelayed(my_runnable, 10000);
}

Chapter 72: BroadcastReceiver

A BroadcastReceiver (receiver) is an Android component which allows you to register for system or application events. All registered receivers for an event are notified by the Android runtime once this event happens; for example, a broadcast announcing that the screen has turned off, the battery is low, or a picture was captured.

Applications can also initiate broadcasts, for example, to let other applications know that some data has been downloaded to the device and is available for them to use.

Section 72.1: Using LocalBroadcastManager

LocalBroadcastManager is used to send Broadcast Intents within an application, without exposing them to unwanted listeners. Using LocalBroadcastManager is more efficient and safer than using context.sendBroadcast() directly, because you don't need to worry about any broadcasts faked by other applications, which may pose a security hazard.
Here is a simple example of sending and receiving local broadcasts:

BroadcastReceiver receiver = new BroadcastReceiver() {
    @Override
    public void onReceive(Context context, Intent intent) {
        if (intent.getAction().equals("Some Action")) {
            // Do something
        }
    }
};

LocalBroadcastManager manager = LocalBroadcastManager.getInstance(mContext);
manager.registerReceiver(receiver, new IntentFilter("Some Action"));

// onReceive() will be called as a result of this call:
manager.sendBroadcast(new Intent("Some Action")); // See also sendBroadcastSync

// Remember to unregister the receiver when you are done with it:
manager.unregisterReceiver(receiver);

Section 72.2: BroadcastReceiver Basics

BroadcastReceivers are used to receive broadcast Intents that are sent by the Android OS, other apps, or within the same app.

Each Intent is created with an Intent Filter, which requires a String action. Additional information can be configured in the Intent.

Likewise, BroadcastReceivers register to receive Intents with a particular Intent Filter. They can be registered programmatically:

mContext.registerReceiver(new BroadcastReceiver() {
    @Override
    public void onReceive(Context context, Intent intent) {
        // Your implementation goes here.
    }
}, new IntentFilter("Some Action"));

or in the AndroidManifest.xml file:

<receiver android:name=".MyBroadcastReceiver">
    <intent-filter>
        <action android:name="Some Action"/>
    </intent-filter>
</receiver>

To receive the Intent, set the action to something documented by the Android OS, by another app or API, or within your own application, using sendBroadcast:

mContext.sendBroadcast(new Intent("Some Action"));

Additionally, the Intent can contain information, such as Strings, primitives, and Parcelables, that can be viewed in onReceive.

Section 72.3: Introduction to Broadcast receiver

A Broadcast receiver is an Android component which allows you to register for system or application events. A receiver can be registered via the AndroidManifest.xml file or dynamically via the Context.registerReceiver() method.

public class MyReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        // Your implementation goes here.
    }
}

Here I have taken the example of ACTION_BOOT_COMPLETED, which is fired by the system once Android has completed the boot process. You can register the receiver in the manifest file like this:

<application
    android:icon="@drawable/ic_launcher"
    android:label="@string/app_name"
    android:theme="@style/AppTheme" >
    <receiver android:name="MyReceiver">
        <intent-filter>
            <action android:name="android.intent.action.BOOT_COMPLETED">
            </action>
        </intent-filter>
    </receiver>
</application>

Now, when the device gets booted, the onReceive() method will be called and you can do your work there (e.g. start a service, start an alarm).

Section 72.4: Using ordered broadcasts

Ordered broadcasts are used when you need to specify a priority for broadcast listeners.
In this example, firstReceiver will always receive the broadcast before secondReceiver:

final int highPriority = 2;
final int lowPriority = 1;
final String action = "action";

// intent filter for the first receiver with high priority
final IntentFilter firstFilter = new IntentFilter(action);
firstFilter.setPriority(highPriority);
final BroadcastReceiver firstReceiver = new MyReceiver();

// intent filter for the second receiver with low priority
final IntentFilter secondFilter = new IntentFilter(action);
secondFilter.setPriority(lowPriority);
final BroadcastReceiver secondReceiver = new MyReceiver();

// register our receivers
context.registerReceiver(firstReceiver, firstFilter);
context.registerReceiver(secondReceiver, secondFilter);

// send ordered broadcast
context.sendOrderedBroadcast(new Intent(action), null);

Furthermore, a broadcast receiver can abort the ordered broadcast:

@Override
public void onReceive(final Context context, final Intent intent) {
    abortBroadcast();
}

In this case all receivers with lower priority will not receive the broadcast message.

Section 72.5: Sticky Broadcast

If we use the method sendStickyBroadcast(intent), the corresponding intent is sticky, meaning the intent you are sending stays around after the broadcast is complete. A StickyBroadcast, as the name suggests, is a mechanism to read the data from a broadcast after the broadcast is complete. This can be used in a scenario where you may want to check, say in an Activity's onCreate(), the value of a key in the intent before that Activity was launched.

Intent intent = new Intent("com.org.action");
intent.putExtra("anIntegerKey", 0);
sendStickyBroadcast(intent);

Section 72.6: Enabling and disabling a Broadcast Receiver programmatically

To enable or disable a BroadcastReceiver, we need to get a reference to the PackageManager, and we need a ComponentName object containing the class of the receiver we want to enable/disable:

ComponentName componentName = new ComponentName(context, MyBroadcastReceiver.class);
PackageManager packageManager = context.getPackageManager();

Now we can call the following method to enable the BroadcastReceiver:

packageManager.setComponentEnabledSetting(
        componentName,
        PackageManager.COMPONENT_ENABLED_STATE_ENABLED,
        PackageManager.DONT_KILL_APP);

Or we can instead use COMPONENT_ENABLED_STATE_DISABLED to disable the receiver:

packageManager.setComponentEnabledSetting(
        componentName,
        PackageManager.COMPONENT_ENABLED_STATE_DISABLED,
        PackageManager.DONT_KILL_APP);

Section 72.7: Example of a LocalBroadcastManager

A BroadcastReceiver is basically a mechanism to relay Intents through the OS to perform specific actions. A classic definition being: "A Broadcast receiver is an Android component which allows you to register for system or application events."

LocalBroadcastManager is a way to send or receive broadcasts within an application process. This mechanism has a lot of advantages:

1. Since the data remains inside the application process, the data cannot be leaked.
2. LocalBroadcasts are resolved faster, since the resolution of a normal broadcast happens at runtime throughout the OS.

A simple example of a LocalBroadcastManager is:

SenderActivity

Intent intent = new Intent("anEvent");
intent.putExtra("key", "This is an event");
LocalBroadcastManager.getInstance(this).sendBroadcast(intent);

ReceiverActivity
1. Register a receiver

LocalBroadcastManager.getInstance(this).registerReceiver(aLBReceiver,
        new IntentFilter("anEvent"));

2. A concrete object for performing the action when the receiver is called

private BroadcastReceiver aLBReceiver = new BroadcastReceiver() {
    @Override
    public void onReceive(Context context, Intent intent) {
        // perform action here.
    }
};

3. Unregister when the view is no longer visible

@Override
protected void onPause() {
    // Unregister since the activity is about to be paused.
    LocalBroadcastManager.getInstance(this).unregisterReceiver(aLBReceiver);
    super.onPause();
}

Section 72.8: Android stopped state

Starting with Android 3.1, all applications, upon installation, are placed in a stopped state. While in the stopped state, the application will not run for any reason, except by a manual launch of an activity, or an explicit intent that addresses an activity, service or broadcast.

When writing a system app that installs APKs directly, please take into account that the newly installed app won't receive any broadcasts until moved into a non-stopped state.

An easy way to activate an app is to send an explicit broadcast to it. As most apps implement INSTALL_REFERRER, we can use it as a hooking point: scan the manifest of the installed app, and send an explicit broadcast to each receiver:

Intent intent = new Intent();
intent.addFlags(Intent.FLAG_INCLUDE_STOPPED_PACKAGES);
intent.setComponent(new ComponentName(packageName, fullClassName));
sendBroadcast(intent);

Section 72.9: Communicate two activities through custom Broadcast receiver

You can communicate two activities so that Activity A can be notified of an event happening in Activity B.

Activity A

final String eventName = "your.package.goes.here.EVENT";

@Override
protected void onCreate(Bundle savedInstanceState) {
    registerEventReceiver();
    super.onCreate(savedInstanceState);
}

@Override
protected void onDestroy() {
    unregisterReceiver(eventReceiver);
    super.onDestroy();
}

private void registerEventReceiver() {
    IntentFilter eventFilter = new IntentFilter();
    eventFilter.addAction(eventName);
    registerReceiver(eventReceiver, eventFilter);
}

private BroadcastReceiver eventReceiver = new BroadcastReceiver() {
    @Override
    public void onReceive(Context context, Intent intent) {
        // This code will be executed when the broadcast in Activity B is launched
    }
};

Activity B

final String eventName = "your.package.goes.here.EVENT";

private void launchEvent() {
    Intent eventIntent = new Intent(eventName);
    this.sendBroadcast(eventIntent);
}

Of course you can add more information to the broadcast by adding extras to the Intent that is passed between the activities. This is not done here to keep the example as simple as possible.

Section 72.10: BroadcastReceiver to handle BOOT_COMPLETED events

The example below shows how to create a BroadcastReceiver which is able to receive BOOT_COMPLETED events. This way, you are able to start a Service or an Activity as soon as the device has been powered up.

Also, you can use BOOT_COMPLETED events to restore your alarms, since they are destroyed when the device is powered off.

NOTE: The user needs to have started the application at least once before you can receive the BOOT_COMPLETED action.

AndroidManifest.xml

<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.test.example" >
    ...
    <uses-permission android:name="android.permission.RECEIVE_BOOT_COMPLETED" />
    ...
    <application>
        ...
        <receiver android:name="com.test.example.MyCustomBroadcastReceiver">
            <intent-filter>
                <!-- REGISTER TO RECEIVE BOOT_COMPLETED EVENTS -->
                <action android:name="android.intent.action.BOOT_COMPLETED" />
            </intent-filter>
        </receiver>
    </application>
</manifest>

MyCustomBroadcastReceiver.java

public class MyCustomBroadcastReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        String action = intent.getAction();
        if (action != null) {
            if (action.equals(Intent.ACTION_BOOT_COMPLETED)) {
                // TO-DO: Code to handle the BOOT_COMPLETED event
                // TO-DO: e.g. start a service, display a notification, start an activity
            }
        }
    }
}

Section 72.11: Bluetooth Broadcast receiver

Add the permission to your manifest file:

<uses-permission android:name="android.permission.BLUETOOTH" />

In your Fragment (or Activity), add the receiver method:

private BroadcastReceiver mBluetoothStatusChangedReceiver = new BroadcastReceiver() {
    @Override
    public void onReceive(Context context, Intent intent) {
        final Bundle extras = intent.getExtras();
        // Constants.BUNDLE_BLUETOOTH_STATE is an app-defined constant
        final int bluetoothState = extras.getInt(Constants.BUNDLE_BLUETOOTH_STATE);
        switch (bluetoothState) {
            case BluetoothAdapter.STATE_OFF:
                // Bluetooth OFF
                break;
            case BluetoothAdapter.STATE_TURNING_OFF:
                // Turning OFF
                break;
            case BluetoothAdapter.STATE_ON:
                // Bluetooth ON
                break;
            case BluetoothAdapter.STATE_TURNING_ON:
                // Turning ON
                break;
        }
    }
};

Register broadcast

Call this method in onResume():

private void registerBroadcastManager(){
    final LocalBroadcastManager manager = LocalBroadcastManager.getInstance(getActivity());
    manager.registerReceiver(mBluetoothStatusChangedReceiver,
            new IntentFilter(Constants.BROADCAST_BLUETOOTH_STATE));
}

Unregister broadcast

Call this method in onPause():

private void unregisterBroadcastManager(){
    final LocalBroadcastManager manager = LocalBroadcastManager.getInstance(getActivity());
    manager.unregisterReceiver(mBluetoothStatusChangedReceiver);
}
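Following the registration instructions above, a minimal sketch of the corresponding lifecycle overrides in the Fragment could look like this:

@Override
public void onResume() {
    super.onResume();
    registerBroadcastManager();
}

@Override
public void onPause() {
    unregisterBroadcastManager();
    super.onPause();
}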
And when the activity is recreated extract your old values from the Bundle object instead of recreating them from scratch GoalKicker.com Android Notes for Professionals 502 Chapter 74: HttpURLConnection Section 74.1: Creating an HttpURLConnection In order to create a new Android HTTP Client HttpURLConnection, call openConnection() on a URL instance. Since openConnection() returns a URLConnection, you need to explicitly cast the returned value. URL url = new URL("http://example.com"); HttpURLConnection connection = (HttpURLConnection) url.openConnection(); // do something with the connection If you are creating a new URL, you also have to handle the exceptions associated with URL parsing. try { URL url = new URL("http://example.com"); HttpURLConnection connection = (HttpURLConnection) url.openConnection(); // do something with the connection } catch (MalformedURLException e) { e.printStackTrace(); } Once the response body has been read and the connection is no longer required, the connection should be closed by calling disconnect(). Here is an example: URL url = new URL("http://example.com"); HttpURLConnection connection = (HttpURLConnection) url.openConnection(); try { // do something with the connection } finally { connection.disconnect(); } Section 74.2: Sending an HTTP GET request URL url = new URL("http://example.com"); HttpURLConnection connection = (HttpURLConnection) url.openConnection(); try { BufferedReader br = new BufferedReader(new InputStreamReader(connection.getInputStream())); // read the input stream // in this case, I simply read the first line of the stream String line = br.readLine(); Log.d("HTTP-GET", line); } finally { connection.disconnect(); } Please note that exceptions are not handled in the example above. A full example, including (a trivial) exception handling, would be: URL url; HttpURLConnection connection = null; GoalKicker.com Android Notes for Professionals 503 try { url = new URL("http://example.com"); connection = (HttpURLConnection) url.openConnection(); BufferedReader br = new BufferedReader(new InputStreamReader(connection.getInputStream())); // read the input stream // in this case, I simply read the first line of the stream String line = br.readLine(); Log.d("HTTP-GET", line); } catch (IOException e) { e.printStackTrace(); } finally { if (connection != null) { connection.disconnect(); } } Section 74.3: Reading the body of an HTTP GET request URL url = new URL("http://example.com"); HttpURLConnection connection = (HttpURLConnection) url.openConnection(); try { BufferedReader br = new BufferedReader(new InputStreamReader(connection.getInputStream())); // use a string builder to bufferize the response body // read from the input strea. StringBuilder sb = new StringBuilder(); String line; while ((line = br.readLine()) != null) { sb.append(line).append('\n'); } // use the string builder directly, // or convert it into a String String body = sb.toString(); Log.d("HTTP-GET", body); } finally { connection.disconnect(); } Please note that exceptions are not handled in the example above. 
Section 74.4: Sending an HTTP POST request with parameters

Use a HashMap to store the parameters that should be sent to the server through POST parameters:

HashMap<String, String> params;

Once the params HashMap is populated, create the StringBuilder that will be used to send them to the server:

StringBuilder sbParams = new StringBuilder();
int i = 0;
for (String key : params.keySet()) {
    try {
        if (i != 0) {
            sbParams.append("&");
        }
        sbParams.append(key).append("=")
                .append(URLEncoder.encode(params.get(key), "UTF-8"));
    } catch (UnsupportedEncodingException e) {
        e.printStackTrace();
    }
    i++;
}

Then, create the HttpURLConnection, open the connection, and send the POST parameters:

// declared outside the try block so that the next snippet can still use it
HttpURLConnection conn = null;
try {
    String url = "http://www.example.com/test.php";
    URL urlObj = new URL(url);
    conn = (HttpURLConnection) urlObj.openConnection();
    conn.setDoOutput(true);
    conn.setRequestMethod("POST");
    conn.setRequestProperty("Accept-Charset", "UTF-8");
    conn.setReadTimeout(10000);
    conn.setConnectTimeout(15000);
    conn.connect();

    String paramsString = sbParams.toString();

    DataOutputStream wr = new DataOutputStream(conn.getOutputStream());
    wr.writeBytes(paramsString);
    wr.flush();
    wr.close();
} catch (IOException e) {
    e.printStackTrace();
}

Then receive the result that the server sends back:

try {
    InputStream in = new BufferedInputStream(conn.getInputStream());
    BufferedReader reader = new BufferedReader(new InputStreamReader(in));
    StringBuilder result = new StringBuilder();
    String line;
    while ((line = reader.readLine()) != null) {
        result.append(line);
    }
    Log.d("test", "result from server: " + result.toString());
} catch (IOException e) {
    e.printStackTrace();
} finally {
    if (conn != null) {
        conn.disconnect();
    }
}

Section 74.5: A multi-purpose HttpURLConnection class to handle all types of HTTP requests

The following class can be used as a single class that can handle GET, POST, PUT, PATCH, and other requests:

class APIResponseObject {
    int responseCode;
    String response;

    APIResponseObject(int responseCode, String response) {
        this.responseCode = responseCode;
        this.response = response;
    }
}

public class APIAccessTask extends AsyncTask<String, Void, APIResponseObject> {
    URL requestUrl;
    Context context;
    HttpURLConnection urlConnection;
    List<Pair<String, String>> postData, headerData;
    String method;
    int responseCode = HttpURLConnection.HTTP_OK;

    interface OnCompleteListener {
        void onComplete(APIResponseObject result);
    }

    public OnCompleteListener delegate = null;

    APIAccessTask(Context context, String requestUrl, String method, OnCompleteListener delegate) {
        this.context = context;
        this.delegate = delegate;
        this.method = method;
        try {
            this.requestUrl = new URL(requestUrl);
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }

    APIAccessTask(Context context, String requestUrl, String method,
                  List<Pair<String, String>> postData, OnCompleteListener delegate) {
        this(context, requestUrl, method, delegate);
        this.postData = postData;
    }

    APIAccessTask(Context context, String requestUrl, String method,
                  List<Pair<String, String>> postData, List<Pair<String, String>> headerData,
                  OnCompleteListener delegate) {
        this(context, requestUrl, method, postData, delegate);
        this.headerData = headerData;
    }

    @Override
    protected void onPreExecute() {
        super.onPreExecute();
    }

    @Override
    protected APIResponseObject doInBackground(String... params) {
        Log.d("debug", "url = " + requestUrl);
        try {
            urlConnection = (HttpURLConnection) requestUrl.openConnection();
            if (headerData != null) {
                for (Pair pair : headerData) {
                    urlConnection.setRequestProperty(pair.first.toString(), pair.second.toString());
                }
            }
            urlConnection.setDoInput(true);
            urlConnection.setChunkedStreamingMode(0);
            urlConnection.setRequestMethod(method);
            urlConnection.connect();

            StringBuilder sb = new StringBuilder();

            if (!(method.equals("GET"))) {
                OutputStream out = new BufferedOutputStream(urlConnection.getOutputStream());
                BufferedWriter writer = new BufferedWriter(new OutputStreamWriter(out, "UTF-8"));
                writer.write(getPostDataString(postData));
                writer.flush();
                writer.close();
                out.close();
            }

            responseCode = urlConnection.getResponseCode();

            if (responseCode == HttpURLConnection.HTTP_OK) {
                InputStream in = new BufferedInputStream(urlConnection.getInputStream());
                BufferedReader reader = new BufferedReader(new InputStreamReader(in, "UTF-8"));
                String line;
                while ((line = reader.readLine()) != null) {
                    sb.append(line);
                }
            }
            return new APIResponseObject(responseCode, sb.toString());
        } catch (Exception ex) {
            ex.printStackTrace();
        }
        return null;
    }

    @Override
    protected void onPostExecute(APIResponseObject result) {
        delegate.onComplete(result);
        super.onPostExecute(result);
    }

    private String getPostDataString(List<Pair<String, String>> params) throws UnsupportedEncodingException {
        StringBuilder result = new StringBuilder();
        boolean first = true;
        for (Pair<String, String> pair : params) {
            if (first)
                first = false;
            else
                result.append("&");
            result.append(URLEncoder.encode(pair.first, "UTF-8"));
            result.append("=");
            result.append(URLEncoder.encode(pair.second, "UTF-8"));
        }
        return result.toString();
    }
}

Usage

Use any of the given constructors of the class, depending on whether you need to send POST data or any extra headers. The onComplete() method will be called when the data fetching is complete. The data is returned as an object of the APIResponseObject class, which carries the HTTP status code of the request and a string containing the response. You can then parse this response in your own class, e.g. as XML or JSON.
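For instance, if the endpoint returned JSON, the onComplete() callback might look like the following minimal sketch, using the org.json classes bundled with Android (the "status" field name is a made-up assumption, not part of the class above):

@Override
public void onComplete(APIResponseObject result) {
    if (result != null && result.responseCode == HttpURLConnection.HTTP_OK) {
        try {
            // assumes a response body such as {"status":"ok"}
            JSONObject json = new JSONObject(result.response);
            String status = json.optString("status");
            Log.d("debug", "status = " + status);
        } catch (JSONException e) {
            e.printStackTrace();
        }
    }
}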
Call execute() on an instance of the class to run the request, for example:

class MainClass {

    void sendRequest(Context context) {    // e.g. pass MainActivity.this
        String url = "https://example.com/api/v1/ex";
        String method = "POST";

        List<Pair<String, String>> postData = new ArrayList<>();
        postData.add(new Pair<>("email", "whatever"));
        postData.add(new Pair<>("password", "whatever"));

        new APIAccessTask(context, url, method, postData,
                new APIAccessTask.OnCompleteListener() {
                    @Override
                    public void onComplete(APIResponseObject result) {
                        if (result.responseCode == HttpURLConnection.HTTP_OK) {
                            String str = result.response;
                            // Do your XML/JSON parsing here
                        }
                    }
                }).execute();
    }
}

Section 74.6: Use HttpURLConnection for multipart/form-data

Create a custom class for making multipart/form-data HttpURLConnection requests:

MultipartUtility.java

public class MultipartUtility {
    private final String boundary;
    private static final String LINE_FEED = "\r\n";
    private HttpURLConnection httpConn;
    private String charset;
    private OutputStream outputStream;
    private PrintWriter writer;

    /**
     * This constructor initializes a new HTTP POST request with the content type
     * set to multipart/form-data
     *
     * @param requestURL
     * @param charset
     * @throws IOException
     */
    public MultipartUtility(String requestURL, String charset) throws IOException {
        this.charset = charset;

        // creates a unique boundary based on a time stamp
        boundary = "===" + System.currentTimeMillis() + "===";
        URL url = new URL(requestURL);
        httpConn = (HttpURLConnection) url.openConnection();
        httpConn.setUseCaches(false);
        httpConn.setDoOutput(true);    // indicates POST method
        httpConn.setDoInput(true);
        httpConn.setRequestProperty("Content-Type", "multipart/form-data; boundary=" + boundary);
        outputStream = httpConn.getOutputStream();
        writer = new PrintWriter(new OutputStreamWriter(outputStream, charset), true);
    }

    /**
     * Adds a form field to the request
     *
     * @param name field name
     * @param value field value
     */
    public void addFormField(String name, String value) {
        writer.append("--" + boundary).append(LINE_FEED);
        writer.append("Content-Disposition: form-data; name=\"" + name + "\"")
                .append(LINE_FEED);
        writer.append("Content-Type: text/plain; charset=" + charset).append(LINE_FEED);
        writer.append(LINE_FEED);
        writer.append(value).append(LINE_FEED);
        writer.flush();
    }

    /**
     * Adds an upload-file section to the request
     *
     * @param fieldName name attribute in <input type="file" name="..." />
     * @param uploadFile a File to be uploaded
     * @throws IOException
     */
    public void addFilePart(String fieldName, File uploadFile) throws IOException {
        String fileName = uploadFile.getName();
        writer.append("--" + boundary).append(LINE_FEED);
        writer.append("Content-Disposition: form-data; name=\"" + fieldName
                + "\"; filename=\"" + fileName + "\"")
                .append(LINE_FEED);
        writer.append("Content-Type: "
                + URLConnection.guessContentTypeFromName(fileName))
                .append(LINE_FEED);
        writer.append("Content-Transfer-Encoding: binary").append(LINE_FEED);
        writer.append(LINE_FEED);
        writer.flush();

        FileInputStream inputStream = new FileInputStream(uploadFile);
        byte[] buffer = new byte[4096];
        int bytesRead = -1;
        while ((bytesRead = inputStream.read(buffer)) != -1) {
            outputStream.write(buffer, 0, bytesRead);
        }
        outputStream.flush();
        inputStream.close();
        writer.append(LINE_FEED);
        writer.flush();
    }

    /**
     * Adds a header field to the request.
     *
     * @param name - name of the header field
     * @param value - value of the header field
     */
    public void addHeaderField(String name, String value) {
        writer.append(name + ": " + value).append(LINE_FEED);
        writer.flush();
    }

    /**
     * Completes the request and receives the response from the server.
     *
     * @return a list of Strings as response in case the server returned
     * status OK, otherwise an exception is thrown.
     * @throws IOException
     */
    public List<String> finish() throws IOException {
        List<String> response = new ArrayList<String>();
        writer.append(LINE_FEED).flush();
        writer.append("--" + boundary + "--").append(LINE_FEED);
        writer.close();

        // checks the server's status code first
        int status = httpConn.getResponseCode();
        if (status == HttpURLConnection.HTTP_OK) {
            BufferedReader reader = new BufferedReader(new InputStreamReader(
                    httpConn.getInputStream()));
            String line = null;
            while ((line = reader.readLine()) != null) {
                response.add(line);
            }
            reader.close();
            httpConn.disconnect();
        } else {
            throw new IOException("Server returned non-OK status: " + status);
        }
        return response;
    }
}

Use it (async way):

MultipartUtility multipart = new MultipartUtility(requestURL, charset);

// If you are not adding form data, ignore this
/* This is to add parameter values */
for (int i = 0; i < myFormDataArray.size(); i++) {
    multipart.addFormField(myFormDataArray.get(i).getParamName(),
            myFormDataArray.get(i).getParamValue());
}

// add your file here
/* This is to add file content */
for (int i = 0; i < myFileArray.size(); i++) {
    multipart.addFilePart(myFileArray.get(i).getParamName(),
            new File(myFileArray.get(i).getFileName()));
}

List<String> response = multipart.finish();
Debug.e(TAG, "SERVER REPLIED:");
for (String line : response) {
    Debug.e(TAG, "Upload Files Response:::" + line);
    // get your server response here
    responseString = line;
}

Section 74.7: Upload (POST) file using HttpURLConnection

Quite often it's necessary to send/upload a file to a remote server, for example an image, video, audio or a backup of the application database to a remote private server. Assuming the server is expecting a POST request with the content, here's a simple example of how to complete this task in Android.

File uploads are sent using multipart/form-data POST requests. It's very easy to implement:

URL url = new URL(postTarget);
HttpURLConnection connection = (HttpURLConnection) url.openConnection();

String auth = "Bearer " + oauthToken;
connection.setRequestProperty("Authorization", auth);

String boundary = UUID.randomUUID().toString();
connection.setRequestMethod("POST");
connection.setDoOutput(true);
connection.setRequestProperty("Content-Type", "multipart/form-data;boundary=" + boundary);

DataOutputStream request = new DataOutputStream(connection.getOutputStream());

request.writeBytes("--" + boundary + "\r\n");
request.writeBytes("Content-Disposition: form-data; name=\"description\"\r\n\r\n");
request.writeBytes(fileDescription + "\r\n");

request.writeBytes("--" + boundary + "\r\n");
request.writeBytes("Content-Disposition: form-data; name=\"file\"; filename=\""
        + file.fileName + "\"\r\n\r\n");
request.write(FileUtils.readFileToByteArray(file));
request.writeBytes("\r\n");

request.writeBytes("--" + boundary + "--\r\n");
request.flush();

int respCode = connection.getResponseCode();

switch (respCode) {
    case 200:
        // all went ok - read the response
        ...
        break;
    case 301:
    case 302:
    case 307:
        // handle redirect - for example, re-post to the new location
        ...
        break;
    ...
    default:
        // do something sensible
}

Of course, exceptions will need to be caught or declared as being thrown. A couple of points to note about this code:

1. postTarget is the destination URL of the POST; oauthToken is the authentication token; fileDescription is the description of the file, which is sent as the value of the field description; file is the file to be sent - it's of type java.io.File - if you have the file path, you can use new File(filePath) instead.
2. It sets the Authorization header for OAuth authentication.
3. It uses the Apache Commons FileUtils to read the file into a byte array - if you already have the content of the file in a byte array or in some other way in memory, then there's no need to read it.

Chapter 75: Callback URL

Section 75.1: Callback URL example with Instagram OAuth

One of the use cases of callback URLs is OAuth. Let us do this with an Instagram login: if the user enters their credentials and clicks the Login button, Instagram will validate the credentials and return an access_token. We need that access_token in our app.

For our app to be able to listen to such links, we need to add a callback URL to our Activity. We can do this by adding an <intent-filter/> to our Activity, which will react to that callback URL. Assume that our callback URL is appSchema://appName.com. Then you have to add the following lines to your desired Activity in the AndroidManifest.xml file:

<intent-filter>
    <action android:name="android.intent.action.VIEW" />
    <category android:name="android.intent.category.DEFAULT"/>
    <category android:name="android.intent.category.BROWSABLE"/>
    <data android:host="appName.com" android:scheme="appSchema"/>
</intent-filter>

Explanation of the lines above:

<category android:name="android.intent.category.BROWSABLE"/> makes the target activity allow itself to be started by a web browser to display data referenced by a link.
<data android:host="appName.com" android:scheme="appSchema"/> specifies the scheme and host of our callback URL.

All together, these lines will cause the specific Activity to be opened whenever the callback URL is called in a browser.

Now, in order to get the contents of the URL in your Activity, you need to override the onResume() method as follows:

@Override
public void onResume() {
    super.onResume();
    // The following line will return "appSchema://appName.com".
    String CALLBACK_URL = getResources().getString(R.string.insta_callback);
    Uri uri = getIntent().getData();
    if (uri != null && uri.toString().startsWith(CALLBACK_URL)) {
        String access_token = uri.getQueryParameter("access_token");
    }
    // Perform other operations here.
}

Now you have retrieved the access_token from Instagram, which is used in various API endpoints of Instagram.

Chapter 76: Snackbar

Parameter       Description
view            View: The view to find a parent from.
text            CharSequence: The text to show. Can be formatted text.
resId           int: The resource id of the string resource to use. Can be formatted text.
duration        int: How long to display the message. This can be LENGTH_SHORT, LENGTH_LONG or LENGTH_INDEFINITE.

Section 76.1: Creating a simple Snackbar

Creating a Snackbar can be done as follows:

Snackbar.make(view, "Text to display", Snackbar.LENGTH_LONG).show();

The view is used to find a suitable parent to use to display the Snackbar. Typically this would be a CoordinatorLayout that you've defined in your XML, which enables added functionality such as swipe-to-dismiss and the automatic moving of other widgets (e.g. a FloatingActionButton). If there's no CoordinatorLayout, the window decor's content view is used.
Very often we also add an action to the Snackbar. A common use case would be an "Undo" action:

Snackbar.make(view, "Text to display", Snackbar.LENGTH_LONG)
        .setAction("UNDO", new View.OnClickListener() {
            @Override
            public void onClick(View view) {
                // put your logic here
            }
        })
        .show();

You can create a Snackbar and show it later:

Snackbar snackbar = Snackbar.make(view, "Text to display", Snackbar.LENGTH_LONG);
snackbar.show();

If you want to change the color of the Snackbar's text:

Snackbar snackbar = Snackbar.make(view, "Text to display", Snackbar.LENGTH_LONG);
View view = snackbar.getView();
TextView textView = (TextView) view.findViewById(android.support.design.R.id.snackbar_text);
textView.setTextColor(Color.parseColor("#FF4500"));
snackbar.show();

By default, a Snackbar is dismissed by swiping it to the right. A sketch of how dismissal by a left swipe could be achieved instead is shown after Section 76.3 below.

Section 76.2: Custom Snack Bar

A function to customize the Snackbar:

public static Snackbar makeText(Context context, String message, int duration) {
    Activity activity = (Activity) context;
    View layout;
    Snackbar snackbar = Snackbar
            .make(activity.findViewById(android.R.id.content), message, duration);
    layout = snackbar.getView();
    // setting background color
    layout.setBackgroundColor(context.getResources().getColor(R.color.orange));
    android.widget.TextView text = (android.widget.TextView)
            layout.findViewById(android.support.design.R.id.snackbar_text);
    // setting font color
    text.setTextColor(context.getResources().getColor(R.color.white));
    // setting font
    Typeface font = Typeface.createFromAsset(context.getAssets(), "DroidSansFallbackanmol256.ttf");
    text.setTypeface(font);
    return snackbar;
}

Call the function from a fragment or activity:

SnackBar.makeText(MyActivity.this, "Please Locate your address at Map", Snackbar.LENGTH_SHORT).show();

Section 76.3: Custom Snackbar (no need for a view)

This creates a Snackbar without the need to pass a view to it; the Android content view android.R.id.content is used as the parent layout.

public class CustomSnackBar {

    public static final int STATE_ERROR = 0;
    public static final int STATE_WARNING = 1;
    public static final int STATE_SUCCESS = 2;
    public static final int VIEW_PARENT = android.R.id.content;

    public CustomSnackBar(View view, String message, int actionType) {
        Snackbar snackbar = Snackbar.make(view, message, Snackbar.LENGTH_LONG);
        View sbView = snackbar.getView();
        TextView textView = (TextView) sbView.findViewById(android.support.design.R.id.snackbar_text);
        textView.setTextColor(Color.parseColor("#ffffff"));
        textView.setTextSize(TypedValue.COMPLEX_UNIT_SP, 14);
        textView.setGravity(Gravity.CENTER);
        textView.setLayoutDirection(View.LAYOUT_DIRECTION_RTL);

        switch (actionType) {
            case STATE_ERROR:
                snackbar.getView().setBackgroundColor(Color.parseColor("#F12B2B"));
                break;
            case STATE_WARNING:
                snackbar.getView().setBackgroundColor(Color.parseColor("#000000"));
                break;
            case STATE_SUCCESS:
                snackbar.getView().setBackgroundColor(Color.parseColor("#7ED321"));
                break;
        }
        snackbar.show();
    }
}

To call the class:

new CustomSnackBar(findViewById(CustomSnackBar.VIEW_PARENT), "message", CustomSnackBar.STATE_ERROR);
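As for the left-swipe dismissal mentioned at the end of Section 76.1: there is no public Snackbar API for this, but since a Snackbar inside a CoordinatorLayout gets its swipe handling from a SwipeDismissBehavior, one workaround is to replace that behavior. The following is a minimal sketch of the idea; note that this is an assumption-laden workaround (replacing the built-in behavior means the Snackbar's timeout is no longer paused while the user is dragging it), not an official API:

final Snackbar snackbar = Snackbar.make(coordinatorLayout, "Swipe me left", Snackbar.LENGTH_LONG);
final View snackbarView = snackbar.getView();
snackbarView.addOnAttachStateChangeListener(new View.OnAttachStateChangeListener() {
    @Override
    public void onViewAttachedToWindow(View v) {
        // The CoordinatorLayout.LayoutParams only exist once the view is attached
        ViewGroup.LayoutParams lp = v.getLayoutParams();
        if (lp instanceof CoordinatorLayout.LayoutParams) {
            SwipeDismissBehavior<View> behavior = new SwipeDismissBehavior<>();
            // only allow dismissal by swiping from end to start (left, in LTR locales)
            behavior.setSwipeDirection(SwipeDismissBehavior.SWIPE_DIRECTION_END_TO_START);
            ((CoordinatorLayout.LayoutParams) lp).setBehavior(behavior);
        }
    }

    @Override
    public void onViewDetachedFromWindow(View v) {
    }
});
snackbar.show();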
Section 76.4: Snackbar with Callback

You can use Snackbar.Callback to listen for whether the snackbar was dismissed by the user or by a timeout:

Snackbar.make(getView(), "Hi snackbar!", Snackbar.LENGTH_LONG).setCallback(
        new Snackbar.Callback() {
            @Override
            public void onDismissed(Snackbar snackbar, int event) {
                switch (event) {
                    case Snackbar.Callback.DISMISS_EVENT_ACTION:
                        Toast.makeText(getActivity(), "Clicked the action", Toast.LENGTH_LONG).show();
                        break;
                    case Snackbar.Callback.DISMISS_EVENT_TIMEOUT:
                        Toast.makeText(getActivity(), "Time out", Toast.LENGTH_LONG).show();
                        break;
                }
            }

            @Override
            public void onShown(Snackbar snackbar) {
                Toast.makeText(getActivity(), "This is my annoying step-brother", Toast.LENGTH_LONG).show();
            }
        }).setAction("Go!", new View.OnClickListener() {
            @Override
            public void onClick(View v) {
            }
        }).show();

Section 76.5: Snackbar vs Toasts: Which one should I use?

Toasts are generally used when we want to display information to the user regarding some action that has successfully (or not) happened, and this action does not require the user to take any other action. Like when a message has been sent, for example:

Toast.makeText(this, "Message Sent!", Toast.LENGTH_SHORT).show();

Snackbars are also used to display information. But this time, we can give the user an opportunity to take an action. For example, let's say the user deleted a picture by mistake and wants to get it back. We can provide a Snackbar with an "Undo" action. Like this:

Snackbar.make(getCurrentFocus(), "Picture Deleted", Snackbar.LENGTH_SHORT)
        .setAction("Undo", new View.OnClickListener() {
            @Override
            public void onClick(View view) {
                // Return the picture
            }
        })
        .show();

Conclusion: Toasts are used when we don't need user interaction. Snackbars are used to allow users to take another action or undo a previous one.

Section 76.6: Custom Snackbar

This example shows a white Snackbar with a custom Undo icon.

Snackbar customBar = Snackbar.make(view, "Text to be displayed", Snackbar.LENGTH_LONG);
customBar.setAction("UNDO", new View.OnClickListener() {
    @Override
    public void onClick(View view) {
        // Put the logic for the undo button here
    }
});

View sbView = customBar.getView();
// Changing background to white
sbView.setBackgroundColor(Color.WHITE);

TextView snackText = (TextView) sbView.findViewById(android.support.design.R.id.snackbar_text);
if (snackText != null) {
    // Changing text color to black
    snackText.setTextColor(Color.BLACK);
}

TextView actionText = (TextView) sbView.findViewById(android.support.design.R.id.snackbar_action);
if (actionText != null) {
    // Setting custom Undo icon
    actionText.setCompoundDrawablesRelativeWithIntrinsicBounds(R.drawable.custom_undo, 0, 0, 0);
}
customBar.show();

Chapter 77: Widgets

Section 77.1: Manifest Declaration

Declare the AppWidgetProvider class in your application's AndroidManifest.xml file.
For example:

<receiver android:name="ExampleAppWidgetProvider" >
    <intent-filter>
        <action android:name="android.appwidget.action.APPWIDGET_UPDATE" />
    </intent-filter>
    <meta-data android:name="android.appwidget.provider"
               android:resource="@xml/example_appwidget_info" />
</receiver>

Section 77.2: Metadata

Add the AppWidgetProviderInfo metadata in res/xml:

<appwidget-provider xmlns:android="http://schemas.android.com/apk/res/android"
    android:minWidth="40dp"
    android:minHeight="40dp"
    android:updatePeriodMillis="86400000"
    android:previewImage="@drawable/preview"
    android:initialLayout="@layout/example_appwidget"
    android:configure="com.example.android.ExampleAppWidgetConfigure"
    android:resizeMode="horizontal|vertical"
    android:widgetCategory="home_screen">
</appwidget-provider>

Section 77.3: AppWidgetProvider Class

The most important AppWidgetProvider callback is onUpdate(). It is called every time an app widget is added.

public class ExampleAppWidgetProvider extends AppWidgetProvider {

    public void onUpdate(Context context, AppWidgetManager appWidgetManager, int[] appWidgetIds) {
        final int N = appWidgetIds.length;

        // Perform this loop procedure for each App Widget that belongs to this provider
        for (int i = 0; i < N; i++) {
            int appWidgetId = appWidgetIds[i];

            // Create an Intent to launch ExampleActivity
            Intent intent = new Intent(context, ExampleActivity.class);
            PendingIntent pendingIntent = PendingIntent.getActivity(context, 0, intent, 0);

            // Get the layout for the App Widget and attach an on-click listener
            // to the button
            RemoteViews views = new RemoteViews(context.getPackageName(), R.layout.appwidget_provider_layout);
            views.setOnClickPendingIntent(R.id.button, pendingIntent);

            // Tell the AppWidgetManager to perform an update on the current app widget
            appWidgetManager.updateAppWidget(appWidgetId, views);
        }
    }
}

onAppWidgetOptionsChanged() is called when the widget is placed or resized. onDeleted(Context, int[]) is called when the widget is deleted.

Section 77.4: Create/Integrate a Basic Widget using Android Studio

The latest Android Studio will create and integrate a basic widget into your application in two steps: right-click on your application ==> New ==> Widget ==> App Widget. It will show a configuration screen; fill in the fields and you're done.

It will create and integrate a basic HelloWorld widget (including the layout file, the metadata file, the declaration in the manifest file, etc.) into your application.

Section 77.5: Two widgets with different layouts declaration

1. Declare two receivers in the manifest file:

<receiver
    android:name=".UVMateWidget"
    android:label="UVMate Widget 1x1">
    <intent-filter>
        <action android:name="android.appwidget.action.APPWIDGET_UPDATE" />
    </intent-filter>
    <meta-data
        android:name="android.appwidget.provider"
        android:resource="@xml/widget_1x1" />
</receiver>
<receiver
    android:name=".UVMateWidget2x2"
    android:label="UVMate Widget 2x2">
    <intent-filter>
        <action android:name="android.appwidget.action.APPWIDGET_UPDATE" />
    </intent-filter>
    <meta-data
        android:name="android.appwidget.provider"
        android:resource="@xml/widget_2x2" />
</receiver>

2. Create the two layouts @xml/widget_1x1 and @xml/widget_2x2.
3. Declare the subclass UVMateWidget2x2, derived from the UVMateWidget class, with extended behavior:

package au.com.aershov.uvmate;

import android.content.Context;
import android.widget.RemoteViews;

public class UVMateWidget2x2 extends UVMateWidget {

    public RemoteViews getRemoteViews(Context context, int minWidth, int minHeight) {
        mUVMateHelper.saveWidgetSize(mContext.getString(R.string.app_ws_2x2));
        return new RemoteViews(context.getPackageName(), R.layout.widget_2x2);
    }
}

Chapter 78: Toast

Parameter       Details
context         The context to display your Toast in. this is commonly used in an Activity and getActivity() is commonly used in a Fragment.
text            A CharSequence that specifies what text will be shown in the Toast. Any object that implements CharSequence can be used, including a String.
resId           A resource ID that can be used to provide a resource String to display in the Toast.
duration        Integer flag representing how long the Toast will show. Options are Toast.LENGTH_SHORT and Toast.LENGTH_LONG.
gravity         Integer specifying the position, or "gravity", of the Toast.
xOffset         Specifies the horizontal offset for the Toast position.
yOffset         Specifies the vertical offset for the Toast position.

A Toast provides simple feedback about an operation in a small popup and automatically disappears after a timeout. It only fills the amount of space required for the message, and the current activity remains visible and interactive.

Section 78.1: Creating a custom Toast

If you don't want to use the default Toast view, you can provide your own using the setView(View) method on a Toast object.

First, create the XML layout you would like to use in your Toast:

<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:id="@+id/toast_layout_root"
    android:orientation="vertical"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:padding="8dp"
    android:background="#111">

    <TextView
        android:id="@+id/title"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:textColor="#FFF"/>

    <TextView
        android:id="@+id/description"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:textColor="#FFF"/>
</LinearLayout>

Then, when creating your Toast, inflate your custom View from XML and call setView():

// Inflate the custom view from XML
LayoutInflater inflater = getLayoutInflater();
View layout = inflater.inflate(R.layout.custom_toast_layout,
        (ViewGroup) findViewById(R.id.toast_layout_root));

// Set the title and description TextViews from our custom layout
TextView title = (TextView) layout.findViewById(R.id.title);
title.setText("Toast Title");

TextView description = (TextView) layout.findViewById(R.id.description);
description.setText("Toast Description");

// Create and show the Toast object
Toast toast = new Toast(getApplicationContext());
toast.setGravity(Gravity.CENTER, 0, 0);
toast.setDuration(Toast.LENGTH_LONG);
toast.setView(layout);
toast.show();

Section 78.2: Set position of a Toast

A standard toast notification appears at the bottom of the screen, centered horizontally. You can change this position with setGravity(int, int, int). This accepts three parameters: a Gravity constant, an x-position offset, and a y-position offset.
For example, if you decide that the toast should appear in the top-left corner, you can set the gravity like this:

toast.setGravity(Gravity.TOP | Gravity.LEFT, 0, 0);

Section 78.3: Showing a Toast Message

In Android, a Toast is a simple UI element that can be used to give contextual feedback to a user.

To display a simple Toast message, we can do the following:

// Declare the parameters to use for the Toast
Context context = getApplicationContext();
// in an Activity, you may also use "this"
// in a Fragment, you can use getActivity()
CharSequence message = "I'm an Android Toast!";
int duration = Toast.LENGTH_LONG;    // Toast.LENGTH_SHORT is the other option

// Create the Toast object, and show it!
Toast myToast = Toast.makeText(context, message, duration);
myToast.show();

Or, to show a Toast inline, without holding on to the Toast object, you can write:

Toast.makeText(context, "Ding! Your Toast is ready.", Toast.LENGTH_SHORT).show();

IMPORTANT: Make sure that the show() method is called from the UI thread. If you're trying to show a Toast from a different thread, you can e.g. use the runOnUiThread method of an Activity. Failing to do so - that is, trying to modify the UI by creating a Toast from a background thread - will throw a RuntimeException which will look like this:

java.lang.RuntimeException: Can't create handler inside thread that has not called Looper.prepare()

The simplest way of handling this exception is just by using runOnUiThread; the syntax is shown below:

runOnUiThread(new Runnable() {
    @Override
    public void run() {
        // Your code here
    }
});

Section 78.4: Show Toast Message Above Soft Keyboard

By default, Android will display Toast messages at the bottom of the screen even if the keyboard is showing. This will show a Toast message just above the keyboard:

public void showMessage(final String message, final int length) {
    View root = findViewById(android.R.id.content);
    Toast toast = Toast.makeText(this, message, length);
    int yOffset = Math.max(0, root.getHeight() - toast.getYOffset());
    toast.setGravity(Gravity.TOP | Gravity.CENTER_HORIZONTAL, 0, yOffset);
    toast.show();
}

Section 78.5: Thread safe way of displaying Toast (Application Wide)

public class MainApplication extends Application {

    private static Context context;    // application context
    private Handler mainThreadHandler;
    private Toast toast;

    public Handler getMainThreadHandler() {
        if (mainThreadHandler == null) {
            mainThreadHandler = new Handler(Looper.getMainLooper());
        }
        return mainThreadHandler;
    }

    @Override
    public void onCreate() {
        super.onCreate();
        context = this;
    }

    public static MainApplication getApp() {
        return (MainApplication) context;
    }

    /**
     * Thread safe way of displaying a toast.
     * @param message
     * @param duration
     */
    public void showToast(final String message, final int duration) {
        getMainThreadHandler().post(new Runnable() {
            @Override
            public void run() {
                if (!TextUtils.isEmpty(message)) {
                    if (toast != null) {
                        toast.cancel();    // dismiss the current toast if visible
                        toast.setText(message);
                    } else {
                        toast = Toast.makeText(MainApplication.this, message, duration);
                    }
                    toast.show();
                }
            }
        });
    }
}

Remember to register MainApplication in the manifest (via the android:name attribute on the <application> tag). Now call it from any thread to display a toast message:

MainApplication.getApp().showToast("Some message", Toast.LENGTH_LONG);

Section 78.6: Thread safe way of displaying a Toast Message (For AsyncTask)

If you don't want to extend Application and still keep your toast messages thread safe, make sure you show them in the onPostExecute() section of your AsyncTasks.
public class MyAsyncTask extends AsyncTask<Void, Void, Void> {

    private final Context context;    // supplied by the caller, e.g. an Activity

    public MyAsyncTask(Context context) {
        this.context = context;
    }

    @Override
    protected Void doInBackground(Void... params) {
        // Do your background work here
        return null;
    }

    @Override
    protected void onPostExecute(Void aVoid) {
        // Show toast messages here
        Toast.makeText(context, "Ding! Your Toast is ready.", Toast.LENGTH_SHORT).show();
    }
}

Chapter 79: Create Singleton Class for Toast Message

Parameter       Details
context         Relevant context which needs to display your toast message. If you use this in an Activity, pass "this"; if you use it in a Fragment, pass getActivity().
view            Create a custom view and pass that view object to this.
gravity         Pass the gravity position of the toast. All the positions are available under the Gravity class as static variables. The most common positions are Gravity.TOP, Gravity.BOTTOM, Gravity.LEFT, Gravity.RIGHT.
xOffset         Horizontal offset of the toast message.
yOffset         Vertical offset of the toast message.
duration        Duration of the toast. We can set either Toast.LENGTH_SHORT or Toast.LENGTH_LONG.

Toast messages are the simplest way of providing feedback to the user. By default, Android provides a gray toast message in which we can set the message text and the duration. If we need to create a more customizable and reusable toast message, we can implement it ourselves with the use of a custom layout. More importantly, when we implement it, the use of the Singleton design pattern makes the custom toast message class easy to maintain and develop.

Section 79.1: Create your own singleton class for toast messages

Here is how to create your own singleton class for toast messages. If your application needs to show success, warning and danger messages for different use cases, you can use this class after modifying it to your own specifications.
public class ToastGenerate {

    private static ToastGenerate ourInstance;
    private final Context context;

    private ToastGenerate(Context context) {
        // consider passing context.getApplicationContext() to avoid leaking an Activity
        this.context = context;
    }

    public static ToastGenerate getInstance(Context context) {
        if (ourInstance == null)
            ourInstance = new ToastGenerate(context);
        return ourInstance;
    }

    // pass the message and the message type to this method
    public void createToastMessage(String message, int type) {

        // inflate the custom layout
        LayoutInflater layoutInflater = (LayoutInflater)
                context.getSystemService(Context.LAYOUT_INFLATER_SERVICE);
        LinearLayout toastLayout = (LinearLayout)
                layoutInflater.inflate(R.layout.layout_custome_toast, null);
        TextView toastShowMessage = (TextView)
                toastLayout.findViewById(R.id.textCustomToastTopic);

        switch (type) {
            case 0:
                // if the message type is 0, the fail toast method will be called
                createFailToast(toastLayout, toastShowMessage, message);
                break;
            case 1:
                // if the message type is 1, the success toast method will be called
                createSuccessToast(toastLayout, toastShowMessage, message);
                break;
            case 2:
                // if the message type is 2, the warning toast method will be called
                createWarningToast(toastLayout, toastShowMessage, message);
                break;
            default:
                createFailToast(toastLayout, toastShowMessage, message);
        }
    }

    // failure toast message method
    private void createFailToast(LinearLayout toastLayout, TextView toastMessage, String message) {
        toastLayout.setBackgroundColor(context.getResources().getColor(R.color.button_alert_normal));
        toastMessage.setText(message);
        toastMessage.setTextColor(context.getResources().getColor(R.color.white));
        showToast(toastLayout);
    }

    // warning toast message method
    private void createWarningToast(LinearLayout toastLayout, TextView toastMessage, String message) {
        toastLayout.setBackgroundColor(context.getResources().getColor(R.color.warning_toast));
        toastMessage.setText(message);
        toastMessage.setTextColor(context.getResources().getColor(R.color.white));
        showToast(toastLayout);
    }

    // success toast message method
    private void createSuccessToast(LinearLayout toastLayout, TextView toastMessage, String message) {
        toastLayout.setBackgroundColor(context.getResources().getColor(R.color.success_toast));
        toastMessage.setText(message);
        toastMessage.setTextColor(context.getResources().getColor(R.color.white));
        showToast(toastLayout);
    }

    private void showToast(View view) {
        Toast toast = new Toast(context);
        toast.setGravity(Gravity.TOP, 0, 0);    // show the message at the top of the device
        toast.setDuration(Toast.LENGTH_SHORT);
        toast.setView(view);
        toast.show();
    }
}

Chapter 80: Interfaces

Section 80.1: Custom Listener

Define the interface

// In this interface, you can define the messages which will be sent to the owner.
public interface MyCustomListener {
    // In this case we have two messages:
    // the first is sent when the process is successful...
    void onSuccess(List<Bitmap> bitmapList);
    // ...and the second when the process fails.
    void onFailure(String error);
}

Create the listener

In the next step we need to define an instance variable in the object that will send callbacks via MyCustomListener, and add a setter for our listener:

public class SampleClassB {
    private MyCustomListener listener;

    public void setMyCustomListener(MyCustomListener listener) {
        this.listener = listener;
    }
}

Implement the listener

Now, in another class, we can create an instance of SampleClassB.
public class SomeActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        SampleClassB sampleClass = new SampleClassB();
    }
}

Next, we can set our listener on sampleClass in two ways:

By implementing MyCustomListener in our class:

public class SomeActivity extends Activity implements MyCustomListener {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        SampleClassB sampleClass = new SampleClassB();
        sampleClass.setMyCustomListener(this);
    }

    @Override
    public void onSuccess(List<Bitmap> bitmapList) {
    }

    @Override
    public void onFailure(String error) {
    }
}

Or by just instantiating an anonymous inner class:

public class SomeActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        SampleClassB sampleClass = new SampleClassB();
        sampleClass.setMyCustomListener(new MyCustomListener() {
            @Override
            public void onSuccess(List<Bitmap> bitmapList) {
            }

            @Override
            public void onFailure(String error) {
            }
        });
    }
}

Trigger the listener

public class SampleClassB {
    private MyCustomListener listener;

    public void setMyCustomListener(MyCustomListener listener) {
        this.listener = listener;
    }

    public void doSomething() {
        fetchImages();
    }

    private void fetchImages() {
        AsyncImageFetch imageFetch = new AsyncImageFetch();
        imageFetch.start(new Response<Bitmap>() {
            @Override
            public void onDone(List<Bitmap> bitmapList, Exception e) {
                // do some stuff if needed

                // check whether the listener is set or not
                if (listener == null)
                    return;

                // Fire the proper event. The bitmapList or the error message will be
                // sent to the class which set the listener.
                if (e == null)
                    listener.onSuccess(bitmapList);
                else
                    listener.onFailure(e.getMessage());
            }
        });
    }
}

Section 80.2: Basic Listener

The "listener" or "observer" pattern is the most common strategy for creating asynchronous callbacks in Android development.

public class MyCustomObject {

    // 1 - Define the interface
    public interface MyCustomObjectListener {
        public void onAction(String action);
    }

    // 2 - Declare your listener object
    private MyCustomObjectListener listener;

    // ... and initialize it in the constructor
    public MyCustomObject() {
        this.listener = null;
    }

    // 3 - Create your listener setter
    public void setCustomObjectListener(MyCustomObjectListener listener) {
        this.listener = listener;
    }

    // 4 - Trigger the listener event
    public void makeSomething() {
        if (this.listener != null) {
            listener.onAction("hello!");
        }
    }
}

Now in your Activity:

public class MyActivity extends Activity {
    public final String TAG = "MyActivity";

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main_activity);

        MyCustomObject mObj = new MyCustomObject();

        // 5 - Implement the listener callback
        mObj.setCustomObjectListener(new MyCustomObject.MyCustomObjectListener() {
            @Override
            public void onAction(String action) {
                Log.d(TAG, "Value: " + action);
            }
        });
    }
}

Chapter 81: Animators

Section 81.1: TransitionDrawable animation

This example displays a transition for an image view with only two images. (You can use more images as well, one after the other in the first and second layer positions after each transition, as a loop.)

Add an image array to res/values/arrays.xml:
so it actually disappear from the layout & don't take up space. viewToFadeOut.setVisibility(View.GONE); } }); fadeOut.start(); } void fadeInAnimation(View viewToFadeIn) { ObjectAnimator fadeIn = ObjectAnimator.ofFloat(viewToFadeIn, "alpha", 0f, 1f); fadeIn.setDuration(500); fadeIn.addListener(new AnimatorListenerAdapter() { @Override public void onAnimationStar(Animator animation) { // We wanna set the view to VISIBLE, but with alpha 0. So it appear invisible in the layout. viewToFadeIn.setVisibility(View.VISIBLE); viewToFadeIn.setAlpha(0); } }); fadeIn.start(); } Section 81.3: ValueAnimator ValueAnimator introduces a simple way to animate a value (of a particular type, e.g. int, float, etc.). The usual way of using it is: 1. Create a ValueAnimator that will animate a value from min to max 2. Add an UpdateListener in which you will use the calculated animated value (which you can obtain with getAnimatedValue()) There are two ways you can create the ValueAnimator: (the example code animates a float from 20f to 40f in 250ms) 1. From xml (put it in the /res/animator/): <animator xmlns:android="http://schemas.android.com/apk/res/android" android:duration="250" android:valueFrom="20" android:valueTo="40" android:valueType="floatType"/> ValueAnimator animator = (ValueAnimator) AnimatorInflater.loadAnimator(context, GoalKicker.com Android Notes for Professionals 532 R.animator.example_animator); animator.addUpdateListener(new ValueAnimator.AnimatorUpdateListener() { @Override public void onAnimationUpdate(ValueAnimator anim) { // ... use the anim.getAnimatedValue() } }); // set all the other animation-related stuff you want (interpolator etc.) animator.start(); 2. From the code: ValueAnimator animator = ValueAnimator.ofFloat(20f, 40f); animator.setDuration(250); animator.addUpdateListener(new ValueAnimator.AnimatorUpdateListener() { @Override public void onAnimationUpdate(ValueAnimator anim) { // use the anim.getAnimatedValue() } }); // set all the other animation-related stuff you want (interpolator etc.) animator.start(); Section 81.4: Expand and Collapse animation of View public class ViewAnimationUtils { public static void expand(final View v) { v.measure(LayoutParams.MATCH_PARENT, LayoutParams.WRAP_CONTENT); final int targtetHeight = v.getMeasuredHeight(); v.getLayoutParams().height = 0; v.setVisibility(View.VISIBLE); Animation a = new Animation() { @Override protected void applyTransformation(float interpolatedTime, Transformation t) { v.getLayoutParams().height = interpolatedTime == 1 ? 
                v.getLayoutParams().height = interpolatedTime == 1
                        ? LayoutParams.WRAP_CONTENT
                        : (int) (targetHeight * interpolatedTime);
                v.requestLayout();
            }

            @Override
            public boolean willChangeBounds() {
                return true;
            }
        };

        // duration proportional to the view's height (roughly 1 dp per millisecond)
        a.setDuration((int) (targetHeight / v.getContext().getResources().getDisplayMetrics().density));
        v.startAnimation(a);
    }

    public static void collapse(final View v) {
        final int initialHeight = v.getMeasuredHeight();

        Animation a = new Animation() {
            @Override
            protected void applyTransformation(float interpolatedTime, Transformation t) {
                if (interpolatedTime == 1) {
                    v.setVisibility(View.GONE);
                } else {
                    v.getLayoutParams().height = initialHeight - (int) (initialHeight * interpolatedTime);
                    v.requestLayout();
                }
            }

            @Override
            public boolean willChangeBounds() {
                return true;
            }
        };

        a.setDuration((int) (initialHeight / v.getContext().getResources().getDisplayMetrics().density));
        v.startAnimation(a);
    }
}

Section 81.5: ObjectAnimator

ObjectAnimator is a subclass of ValueAnimator with the added ability to set the calculated value to a property of a target View.

Just like with the ValueAnimator, there are two ways you can create the ObjectAnimator (the example code animates the alpha of a View from 0.4f to 0.2f in 250 ms):

1. From XML (put it in /res/animator):

<objectAnimator xmlns:android="http://schemas.android.com/apk/res/android"
    android:duration="250"
    android:propertyName="alpha"
    android:valueFrom="0.4"
    android:valueTo="0.2"
    android:valueType="floatType"/>

ObjectAnimator animator = (ObjectAnimator) AnimatorInflater.loadAnimator(context,
        R.animator.example_animator);
animator.setTarget(exampleView);
// set all the animation-related stuff you want (interpolator etc.)
animator.start();

2. From code:

ObjectAnimator animator = ObjectAnimator.ofFloat(exampleView, View.ALPHA, 0.4f, 0.2f);
animator.setDuration(250);
// set all the animation-related stuff you want (interpolator etc.)
animator.start();

Section 81.6: ViewPropertyAnimator

ViewPropertyAnimator is a simplified and optimized way to animate properties of a View.

Every single View has a ViewPropertyAnimator object available through the animate() method. You can use that to animate multiple properties at once with a simple call. Every single method of a ViewPropertyAnimator specifies the target value of a specific parameter that the ViewPropertyAnimator should animate to.

View exampleView = ...;
exampleView.animate()
        .alpha(0.6f)
        .translationY(200)
        .translationXBy(10)
        .scaleX(1.5f)
        .setDuration(250)
        .setInterpolator(new FastOutLinearInInterpolator());

Note: Calling start() on a ViewPropertyAnimator object is NOT mandatory. If you don't do it, you're just letting the platform handle the starting of the animation at the appropriate time (the next animation handling pass). If you actually do call start(), you're making sure the animation starts immediately.

Section 81.7: Shake animation of an ImageView

Under the res folder, create a new folder called "anim" to store your animation resources, and put the following file in that folder.
shakeanimation.xml

<?xml version="1.0" encoding="utf-8"?>
<rotate xmlns:android="http://schemas.android.com/apk/res/android"
    android:duration="100"
    android:fromDegrees="-15"
    android:pivotX="50%"
    android:pivotY="50%"
    android:repeatCount="infinite"
    android:repeatMode="reverse"
    android:toDegrees="15" />

Create a blank activity called Landing.

activity_landing.xml

<RelativeLayout
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <ImageView
        android:id="@+id/imgBell"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_centerInParent="true"
        android:src="@mipmap/ic_notifications_white_48dp"/>
</RelativeLayout>

And the method for animating the ImageView in Landing.java:

Context mContext;

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    mContext = this;
    setContentView(R.layout.activity_landing);
    AnimateBell();
}

public void AnimateBell() {
    Animation shake = AnimationUtils.loadAnimation(mContext, R.anim.shakeanimation);
    ImageView imgBell = (ImageView) findViewById(R.id.imgBell);
    imgBell.setImageResource(R.mipmap.ic_notifications_active_white_48dp);
    imgBell.setAnimation(shake);
}

Chapter 82: Location

Android Location APIs are used in a wide variety of apps for different purposes, such as finding the user's location, notifying when the user has left a general area (geofencing), and helping interpret user activity (walking, running, driving, etc.).

However, the Android Location APIs are not the only means of acquiring the user's location. The following gives examples of how to use Android's LocationManager and other common location libraries.

Section 82.1: Fused location API

Example Using Activity w/ LocationRequest

/*
 * This example is useful if you only want to receive updates in this
 * activity only, and have no use for location anywhere else.
 */
public class LocationActivity extends AppCompatActivity implements
        GoogleApiClient.ConnectionCallbacks, GoogleApiClient.OnConnectionFailedListener, LocationListener {

    private GoogleApiClient mGoogleApiClient;
    private LocationRequest mLocationRequest;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        mGoogleApiClient = new GoogleApiClient.Builder(this)
                .addConnectionCallbacks(this)
                .addOnConnectionFailedListener(this)
                .addApi(LocationServices.API)
                .build();

        mLocationRequest = new LocationRequest()
                .setPriority(LocationRequest.PRIORITY_HIGH_ACCURACY) // GPS quality location points
                .setInterval(2000)         // at least once every 2 seconds
                .setFastestInterval(1000); // at most once a second
    }

    @Override
    protected void onStart() {
        super.onStart();
        mGoogleApiClient.connect();
    }

    @Override
    protected void onResume() {
        super.onResume();
        // Permission check for Android 6.0+
        if (ActivityCompat.checkSelfPermission(this, Manifest.permission.ACCESS_FINE_LOCATION)
                == PackageManager.PERMISSION_GRANTED) {
            if (mGoogleApiClient.isConnected()) {
                LocationServices.FusedLocationApi.requestLocationUpdates(mGoogleApiClient,
                        mLocationRequest, this);
            }
        }
    }

    @Override
    protected void onPause() {
        super.onPause();
        // Permission check for Android 6.0+
        if (ActivityCompat.checkSelfPermission(this, Manifest.permission.ACCESS_FINE_LOCATION)
                == PackageManager.PERMISSION_GRANTED) {
            if (mGoogleApiClient.isConnected()) {
                LocationServices.FusedLocationApi.removeLocationUpdates(mGoogleApiClient, this);
            }
        }
    }

    @Override
    protected void onStop() {
        super.onStop();
        mGoogleApiClient.disconnect();
    }

    @Override
    public void onConnected(@Nullable Bundle bundle) {
        if (ActivityCompat.checkSelfPermission(this, Manifest.permission.ACCESS_FINE_LOCATION)
                == PackageManager.PERMISSION_GRANTED) {
            LocationServices.FusedLocationApi.requestLocationUpdates(mGoogleApiClient,
                    mLocationRequest, this);
        }
    }

    @Override
    public void onConnectionSuspended(int i) {
        mGoogleApiClient.connect();
    }

    @Override
    public void onConnectionFailed(@NonNull ConnectionResult connectionResult) {
    }

    @Override
    public void onLocationChanged(Location location) {
        // Handle your location update code here
    }
}

Example Using Service w/ PendingIntent and BroadcastReceiver

ExampleActivity

Recommended reading: LocalBroadcastManager

/*
 * This example is useful if you have many different classes that should be
 * receiving location updates, but want more granular control over which ones
 * listen to the updates.
 *
 * For example, this activity will stop getting updates when it is not visible, but a database
 * class with a registered local receiver will continue to receive updates until "stopUpdates()"
 * is called here.
 */
public class ExampleActivity extends AppCompatActivity {

    private InternalLocationReceiver mInternalLocationReceiver;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // Create the internal receiver object in this method only.
        mInternalLocationReceiver = new InternalLocationReceiver(this);
    }

    @Override
    protected void onResume() {
        super.onResume();

        // Register to receive updates in the activity only when the activity is visible
        LocalBroadcastManager.getInstance(this).registerReceiver(mInternalLocationReceiver,
                new IntentFilter("googleLocation"));
    }

    @Override
    protected void onPause() {
        super.onPause();

        // Unregister to stop receiving updates in the activity when it is not visible.
        // NOTE: You will still receive updates even if this activity is killed.
        LocalBroadcastManager.getInstance(this).unregisterReceiver(mInternalLocationReceiver);
    }

    // Helper method to request updates
    private void requestUpdates() {
        startService(new Intent(this, LocationService.class).putExtra("request", true));
    }

    // Helper method to stop updates
    private void stopUpdates() {
        startService(new Intent(this, LocationService.class).putExtra("remove", true));
    }

    /*
     * Internal receiver used to get location updates for this activity.
     *
     * This receiver, like any receiver registered with LocalBroadcastManager,
     * does not need to be registered in the Manifest.
     */
    private static class InternalLocationReceiver extends BroadcastReceiver {

        private ExampleActivity mActivity;

        InternalLocationReceiver(ExampleActivity activity) {
            mActivity = activity;
        }

        @Override
        public void onReceive(Context context, Intent intent) {
            final ExampleActivity activity = mActivity;
            if (activity != null) {
                LocationResult result = intent.getParcelableExtra("result");
                // Handle the location update here
            }
        }
    }
}

LocationService

NOTE: Don't forget to register this service in the Manifest!

public class LocationService extends Service implements
        GoogleApiClient.ConnectionCallbacks, GoogleApiClient.OnConnectionFailedListener {

    private GoogleApiClient mGoogleApiClient;
    private LocationRequest mLocationRequest;

    @Override
    public void onCreate() {
        super.onCreate();
        mGoogleApiClient = new GoogleApiClient.Builder(this)
                .addConnectionCallbacks(this)
                .addOnConnectionFailedListener(this)
                .addApi(LocationServices.API)
                .build();

        mLocationRequest = new LocationRequest()
                .setPriority(LocationRequest.PRIORITY_HIGH_ACCURACY) // GPS quality location points
                .setInterval(2000)         // at least once every 2 seconds
                .setFastestInterval(1000); // at most once a second
    }

    @Override
    public int onStartCommand(Intent intent, int flags, int startId) {
        super.onStartCommand(intent, flags, startId);

        // Permission check for Android 6.0+
        if (ContextCompat.checkSelfPermission(this, Manifest.permission.ACCESS_FINE_LOCATION)
                == PackageManager.PERMISSION_GRANTED) {
            if (intent.getBooleanExtra("request", false)) {
                if (mGoogleApiClient.isConnected()) {
                    LocationServices.FusedLocationApi.requestLocationUpdates(mGoogleApiClient,
                            mLocationRequest, getPendingIntent());
                } else {
                    mGoogleApiClient.connect();
                }
            } else if (intent.getBooleanExtra("remove", false)) {
                stopSelf();
            }
        }
        return START_STICKY;
    }

    @Override
    public void onDestroy() {
        super.onDestroy();
        if (mGoogleApiClient.isConnected()) {
            LocationServices.FusedLocationApi.removeLocationUpdates(mGoogleApiClient, getPendingIntent());
            mGoogleApiClient.disconnect();
        }
    }

    private PendingIntent getPendingIntent() {
        // Example for IntentService
        // return PendingIntent.getService(this, 0, new Intent(this, **YOUR_INTENT_SERVICE_CLASS_HERE**),
        //         PendingIntent.FLAG_UPDATE_CURRENT);

        // Example for BroadcastReceiver
        return PendingIntent.getBroadcast(this, 0, new Intent(this, LocationReceiver.class),
                PendingIntent.FLAG_UPDATE_CURRENT);
    }

    @Override
    public void onConnected(@Nullable Bundle bundle) {
        // Permission check for Android 6.0+
        if (ContextCompat.checkSelfPermission(this, Manifest.permission.ACCESS_FINE_LOCATION)
                == PackageManager.PERMISSION_GRANTED) {
            LocationServices.FusedLocationApi.requestLocationUpdates(mGoogleApiClient,
                    mLocationRequest, getPendingIntent());
        }
    }

    @Override
    public void onConnectionSuspended(int i) {
        mGoogleApiClient.connect();
    }

    @Override
onConnectionFailed(@NonNull ConnectionResult connectionResult) {
    }

    @Nullable
    @Override
    public IBinder onBind(Intent intent) {
        return null;
    }
}

LocationReceiver

NOTE: Don't forget to register this receiver in the Manifest!

public class LocationReceiver extends BroadcastReceiver {

    @Override
    public void onReceive(Context context, Intent intent) {
        if (LocationResult.hasResult(intent)) {
            LocationResult locationResult = LocationResult.extractResult(intent);
            LocalBroadcastManager.getInstance(context).sendBroadcast(
                    new Intent("googleLocation").putExtra("result", locationResult));
        }
    }
}

Section 82.2: Get Address From Location using Geocoder

After you have obtained the Location object from the Fused API, you can easily acquire Address information from it:

private Address getAddressInfo(Location location) {
    Address address = null;
    Geocoder geocoder = new Geocoder(getActivity(), Locale.getDefault());
    String errorMessage;
    List<Address> addresses = null;
    try {
        addresses = geocoder.getFromLocation(
                location.getLatitude(),
                location.getLongitude(),
                // In this sample, get just a single address.
                1);
    } catch (IOException ioException) {
        // Catch network or other I/O problems.
        errorMessage = "IOException>>" + ioException.getMessage();
        Log.e("Geocoder", errorMessage);
    } catch (IllegalArgumentException illegalArgumentException) {
        // Catch invalid latitude or longitude values.
        errorMessage = "IllegalArgumentException>>" + illegalArgumentException.getMessage();
        Log.e("Geocoder", errorMessage);
    }
    if (addresses != null && !addresses.isEmpty()) {
        address = addresses.get(0);
    }
    return address;
}

Section 82.3: Requesting location updates using LocationManager

As always, you need to make sure you have the required permissions.

public class MainActivity extends AppCompatActivity implements LocationListener {

    private LocationManager mLocationManager = null;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main2);
        mLocationManager = (LocationManager) getSystemService(Context.LOCATION_SERVICE);
    }

    @Override
    protected void onResume() {
        super.onResume();
        try {
            mLocationManager.requestLocationUpdates(LocationManager.GPS_PROVIDER, 0, 0, this);
        } catch (SecurityException e) {
            // The app doesn't have the correct permissions
        }
    }

    @Override
    protected void onPause() {
        try {
            mLocationManager.removeUpdates(this);
        } catch (SecurityException e) {
            // The app doesn't have the correct permissions
        }
        super.onPause();
    }

    @Override
    public void onLocationChanged(Location location) {
        // We received a location update!
        Log.i("onLocationChanged", location.toString());
    }

    @Override
    public void onStatusChanged(String provider, int status, Bundle extras) {
    }

    @Override
    public void onProviderEnabled(String provider) {
    }

    @Override
    public void onProviderDisabled(String provider) {
    }
}

Section 82.4: Requesting location updates on a separate thread using LocationManager

As always, you need to make sure you have the required permissions.
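On API 23 and higher the location permission must also be granted at runtime, not just declared in the Manifest. A minimal sketch of such a check, using the support library helpers (the request code 1 is an arbitrary value chosen for this example):

if (ContextCompat.checkSelfPermission(this, Manifest.permission.ACCESS_FINE_LOCATION)
        != PackageManager.PERMISSION_GRANTED) {
    // Ask the user for the permission; the result is delivered to
    // onRequestPermissionsResult() with the same request code.
    ActivityCompat.requestPermissions(this,
            new String[]{Manifest.permission.ACCESS_FINE_LOCATION},
            1); // arbitrary request code
}

Once the permission has been granted, updates can be requested on a background thread as in the following example: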
public class MainActivity extends AppCompatActivity implements LocationListener {

    private LocationManager mLocationManager = null;
    HandlerThread mLocationHandlerThread = null;
    Looper mLocationHandlerLooper = null;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main2);
        mLocationManager = (LocationManager) getSystemService(Context.LOCATION_SERVICE);
    }

    @Override
    protected void onResume() {
        super.onResume();
        // Create a fresh HandlerThread on every resume; a HandlerThread
        // cannot be restarted once it has quit.
        mLocationHandlerThread = new HandlerThread("locationHandlerThread");
        mLocationHandlerThread.start();
        mLocationHandlerLooper = mLocationHandlerThread.getLooper();
        try {
            mLocationManager.requestLocationUpdates(LocationManager.GPS_PROVIDER, 0, 0, this,
                    mLocationHandlerLooper);
        } catch (SecurityException e) {
            // The app doesn't have the correct permissions
        }
    }

    @Override
    protected void onPause() {
        try {
            mLocationManager.removeUpdates(this);
        } catch (SecurityException e) {
            // The app doesn't have the correct permissions
        }
        mLocationHandlerLooper = null;
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.JELLY_BEAN_MR2)
            mLocationHandlerThread.quitSafely();
        else
            mLocationHandlerThread.quit();
        mLocationHandlerThread = null;
        super.onPause();
    }

    @Override
    public void onLocationChanged(Location location) {
        // We received a location update on a separate thread!
        Log.i("onLocationChanged", location.toString());
        // You can verify which thread you're on with something like this:
        // Log.d("Which thread?", Thread.currentThread() == Looper.getMainLooper().getThread()
        //         ? "UI Thread" : "New thread");
    }

    @Override
    public void onStatusChanged(String provider, int status, Bundle extras) {
    }

    @Override
    public void onProviderEnabled(String provider) {
    }

    @Override
    public void onProviderDisabled(String provider) {
    }
}

Section 82.5: Getting location updates in a BroadcastReceiver

First create a BroadcastReceiver class to handle the incoming location updates:

public class LocationReceiver extends BroadcastReceiver implements Constants {

    @Override
    public void onReceive(Context context, Intent intent) {
        if (LocationResult.hasResult(intent)) {
            LocationResult locationResult = LocationResult.extractResult(intent);
            Location location = locationResult.getLastLocation();
            if (location != null) {
                // Do something with your location
            } else {
                Log.d(LocationReceiver.class.getSimpleName(), "*** location object is null ***");
            }
        }
    }
}

Then request updates when you connect to the GoogleApiClient in the onConnected callback:

@Override
public void onConnected(Bundle connectionHint) {
    Intent backgroundIntent = new Intent(this, LocationReceiver.class);
    mBackgroundPendingIntent = PendingIntent.getBroadcast(getApplicationContext(),
            LOCATION_REQUEST_CODE, backgroundIntent, PendingIntent.FLAG_CANCEL_CURRENT);
    mFusedLocationProviderApi.requestLocationUpdates(mLocationClient, mLocationRequest,
            mBackgroundPendingIntent);
}

Don't forget to remove the location update intent in the appropriate lifecycle callback:

@Override
public void onDestroy() {
    if (servicesAvailable && mLocationClient != null) {
        if (mLocationClient.isConnected()) {
            mFusedLocationProviderApi.removeLocationUpdates(mLocationClient,
                    mBackgroundPendingIntent);
            // Destroy the current location client
            mLocationClient = null;
        } else {
            mLocationClient.unregisterConnectionCallbacks(this);
            mLocationClient = null;
        }
    }
    super.onDestroy();
}

Section 82.6: Register geofence

I have created a GeoFenceObserversationService singleton class.
GeoFenceObserversationService.java:

public class GeoFenceObserversationService extends Service implements
        GoogleApiClient.ConnectionCallbacks, GoogleApiClient.OnConnectionFailedListener,
        ResultCallback<Status> {

    protected static final String TAG = "GeoFenceObserversationService";
    protected GoogleApiClient mGoogleApiClient;
    protected ArrayList<Geofence> mGeofenceList;
    private boolean mGeofencesAdded;
    private SharedPreferences mSharedPreferences;
    private static GeoFenceObserversationService mInstant;

    public static GeoFenceObserversationService getInstant() {
        return mInstant;
    }

    @Override
    public void onCreate() {
        super.onCreate();
        mInstant = this;
        mGeofenceList = new ArrayList<Geofence>();
        mSharedPreferences = getSharedPreferences(AppConstants.SHARED_PREFERENCES_NAME, MODE_PRIVATE);
        mGeofencesAdded = mSharedPreferences.getBoolean(AppConstants.GEOFENCES_ADDED_KEY, false);
        buildGoogleApiClient();
    }

    @Override
    public void onDestroy() {
        mGoogleApiClient.disconnect();
        super.onDestroy();
    }

    @Nullable
    @Override
    public IBinder onBind(Intent intent) {
        return null;
    }

    @Override
    public int onStartCommand(Intent intent, int flags, int startId) {
        return START_STICKY;
    }

    protected void buildGoogleApiClient() {
        mGoogleApiClient = new GoogleApiClient.Builder(this)
                .addConnectionCallbacks(this)
                .addOnConnectionFailedListener(this)
                .addApi(LocationServices.API)
                .build();
        mGoogleApiClient.connect();
    }

    @Override
    public void onConnected(Bundle connectionHint) {
    }

    @Override
    public void onConnectionFailed(ConnectionResult result) {
    }

    @Override
    public void onConnectionSuspended(int cause) {
    }

    private GeofencingRequest getGeofencingRequest() {
        GeofencingRequest.Builder builder = new GeofencingRequest.Builder();
        builder.setInitialTrigger(GeofencingRequest.INITIAL_TRIGGER_ENTER);
        builder.addGeofences(mGeofenceList);
        return builder.build();
    }

    public void addGeofences() {
        if (!mGoogleApiClient.isConnected()) {
            Toast.makeText(this, getString(R.string.not_connected), Toast.LENGTH_SHORT).show();
            return;
        }
        populateGeofenceList();
        if (!mGeofenceList.isEmpty()) {
            try {
                LocationServices.GeofencingApi.addGeofences(mGoogleApiClient,
                        getGeofencingRequest(), getGeofencePendingIntent()).setResultCallback(this);
            } catch (SecurityException securityException) {
                securityException.printStackTrace();
            }
        }
    }

    public void removeGeofences() {
        if (!mGoogleApiClient.isConnected()) {
            Toast.makeText(this, getString(R.string.not_connected), Toast.LENGTH_SHORT).show();
            return;
        }
        try {
            LocationServices.GeofencingApi.removeGeofences(mGoogleApiClient,
                    getGeofencePendingIntent()).setResultCallback(this);
        } catch (SecurityException securityException) {
            securityException.printStackTrace();
        }
    }

    public void onResult(Status status) {
        if (status.isSuccess()) {
            mGeofencesAdded = !mGeofencesAdded;
            SharedPreferences.Editor editor = mSharedPreferences.edit();
            editor.putBoolean(AppConstants.GEOFENCES_ADDED_KEY, mGeofencesAdded);
            editor.apply();
        } else {
            String errorMessage = AppConstants.getErrorString(this, status.getStatusCode());
            Log.i("Geofence", errorMessage);
        }
    }

    private PendingIntent getGeofencePendingIntent() {
        Intent intent = new Intent(this, GeofenceTransitionsIntentService.class);
        return PendingIntent.getService(this, 0, intent, PendingIntent.FLAG_UPDATE_CURRENT);
    }

    private void populateGeofenceList() {
        mGeofenceList.clear();
        // getGeofencesList() is assumed to be your own helper that supplies
        // the geofences you want to register.
        List<GeoFencingResponce> geoFenceList = getGeofencesList();
        if (geoFenceList != null && !geoFenceList.isEmpty()) {
            for (GeoFencingResponce obj : geoFenceList) {
                mGeofenceList.add(obj.getGeofence());
                Log.i(TAG, "Registered Geofences : "
                        + obj.Id + "-" + obj.Name + "-" + obj.Lattitude + "-" + obj.Longitude);
            }
        }
    }
}

AppConstant:

public static final String SHARED_PREFERENCES_NAME = PACKAGE_NAME + ".SHARED_PREFERENCES_NAME";
public static final String GEOFENCES_ADDED_KEY = PACKAGE_NAME + ".GEOFENCES_ADDED_KEY";
public static final String DETECTED_GEOFENCES = "detected_geofences";
public static final String DETECTED_BEACONS = "detected_beacons";

public static String getErrorString(Context context, int errorCode) {
    Resources mResources = context.getResources();
    switch (errorCode) {
        case GeofenceStatusCodes.GEOFENCE_NOT_AVAILABLE:
            return mResources.getString(R.string.geofence_not_available);
        case GeofenceStatusCodes.GEOFENCE_TOO_MANY_GEOFENCES:
            return mResources.getString(R.string.geofence_too_many_geofences);
        case GeofenceStatusCodes.GEOFENCE_TOO_MANY_PENDING_INTENTS:
            return mResources.getString(R.string.geofence_too_many_pending_intents);
        default:
            return mResources.getString(R.string.unknown_geofence_error);
    }
}

Where is the service started? From the Application class:

startService(new Intent(getApplicationContext(), GeoFenceObserversationService.class));

How are the geofences registered?

GeoFenceObserversationService.getInstant().addGeofences();

Chapter 83: Theme, Style, Attribute

Section 83.1: Define primary, primary dark, and accent colors

You can customize your theme's color palette.

Using framework APIs
Version 5.0

<style name="AppTheme" parent="Theme.Material">
    <item name="android:colorPrimary">@color/primary</item>
    <item name="android:colorPrimaryDark">@color/primary_dark</item>
    <item name="android:colorAccent">@color/accent</item>
</style>

Using the AppCompat support library (and AppCompatActivity)
Version 2.1.x

<style name="AppTheme" parent="Theme.AppCompat">
    <item name="colorPrimary">@color/primary</item>
    <item name="colorPrimaryDark">@color/primary_dark</item>
    <item name="colorAccent">@color/accent</item>
</style>

Section 83.2: Multiple Themes in one App

You can use more than one theme in your Android application and give every theme its own custom colors. First, we have to add our themes to style.xml, like this:

<style name="OneTheme" parent="Theme.AppCompat.Light.DarkActionBar">
</style>

<!--  -->

<style name="TwoTheme" parent="Theme.AppCompat.Light.DarkActionBar">
</style>
...

Above you can see OneTheme and TwoTheme.
Now, go to your AndroidManifest.xml and add this line: android:theme="@style/OneTheme" to your application tag, this will make OneTheme the default theme: <application android:theme="@style/OneTheme" ...> Create new xml le named attrs.xml and add this code : <?xml version="1.0" encoding="utf-8"?> <resources> <attr name="custom_red" format="color" /> <attr name="custom_blue" format="color" /> <attr name="custom_green" format="color" /> </resources> <!-- add all colors you need (just color's name) --> Go back to style.xml and add these colors with its values for each theme : <style name="OneTheme" parent="Theme.AppCompat.Light.DarkActionBar"> <item name="custom_red">#8b030c</item> <item name="custom_blue">#0f1b8b</item> <item name="custom_green">#1c7806</item> </style> <style name="TwoTheme" parent="Theme.AppCompat.Light.DarkActionBar" > <item name="custom_red">#ff606b</item> <item name="custom_blue">#99cfff</item> <item name="custom_green">#62e642</item> </style> Now you have custom colors for each theme, let's add these color to our views. Add custom_blue color to the TextView by using "?attr/" : Go to your imageView and add this color : <TextView> android:id="@+id/txte_view" android:textColor="?attr/custom_blue" /> Mow we can change the theme just by single line setTheme(R.style.TwoTheme); this line must be before setContentView() method in onCreate() method, like this Activity.java : @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setTheme(R.style.TwoTheme); setContentView(R.layout.main_activity); .... } GoalKicker.com Android Notes for Professionals 550 change theme for all activities at once If we want to change the theme for all activities, we have to create new class named MyActivity extends AppCompatActivity class (or Activity class) and add line setTheme(R.style.TwoTheme); to onCreate() method: public class MyActivity extends AppCompatActivity { @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); if (new MySettings(this).isDarkTheme()) setTheme(R.style.TwoTheme); } } Finally, go to all your activities add make all of them extend the MyActivity base class: public class MainActivity extends MyActivity { .... } In order to change the theme, just go to MyActivity and change R.style.TwoTheme to your theme (R.style.OneTheme , R.style.ThreeTheme ....). Section 83.3: Navigation Bar Color (API 21+) Version 5.0 This attribute is used to change the navigation bar (one, that contain Back, Home Recent button). Usually it is black, however it's color can be changed. <style name="AppTheme" parent="Theme.AppCompat"> <item name="android:navigationBarColor">@color/my_color</item> </style> Section 83.4: Use Custom Theme Per Activity In themes.xml: <style name="MyActivityTheme" parent="Theme.AppCompat"> <!-- Theme attributes here --> </style> In AndroidManifest.xml: <application android:icon="@mipmap/ic_launcher" android:label="@string/app_name" android:theme="@style/Theme.AppCompat"> <activity android:name=".MyActivity" android:theme="@style/MyActivityTheme" /> </application> GoalKicker.com Android Notes for Professionals 551 Section 83.5: Light Status Bar (API 23+) This attribute can change the background of the Status Bar icons (at the top of the screen) to white. 
<style name="AppTheme" parent="Theme.AppCompat"> <item name="android:windowLightStatusBar">true</item> </style> Section 83.6: Use Custom Theme Globally In themes.xml: <style name="AppTheme" parent="Theme.AppCompat"> <!-- Theme attributes here --> </style> In AndroidManifest.xml: <application android:icon="@mipmap/ic_launcher" android:label="@string/app_name" android:theme="@style/AppTheme"> <!-- Activity declarations here --> </application> Section 83.7: Overscroll Color (API 21+) <style name="AppTheme" parent="Theme.AppCompat"> <item name="android:colorEdgeEffect">@color/my_color</item> </style> Section 83.8: Ripple Color (API 21+) Version 5.0 The ripple animation is shown when user presses clickable views. You can use the same ripple color used by your app assigning the ?android:colorControlHighlight in your views. You can customize this color by changing the android:colorControlHighlight attribute in your theme: This eect color can be changed: <style name="AppTheme" parent="Theme.AppCompat"> <item name="android:colorControlHighlight">@color/my_color</item> </style> Or, if you are using a Material Theme: <style name="AppTheme" parent="android:Theme.Material.Light"> <item name="android:colorControlHighlight">@color/your_custom_color</item> </style> GoalKicker.com Android Notes for Professionals 552 Section 83.9: Translucent Navigation and Status Bars (API 19+) The navigation bar (at the bottom of the screen) can be transparent. Here is the way to achieve it. <style name="AppTheme" parent="Theme.AppCompat"> <item name="android:windowTranslucentNavigation">true</item> </style> The Status Bar (top of the screen) can be made transparent, by applying this attribute to the style: <style name="AppTheme" parent="Theme.AppCompat"> <item name="android:windowTranslucentStatus">true</item> </style> Section 83.10: Theme inheritance When dening themes, one usually uses the theme provided by the system, and then changes modies the look to t his own application. For example, this is how the Theme.AppCompat theme is inherited: <style name="AppTheme" parent="Theme.AppCompat"> <item name="colorPrimary">@color/colorPrimary</item> <item name="colorPrimaryDark">@color/colorPrimaryDark</item> <item name="colorAccent">@color/colorAccent</item> </style> This theme now has all the properties of the standard Theme.AppCompat theme, except the ones we explicitly changed. There is also a shortcut when inheriting, usually used when one inherits from his own theme: <style name="AppTheme.Red"> <item name="colorAccent">@color/red</item> </style> Since it already has AppTheme. in the start of it's name, it automatically inherits it, without needing to dene the parent theme. This is useful when you need to create specic styles for a part (for example, a single Activity) of your app. GoalKicker.com Android Notes for Professionals 553 Chapter 84: MediaPlayer Section 84.1: Basic creation and playing MediaPlayer class can be used to control playback of audio/video les and streams. Creation of MediaPlayer object can be of three types: 1. Media from local resource MediaPlayer mediaPlayer = MediaPlayer.create(context, R.raw.resource); mediaPlayer.start(); // no need to call prepare(); create() does that for you 2. From local URI (obtained from ContentResolver) Uri myUri = ....; // initialize Uri here MediaPlayer mediaPlayer = new MediaPlayer(); mediaPlayer.setAudioStreamType(AudioManager.STREAM_MUSIC); mediaPlayer.setDataSource(getApplicationContext(), myUri); mediaPlayer.prepare(); mediaPlayer.start(); 3. 
From external URL String url = "http://........"; // your URL here MediaPlayer mediaPlayer = new MediaPlayer(); mediaPlayer.setAudioStreamType(AudioManager.STREAM_MUSIC); mediaPlayer.setDataSource(url); mediaPlayer.prepare(); // might take long! (for buffering, etc) mediaPlayer.start(); Section 84.2: Media Player with Buer progress and play position public class SoundActivity extends Activity { private MediaPlayer mediaPlayer; ProgressBar progress_bar; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_tool_sound); mediaPlayer = new MediaPlayer(); mediaPlayer.setAudioStreamType(AudioManager.STREAM_MUSIC); progress_bar = (ProgressBar) findViewById(R.id.progress_bar); btn_play_stop.setEnabled(false); btn_play_stop.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View view) { if(mediaPlayer.isPlaying()) { mediaPlayer.pause(); btn_play_stop.setImageResource(R.drawable.ic_pause_black_24dp); } else { mediaPlayer.start(); GoalKicker.com Android Notes for Professionals 554 btn_play_stop.setImageResource(R.drawable.ic_play_arrow_black_24px); } } }); mediaPlayer.setDataSource(proxyUrl); mediaPlayer.setOnCompletionListener(new MediaPlayer.OnCompletionListener() { @Override public void onCompletion(MediaPlayer mp) { observer.stop(); progress_bar.setProgress(mp.getCurrentPosition()); // TODO Auto-generated method stub mediaPlayer.stop(); mediaPlayer.reset(); } }); mediaPlayer.setOnBufferingUpdateListener(new MediaPlayer.OnBufferingUpdateListener() { @Override public void onBufferingUpdate(MediaPlayer mp, int percent) { progress_bar.setSecondaryProgress(percent); } }); mediaPlayer.setOnPreparedListener(new MediaPlayer.OnPreparedListener() { @Override public void onPrepared(MediaPlayer mediaPlayer) { btn_play_stop.setEnabled(true); } }); observer = new MediaObserver(); mediaPlayer.prepare(); mediaPlayer.start(); new Thread(observer).start(); } private MediaObserver observer = null; private class MediaObserver implements Runnable { private AtomicBoolean stop = new AtomicBoolean(false); public void stop() { stop.set(true); } @Override public void run() { while (!stop.get()) { progress_bar.setProgress((int)((double)mediaPlayer.getCurrentPosition() / (double)mediaPlayer.getDuration()*100)); try { Thread.sleep(200); } catch (Exception ex) { Logger.log(ToolSoundActivity.this, ex); } } } } @Override protected void onDestroy() { GoalKicker.com Android Notes for Professionals 555 super.onDestroy(); mediaPlayer.stop(); } } <LinearLayout android:gravity="bottom" android:layout_gravity="bottom" android:orientation="vertical" android:layout_width="match_parent" android:layout_height="0dp" android:layout_weight="1" android:weightSum="1"> <LinearLayout android:orientation="horizontal" android:layout_width="match_parent" android:layout_height="wrap_content"> <ImageButton app:srcCompat="@drawable/ic_play_arrow_black_24px" android:layout_width="48dp" android:layout_height="48dp" android:id="@+id/btn_play_stop" /> <ProgressBar android:padding="8dp" android:progress="0" android:id="@+id/progress_bar" style="@style/Widget.AppCompat.ProgressBar.Horizontal" android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_gravity="center" /> </LinearLayout> </LinearLayout> Section 84.3: Getting system ringtones This example demonstrates how to fetch the URI's of system ringtones (RingtoneManager.TYPE_RINGTONE): private List<Uri> loadLocalRingtonesUris() { List<Uri> alarms = new ArrayList<>(); try 
{
        RingtoneManager ringtoneMgr = new RingtoneManager(getActivity());
        ringtoneMgr.setType(RingtoneManager.TYPE_RINGTONE);
        Cursor alarmsCursor = ringtoneMgr.getCursor();
        int alarmsCount = alarmsCursor.getCount();
        if (alarmsCount == 0 && !alarmsCursor.moveToFirst()) {
            alarmsCursor.close();
            return null;
        }
        while (!alarmsCursor.isAfterLast() && alarmsCursor.moveToNext()) {
            int currentPosition = alarmsCursor.getPosition();
            alarms.add(ringtoneMgr.getRingtoneUri(currentPosition));
        }
    } catch (Exception ex) {
        ex.printStackTrace();
    }
    return alarms;
}

The list depends on the types of requested ringtones. The possibilities are:

RingtoneManager.TYPE_RINGTONE
RingtoneManager.TYPE_NOTIFICATION
RingtoneManager.TYPE_ALARM
RingtoneManager.TYPE_ALL = TYPE_RINGTONE | TYPE_NOTIFICATION | TYPE_ALARM

In order to get the ringtones as android.media.Ringtone, every Uri must be resolved by the RingtoneManager:

android.media.Ringtone osRingtone = RingtoneManager.getRingtone(context, uri);

To play the sound, use the method public void setDataSource(Context context, Uri uri) from android.media.MediaPlayer. The MediaPlayer must be initialised and prepared according to the state diagram.

Section 84.4: Asynchronous prepare

MediaPlayer.prepare() is a blocking call and will freeze the UI until execution completes. To solve this problem, MediaPlayer.prepareAsync() can be used.

mMediaPlayer = ... // Initialize it here

mMediaPlayer.setOnPreparedListener(new MediaPlayer.OnPreparedListener() {
    @Override
    public void onPrepared(MediaPlayer player) {
        // Called when the MediaPlayer is ready to play
        mMediaPlayer.start();
    }
}); // Set callback for when prepareAsync() finishes

mMediaPlayer.prepareAsync(); // Prepare asynchronously to not block the Main Thread

On synchronous operations, errors would normally be signaled with an exception or an error code, but whenever you use asynchronous resources, you should make sure your application is notified of errors appropriately. For MediaPlayer:

mMediaPlayer.setOnErrorListener(new MediaPlayer.OnErrorListener() {
    @Override
    public boolean onError(MediaPlayer mp, int what, int extra) {
        // ... react appropriately ...
        // The MediaPlayer has moved to the Error state and must be reset!
        return true; // return true if the error has been handled, false otherwise
    }
});

Section 84.5: Import audio into Android Studio and play it

This example shows how to play an audio file that you already have on your PC/laptop. First, create a new directory under res and name it raw, then copy the audio file you want to play into this folder. It may be an .mp3 or a .wav file.
Now, if you want to play this sound on a button click, for example, here is how it is done:

public class MainActivity extends AppCompatActivity {

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.aboutapp_activity);

        final MediaPlayer song = MediaPlayer.create(this, R.raw.song);
        Button button = (Button) findViewById(R.id.button);
        button.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View view) {
                song.start();
            }
        });
    }
}

This will play the song only once each time the button is clicked. If you want to restart the song on every button click, declare the MediaPlayer as a field so that it can be reassigned inside the listener, and write the code like this:

public class MainActivity extends AppCompatActivity {

    private MediaPlayer song;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.aboutapp_activity);

        song = MediaPlayer.create(this, R.raw.song);
        Button button = (Button) findViewById(R.id.button);
        button.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View view) {
                if (song.isPlaying()) {
                    song.reset();
                    song = MediaPlayer.create(getApplicationContext(), R.raw.song);
                }
                song.start();
            }
        });
    }
}

Section 84.6: Getting and setting system volume

Audio stream types

There are different profiles of audio streams. Each of them has its own volume. Every example here is written for the AudioManager.STREAM_RING stream type, but this is not the only one. The available stream types are:

STREAM_ALARM
STREAM_DTMF
STREAM_MUSIC
STREAM_NOTIFICATION
STREAM_RING
STREAM_SYSTEM
STREAM_VOICE_CALL

Getting the volume

To get the volume of a specific profile, call:

AudioManager audioManager = (AudioManager) getActivity().getSystemService(Context.AUDIO_SERVICE);
int currentVolume = audioManager.getStreamVolume(AudioManager.STREAM_RING);

This value is of little use when the maximum value for the stream is not known:

AudioManager audioManager = (AudioManager) getActivity().getSystemService(Context.AUDIO_SERVICE);
int streamMaxVolume = audioManager.getStreamMaxVolume(AudioManager.STREAM_RING);

The ratio of those two values gives a relative volume (0 < volume < 1):

float volume = ((float) currentVolume) / streamMaxVolume;

Adjusting volume by one step

To make the volume for the stream higher by one step, call:

AudioManager audioManager = (AudioManager) getActivity().getSystemService(Context.AUDIO_SERVICE);
audioManager.adjustStreamVolume(AudioManager.STREAM_RING, AudioManager.ADJUST_RAISE, 0);

To make the volume for the stream lower by one step, call:

AudioManager audioManager = (AudioManager) getActivity().getSystemService(Context.AUDIO_SERVICE);
audioManager.adjustStreamVolume(AudioManager.STREAM_RING, AudioManager.ADJUST_LOWER, 0);

Setting MediaPlayer to use a specific stream type

There is a helper function in the MediaPlayer class to do this. Just call void setAudioStreamType(int streamtype):

MediaPlayer mMedia = new MediaPlayer();
mMedia.setAudioStreamType(AudioManager.STREAM_RING);

Chapter 85: Android Sound and Media

Section 85.1: How to pick image and video for api >19

Here is tested code for picking an image or a video. It works on all API levels, both below 19 and 19 and above.
Image: if (Build.VERSION.SDK_INT <= 19) { Intent i = new Intent(); i.setType("image/*"); i.setAction(Intent.ACTION_GET_CONTENT); i.addCategory(Intent.CATEGORY_OPENABLE); startActivityForResult(i, 10); } else if (Build.VERSION.SDK_INT > 19) { Intent intent = new Intent(Intent.ACTION_PICK, android.provider.MediaStore.Images.Media.EXTERNAL_CONTENT_URI); startActivityForResult(intent, 10); } Video: if (Build.VERSION.SDK_INT <= 19) { Intent i = new Intent(); i.setType("video/*"); i.setAction(Intent.ACTION_GET_CONTENT); i.addCategory(Intent.CATEGORY_OPENABLE); startActivityForResult(i, 20); } else if (Build.VERSION.SDK_INT > 19) { Intent intent = new Intent(Intent.ACTION_PICK, android.provider.MediaStore.Video.Media.EXTERNAL_CONTENT_URI); startActivityForResult(intent, 20); } . @Override public void onActivityResult(int requestCode, int resultCode, Intent data) { super.onActivityResult(requestCode, resultCode, data); if (resultCode == Activity.RESULT_OK) { if (requestCode == 10) { Uri selectedImageUri = data.getData(); String selectedImagePath = getRealPathFromURI(selectedImageUri); } else if (requestCode == 20) { Uri selectedVideoUri = data.getData(); String selectedVideoPath = getRealPathFromURI(selectedVideoUri); } public String getRealPathFromURI(Uri uri) { if (uri == null) { return null; } String[] projection = {MediaStore.Images.Media.DATA}; Cursor cursor = getActivity().getContentResolver().query(uri, projection, null, null, null); if (cursor != null) { int column_index = cursor .getColumnIndexOrThrow(MediaStore.Images.Media.DATA); GoalKicker.com Android Notes for Professionals 561 cursor.moveToFirst(); return cursor.getString(column_index); } return uri.getPath(); } Section 85.2: Play sounds via SoundPool public class PlaySound extends Activity implements OnTouchListener { private SoundPool soundPool; private int soundID; boolean loaded = false; /** Called when the activity is first created. */ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); View view = findViewById(R.id.textView1); view.setOnTouchListener(this); // Set the hardware buttons to control the music this.setVolumeControlStream(AudioManager.STREAM_MUSIC); // Load the sound soundPool = new SoundPool(10, AudioManager.STREAM_MUSIC, 0); soundPool.setOnLoadCompleteListener(new OnLoadCompleteListener() { @Override public void onLoadComplete(SoundPool soundPool, int sampleId, int status) { loaded = true; } }); soundID = soundPool.load(this, R.raw.sound1, 1); } @Override public boolean onTouch(View v, MotionEvent event) { if (event.getAction() == MotionEvent.ACTION_DOWN) { // Getting the user sound settings AudioManager audioManager = (AudioManager) getSystemService(AUDIO_SERVICE); float actualVolume = (float) audioManager .getStreamVolume(AudioManager.STREAM_MUSIC); float maxVolume = (float) audioManager .getStreamMaxVolume(AudioManager.STREAM_MUSIC); float volume = actualVolume / maxVolume; // Is the sound loaded already? if (loaded) { soundPool.play(soundID, volume, volume, 1, 0, 1f); Log.e("Test", "Played sound"); } } return false; } } GoalKicker.com Android Notes for Professionals 562 Chapter 86: MediaSession Section 86.1: Receiving and handling button events This example creates a MediaSession object when a Service is started. 
The MediaSession object is released when the Service gets destroyed: public final class MyService extends Service { private static MediaSession s_mediaSession; @Override public void onCreate() { // Instantiate new MediaSession object. configureMediaSession(); } @Override public void onDestroy() { if (s_mediaSession != null) s_mediaSession.release(); } } The following method instantiates and congures the MediaSession button callbacks: private void configureMediaSession { s_mediaSession = new MediaSession(this, "MyMediaSession"); // Overridden methods in the MediaSession.Callback class. s_mediaSession.setCallback(new MediaSession.Callback() { @Override public boolean onMediaButtonEvent(Intent mediaButtonIntent) { Log.d(TAG, "onMediaButtonEvent called: " + mediaButtonIntent); KeyEvent ke = mediaButtonIntent.getParcelableExtra(Intent.EXTRA_KEY_EVENT); if (ke != null && ke.getAction() == KeyEvent.ACTION_DOWN) { int keyCode = ke.getKeyCode(); Log.d(TAG, "onMediaButtonEvent Received command: " + ke); } return super.onMediaButtonEvent(mediaButtonIntent); } @Override public void onSkipToNext() { Log.d(TAG, "onSkipToNext called (media button pressed)"); Toast.makeText(getApplicationContext(), "onSkipToNext called", Toast.LENGTH_SHORT).show(); skipToNextPlaylistItem(); // Handle this button press. super.onSkipToNext(); } @Override public void onSkipToPrevious() { Log.d(TAG, "onSkipToPrevious called (media button pressed)"); Toast.makeText(getApplicationContext(), "onSkipToPrevious called", Toast.LENGTH_SHORT).show(); skipToPreviousPlaylistItem(); // Handle this button press. super.onSkipToPrevious(); } GoalKicker.com Android Notes for Professionals 563 @Override public void onPause() { Log.d(TAG, "onPause called (media button pressed)"); Toast.makeText(getApplicationContext(), "onPause called", Toast.LENGTH_SHORT).show(); mpPause(); // Pause the player. super.onPause(); } @Override public void onPlay() { Log.d(TAG, "onPlay called (media button pressed)"); mpStart(); // Start player/playback. super.onPlay(); } @Override public void onStop() { Log.d(TAG, "onStop called (media button pressed)"); mpReset(); // Stop and/or reset the player. super.onStop(); } }); s_mediaSession.setFlags(MediaSession.FLAG_HANDLES_MEDIA_BUTTONS | MediaSession.FLAG_HANDLES_TRANSPORT_CONTROLS); s_mediaSession.setActive(true); } The following method sends meta data (stored in a HashMap) to the device using A2DP: void sendMetaData(@NonNull final HashMap<String, String> hm) { // Return if Bluetooth A2DP is not in use. if (!((AudioManager) getSystemService(Context.AUDIO_SERVICE)).isBluetoothA2dpOn()) return; MediaMetadata metadata = new MediaMetadata.Builder() .putString(MediaMetadata.METADATA_KEY_TITLE, hm.get("Title")) .putString(MediaMetadata.METADATA_KEY_ALBUM, hm.get("Album")) .putString(MediaMetadata.METADATA_KEY_ARTIST, hm.get("Artist")) .putString(MediaMetadata.METADATA_KEY_AUTHOR, hm.get("Author")) .putString(MediaMetadata.METADATA_KEY_COMPOSER, hm.get("Composer")) .putString(MediaMetadata.METADATA_KEY_WRITER, hm.get("Writer")) .putString(MediaMetadata.METADATA_KEY_DATE, hm.get("Date")) .putString(MediaMetadata.METADATA_KEY_GENRE, hm.get("Genre")) .putLong(MediaMetadata.METADATA_KEY_YEAR, tryParse(hm.get("Year"))) .putLong(MediaMetadata.METADATA_KEY_DURATION, tryParse(hm.get("Raw Duration"))) .putLong(MediaMetadata.METADATA_KEY_TRACK_NUMBER, tryParse(hm.get("Track Number"))) .build(); s_mediaSession.setMetadata(metadata); } The following method sets the PlaybackState. 
It also sets which button actions the MediaSession will respond to: private void setPlaybackState(@NonNull final int stateValue) { PlaybackState state = new PlaybackState.Builder() .setActions(PlaybackState.ACTION_PLAY | PlaybackState.ACTION_SKIP_TO_NEXT | PlaybackState.ACTION_PAUSE | PlaybackState.ACTION_SKIP_TO_PREVIOUS | PlaybackState.ACTION_STOP | PlaybackState.ACTION_PLAY_PAUSE) .setState(stateValue, PlaybackState.PLAYBACK_POSITION_UNKNOWN, 0) .build(); GoalKicker.com Android Notes for Professionals 564 s_mediaSession.setPlaybackState(state); } GoalKicker.com Android Notes for Professionals 565 Chapter 87: MediaStore Section 87.1: Fetch Audio/MP3 les from specic folder of device or fetch all les First, add the following permissions to the manifest of your project in order to enable device storage access: <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" /> <uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" /> Then, create the le AudioModel.class and put the following model class into it in order to allow getting and setting list items: public class AudioModel { String aPath; String aName; String aAlbum; String aArtist; public String getaPath() { return aPath; } public void setaPath(String aPath) { this.aPath = aPath; } public String getaName() { return aName; } public void setaName(String aName) { this.aName = aName; } public String getaAlbum() { return aAlbum; } public void setaAlbum(String aAlbum) { this.aAlbum = aAlbum; } public String getaArtist() { return aArtist; } public void setaArtist(String aArtist) { this.aArtist = aArtist; } } Next, use the following method to read all MP3 les from a folder of your device or to read all les of your device: public List<AudioModel> getAllAudioFromDevice(final Context context) { final List<AudioModel> tempAudioList = new ArrayList<>(); Uri uri = MediaStore.Audio.Media.EXTERNAL_CONTENT_URI; String[] projection = {MediaStore.Audio.AudioColumns.DATA, MediaStore.Audio.AudioColumns.TITLE, MediaStore.Audio.AudioColumns.ALBUM, MediaStore.Audio.ArtistColumns.ARTIST,}; Cursor c = context.getContentResolver().query(uri, projection, MediaStore.Audio.Media.DATA + " like ? ", new String[]{"%utm%"}, null); if (c != null) { while (c.moveToNext()) { GoalKicker.com Android Notes for Professionals 566 AudioModel audioModel = new AudioModel(); String path = c.getString(0); String name = c.getString(1); String album = c.getString(2); String artist = c.getString(3); audioModel.setaName(name); audioModel.setaAlbum(album); audioModel.setaArtist(artist); audioModel.setaPath(path); Log.e("Name :" + name, " Album :" + album); Log.e("Path :" + path, " Artist :" + artist); tempAudioList.add(audioModel); } c.close(); } return tempAudioList; } The code above will return a list of all MP3 les with the music's name, path, artist, and album. For more details please refer to the Media.Store.Audio documentation. In order to read les of a specic folder, use the following query (you need to replace the folder name): Cursor c = context.getContentResolver().query(uri, projection, MediaStore.Audio.Media.DATA + " like ? ", new String[]{"%yourFolderName%"}, // Put your device folder / file location here. null); If you want to retrieve all les from your device, then use the following query: Cursor c = context.getContentResolver().query(uri, projection, null, null, null); Note: Don't forget to enable storage access permissions. 
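If you only want actual music tracks, excluding short notification or ringtone sounds, a common variation is to filter on the IS_MUSIC column instead of the file path. A minimal sketch of such a query, reusing the same uri and projection as above:

Cursor c = context.getContentResolver().query(uri,
        projection,
        MediaStore.Audio.Media.IS_MUSIC + " != 0", // only rows flagged as music
        null,
        null);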
Now, all you have to do is to call the method above in order to get the MP3 les: getAllAudioFromDevice(this); Example with Activity public class ReadAudioFilesActivity extends AppCompatActivity { @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_audio_list); /** * This will return a list of all MP3 files. Use the list to display data. */ getAllAudioFromDevice(this); } GoalKicker.com Android Notes for Professionals 567 // Method to read all the audio/MP3 files. public List<AudioModel> getAllAudioFromDevice(final Context context) { final List<AudioModel> tempAudioList = new ArrayList<>(); Uri uri = MediaStore.Audio.Media.EXTERNAL_CONTENT_URI; String[] projection = {MediaStore.Audio.AudioColumns.DATA,MediaStore.Audio.AudioColumns.TITLE ,MediaStore.Audio.AudioColumns.ALBUM, MediaStore.Audio.ArtistColumns.ARTIST,}; Cursor c = context.getContentResolver().query(uri, projection, MediaStore.Audio.Media.DATA + " like ? ", new String[]{"%utm%"}, null); if (c != null) { while (c.moveToNext()) { // Create a model object. AudioModel audioModel = new AudioModel(); String path = c.getString(0); // Retrieve path. String name = c.getString(1); // Retrieve name. String album = c.getString(2); // Retrieve album name. String artist = c.getString(3); // Retrieve artist name. // Set data to the model object. audioModel.setaName(name); audioModel.setaAlbum(album); audioModel.setaArtist(artist); audioModel.setaPath(path); Log.e("Name :" + name, " Album :" + album); Log.e("Path :" + path, " Artist :" + artist); // Add the model object to the list . tempAudioList.add(audioModel); } c.close(); } // Return the list. return tempAudioList; } } GoalKicker.com Android Notes for Professionals 568 Chapter 88: Multidex and the Dex Method Limit DEX means Android app's (APK) executable bytecode les in the form of Dalvik Executable (DEX) les, which contain the compiled code used to run your app. The Dalvik Executable specication limits the total number of methods that can be referenced within a single DEX le to 65,536 (64K)including Android framework methods, library methods, and methods in your own code. To overcome this limit requires congure your app build process to generate more than one DEX le, known as a Multidex. Section 88.1: Enabling Multidex In order to enable a multidex conguration you need: to change your Gradle build conguration to use a MultiDexApplication or enable the MultiDex in your Application class Gradle conguration In app/build.gradle add these parts: android { compileSdkVersion 24 buildToolsVersion "24.0.1" defaultConfig { ... minSdkVersion 14 targetSdkVersion 24 ... // Enabling multidex support. multiDexEnabled true } ... } dependencies { compile 'com.android.support:multidex:1.0.1' } Enable MultiDex in your Application Then proceed with one of three options: Multidex by extending Application Multidex by extending MultiDexApplication Multidex by using MultiDexApplication directly When these conguration settings are added to an app, the Android build tools construct a primary dex (classes.dex) and supporting (classes2.dex, classes3.dex) as needed. The build system will then package them into an APK le for distribution. GoalKicker.com Android Notes for Professionals 569 Section 88.2: Multidex by extending Application Use this option if your project requires an Application subclass. Specify this Application subclass using the android:name property in the manifest le inside the application tag. 
In the Application subclass, add the attachBaseContext() method override, and in that method call MultiDex.install(): package com.example; import android.app.Application; import android.content.Context; /** * Extended application that support multidex */ public class MyApplication extends Application { @Override protected void attachBaseContext(Context base) { super.attachBaseContext(base); MultiDex.install(this); } } Ensure that the Application subclass is specied in the application tag of your AndroidManifest.xml: <application android:name="com.example.MyApplication" android:icon="@drawable/ic_launcher" android:label="@string/app_name"> </application> Section 88.3: Multidex by extending MultiDexApplication This is very similar to using an Application subclass and overriding the attachBaseContext() method. However, using this method, you don't need to override attachBaseContext() as this is already done in the MultiDexApplication superclass. Extend MultiDexApplication instead of Application: package com.example; import android.support.multidex.MultiDexApplication; import android.content.Context; /** * Extended MultiDexApplication */ public class MyApplication extends MultiDexApplication { // No need to override attachBaseContext() //.......... } GoalKicker.com Android Notes for Professionals 570 Add this class to your AndroidManifest.xml exactly as if you were extending Application: <application android:name="com.example.MyApplication" android:icon="@drawable/ic_launcher" android:label="@string/app_name"> </application> Section 88.4: Multidex by using MultiDexApplication directly Use this option if you don't need an Application subclass. This is the simplest option, but this way you can't provide your own Application subclass. If an Application subclass is needed, you will have to switch to one of the other options to do so. For this option, simply specify the fully-qualied class name android.support.multidex.MultiDexApplication for the android:name property of the application tag in the AndroidManifest.xml: <?xml version="1.0" encoding="utf-8"?> <manifest xmlns:android="http://schemas.android.com/apk/res/android" package="com.example.android.multidex.myapplication"> <application ... android:name="android.support.multidex.MultiDexApplication"> ... </application> </manifest> Section 88.5: Counting Method References On Every Build (Dexcount Gradle Plugin) The dexcount plugin counts methods and class resource count after a successful build. Add the plugin in the app/build.gradle: apply plugin: 'com.android.application' buildscript { repositories { mavenCentral() // or jcenter() } dependencies { classpath 'com.getkeepsafe.dexcount:dexcount-gradle-plugin:0.5.5' } } Apply the plugin in the app/build.gradle le: apply plugin: 'com.getkeepsafe.dexcount' Look for the output data generated by the plugin in: ../app/build/outputs/dexcount Especially useful is the .html chart in: GoalKicker.com Android Notes for Professionals 571 ../app/build/outputs/dexcount/debugChart/index.html GoalKicker.com Android Notes for Professionals 572 Chapter 89: Data Synchronization with Sync Adapter Section 89.1: Dummy Sync Adapter with Stub Provider SyncAdapter /** * Define a sync adapter for the app. * <p/> * <p>This class is instantiated in {@link SyncService}, which also binds SyncAdapter to the system. * SyncAdapter should only be initialized in SyncService, never anywhere else. * <p/> * <p>The system calls onPerformSync() via an RPC call through the IBinder object supplied by * SyncService. 
*/ class SyncAdapter extends AbstractThreadedSyncAdapter { /** * Constructor. Obtains handle to content resolver for later use. */ public SyncAdapter(Context context, boolean autoInitialize) { super(context, autoInitialize); } /** * Constructor. Obtains handle to content resolver for later use. */ public SyncAdapter(Context context, boolean autoInitialize, boolean allowParallelSyncs) { super(context, autoInitialize, allowParallelSyncs); } @Override public void onPerformSync(Account account, Bundle extras, String authority, ContentProviderClient provider, SyncResult syncResult) { //Jobs you want to perform in background. Log.e("" + account.name, "Sync Start"); } } Sync Service /** * Define a Service that returns an IBinder for the * sync adapter class, allowing the sync adapter framework to call * onPerformSync(). */ public class SyncService extends Service { // Storage for an instance of the sync adapter private static SyncAdapter sSyncAdapter = null; // Object to use as a thread-safe lock private static final Object sSyncAdapterLock = new Object(); /* * Instantiate the sync adapter object. */ @Override public void onCreate() { /* * Create the sync adapter as a singleton. * Set the sync adapter as syncable GoalKicker.com Android Notes for Professionals 573 * Disallow parallel syncs */ synchronized (sSyncAdapterLock) { if (sSyncAdapter == null) { sSyncAdapter = new SyncAdapter(getApplicationContext(), true); } } } /** * Return an object that allows the system to invoke * the sync adapter. */ @Override public IBinder onBind(Intent intent) { /* * Get the object that allows external processes * to call onPerformSync(). The object is created * in the base class code when the SyncAdapter * constructors call super() */ return sSyncAdapter.getSyncAdapterBinder(); } } Authenticator public class Authenticator extends AbstractAccountAuthenticator { // Simple constructor public Authenticator(Context context) { super(context); } // Editing properties is not supported @Override public Bundle editProperties( AccountAuthenticatorResponse r, String s) { throw new UnsupportedOperationException(); } // Don't add additional accounts @Override public Bundle addAccount( AccountAuthenticatorResponse r, String s, String s2, String[] strings, Bundle bundle) throws NetworkErrorException { return null; } // Ignore attempts to confirm credentials @Override public Bundle confirmCredentials( AccountAuthenticatorResponse r, Account account, Bundle bundle) throws NetworkErrorException { return null; } // Getting an authentication token is not supported @Override public Bundle getAuthToken( GoalKicker.com Android Notes for Professionals 574 AccountAuthenticatorResponse r, Account account, String s, Bundle bundle) throws NetworkErrorException { throw new UnsupportedOperationException(); } // Getting a label for the auth token is not supported @Override public String getAuthTokenLabel(String s) { throw new UnsupportedOperationException(); } // Updating user credentials is not supported @Override public Bundle updateCredentials( AccountAuthenticatorResponse r, Account account, String s, Bundle bundle) throws NetworkErrorException { throw new UnsupportedOperationException(); } // Checking features for the account is not supported @Override public Bundle hasFeatures( AccountAuthenticatorResponse r, Account account, String[] strings) throws NetworkErrorException { throw new UnsupportedOperationException(); } } Authenticator Service /** * A bound Service that instantiates the authenticator * when started. 
*/ public class AuthenticatorService extends Service { // Instance field that stores the authenticator object private Authenticator mAuthenticator; @Override public void onCreate() { // Create a new authenticator object mAuthenticator = new Authenticator(this); } /* * When the system binds to this Service to make the RPC call * return the authenticator's IBinder. */ @Override public IBinder onBind(Intent intent) { return mAuthenticator.getIBinder(); } } AndroidManifest.xml additions <uses-permission android:name="android.permission.GET_ACCOUNTS" /> <uses-permission android:name="android.permission.READ_SYNC_SETTINGS" /> <uses-permission android:name="android.permission.WRITE_SYNC_SETTINGS" /> <uses-permission android:name="android.permission.AUTHENTICATE_ACCOUNTS" /> <service android:name=".syncAdapter.SyncService" GoalKicker.com Android Notes for Professionals 575 android:exported="true"> <intent-filter> <action android:name="android.content.SyncAdapter" /> </intent-filter> <meta-data android:name="android.content.SyncAdapter" android:resource="@xml/syncadapter" /> </service> <service android:name=".authenticator.AuthenticatorService"> <intent-filter> <action android:name="android.accounts.AccountAuthenticator" /> </intent-filter> <meta-data android:name="android.accounts.AccountAuthenticator" android:resource="@xml/authenticator" /> </service> <provider android:name=".provider.StubProvider" android:authorities="com.yourpackage.provider" android:exported="false" android:syncable="true" /> res/xml/authenticator.xml <?xml version="1.0" encoding="utf-8"?> <account-authenticator xmlns:android="http://schemas.android.com/apk/res/android" android:accountType="com.yourpackage" android:icon="@mipmap/ic_launcher" android:label="@string/app_name" android:smallIcon="@mipmap/ic_launcher" /> res/xml/syncadapter.xml <?xml version="1.0" encoding="utf-8"?> <sync-adapter xmlns:android="http://schemas.android.com/apk/res/android" android:accountType="com.yourpackage.android" android:allowParallelSyncs="false" android:contentAuthority="com.yourpackage.provider" android:isAlwaysSyncable="true" android:supportsUploading="false" android:userVisible="false" /> StubProvider /* * Define an implementation of ContentProvider that stubs out * all methods */ public class StubProvider extends ContentProvider { /* * Always return true, indicating that the * provider loaded correctly. 
*/ @Override public boolean onCreate() { return true; } /* * Return no type for MIME type */ @Override public String getType(Uri uri) { return null; GoalKicker.com Android Notes for Professionals 576 } /* * query() always returns no results * */ @Override public Cursor query( Uri uri, String[] projection, String selection, String[] selectionArgs, String sortOrder) { return null; } /* * insert() always returns null (no URI) */ @Override public Uri insert(Uri uri, ContentValues values) { return null; } /* * delete() always returns "no rows affected" (0) */ @Override public int delete(Uri uri, String selection, String[] selectionArgs) { return 0; } /* * update() always returns "no rows affected" (0) */ public int update( Uri uri, ContentValues values, String selection, String[] selectionArgs) { return 0; } } Call this function on successful login to create an account with the logged-in user ID public Account CreateSyncAccount(Context context, String accountName) { // Create the account type and default account Account newAccount = new Account( accountName, "com.yourpackage"); // Get an instance of the Android account manager AccountManager accountManager = (AccountManager) context.getSystemService( ACCOUNT_SERVICE); /* * Add the account and account type, no password or user data * If successful, return the Account object, otherwise report an error. */ if (accountManager.addAccountExplicitly(newAccount, null, null)) { /* * If you don't set android:syncable="true" in * in your <provider> element in the manifest, * then call context.setIsSyncable(account, AUTHORITY, 1) GoalKicker.com Android Notes for Professionals 577 * here. */ } else { /* * The account exists or some other error occurred. Log this, report it, * or handle it internally. */ } return newAccount; } Forcing a Sync Bundle bundle = new Bundle(); bundle.putBoolean(ContentResolver.SYNC_EXTRAS_EXPEDITED, true); bundle.putBoolean(ContentResolver.SYNC_EXTRAS_FORCE, true); bundle.putBoolean(ContentResolver.SYNC_EXTRAS_MANUAL, true); ContentResolver.requestSync(null, MyContentProvider.getAuthority(), bundle); GoalKicker.com Android Notes for Professionals 578 Chapter 90: PorterDu Mode PorterDu is described as a way of combining images as if they were "irregular shaped pieces of cardboard" overlayed on each other, as well as a scheme for blending the overlapping parts Section 90.1: Creating a PorterDu ColorFilter PorterDuff.Mode is used to create a PorterDuffColorFilter. A color lter modies the color of each pixel of a visual resource. ColorFilter filter = new PorterDuffColorFilter(Color.BLUE, PorterDuff.Mode.SRC_IN); The above lter will tint the non-transparent pixels to blue color. The color lter can be applied to a Drawable: drawable.setColorFilter(filter); It can be applied to an ImageView: imageView.setColorFilter(filter); Also, it can be applied to a Paint, so that the color that is drawn using that paint, is modied by the lter: paint.setColorFilter(filter); Section 90.2: Creating a PorterDu XferMode An Xfermode (think "transfer" mode) works as a transfer step in drawing pipeline. When an Xfermode is applied to a Paint, the pixels drawn with the paint are combined with underlying pixels (already drawn) as per the mode: paint.setColor(Color.BLUE); paint.setXfermode(new PorterDuffXfermode(PorterDuff.Mode.SRC_IN)); Now we have a blue tint paint. Any shape drawn will tint the already existing, non-transparent pixels blue in the area of the shape. 
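A typical way to use such a paint is to draw into an offscreen bitmap, so that the transfer mode combines exactly the two images you intend. A minimal sketch, where width, height, destinationBitmap, and sourceBitmap are placeholders for your own dimensions and images:

Bitmap result = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
Canvas canvas = new Canvas(result);
Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);

canvas.drawBitmap(destinationBitmap, 0, 0, paint); // DST: drawn first
paint.setXfermode(new PorterDuffXfermode(PorterDuff.Mode.SRC_IN));
canvas.drawBitmap(sourceBitmap, 0, 0, paint);      // SRC: kept only where DST is opaque
paint.setXfermode(null);                           // reset the mode afterwards

This keeps the source image only where the destination already has non-transparent pixels, which is the usual building block for masking one image with the shape of another.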
Section 90.3: Apply a radial mask (vignette) to a bitmap using PorterDuXfermode /** * Apply a radial mask (vignette, i.e. fading to black at the borders) to a bitmap * @param imageToApplyMaskTo Bitmap to modify */ public static void radialMask(final Bitmap imageToApplyMaskTo) { Canvas canvas = new Canvas(imageToApplyMaskTo); final float centerX = imageToApplyMaskTo.getWidth() * 0.5f; final float centerY = imageToApplyMaskTo.getHeight() * 0.5f; final float radius = imageToApplyMaskTo.getHeight() * 0.7f; RadialGradient gradient = new RadialGradient(centerX, centerY, radius, 0x00000000, 0xFF000000, android.graphics.Shader.TileMode.CLAMP); GoalKicker.com Android Notes for Professionals 579 Paint p = new Paint(); p.setShader(gradient); p.setColor(0xFF000000); p.setXfermode(new PorterDuffXfermode(PorterDuff.Mode.DST_OUT)); canvas.drawRect(0, 0, imageToApplyMaskTo.getWidth(), imageToApplyMaskTo.getHeight(), p); } GoalKicker.com Android Notes for Professionals 580 Chapter 91: Menu inflate(int menuRes, Menu menu) Parameter Description Inate a menu hierarchy from the specied XML resource. getMenuInflater () Returns a MenuInflater with this context. onCreateOptionsMenu (Menu menu) Initialize the contents of the Activity's standard options menu. You should place your menu items in to menu. onOptionsItemSelected (MenuItem item) This method is called whenever an item in your options menu is selected Section 91.1: Options menu with dividers In Android there is a default options menu, which can take a number of options. If a larger number of options needs to be displayed, then it makes sense to group those options in order to maintain clarity. Options can be grouped by putting dividers (i.e. horizontal lines) between them. In order to allow for dividers, the following theme can be used: <style name="AppTheme" parent="Theme.AppCompat.Light.DarkActionBar"> <!-- Customize your theme here. --> <item name="colorPrimary">@color/colorPrimary</item> <item name="colorPrimaryDark">@color/colorPrimaryDark</item> <item name="colorAccent">@color/colorAccent</item> <item name="android:dropDownListViewStyle">@style/PopupMenuListView</item> </style> <style name="PopupMenuListView" parent="@style/Widget.AppCompat.ListView.DropDown"> <item name="android:divider">@color/black</item> <item name="android:dividerHeight">1dp</item> </style> By changing the theme, dividers can be added to a menu. Section 91.2: Apply custom font to Menu public static void applyFontToMenu(Menu m, Context mContext){ for(int i=0;i<m.size();i++) { applyFontToMenuItem(m.getItem(i),mContext); } } public static void applyFontToMenuItem(MenuItem mi, Context mContext) { if(mi.hasSubMenu()) for(int i=0;i<mi.getSubMenu().size();i++) { applyFontToMenuItem(mi.getSubMenu().getItem(i),mContext); } Typeface font = Typeface.createFromAsset(mContext.getAssets(), "fonts/yourCustomFont.ttf"); SpannableString mNewTitle = new SpannableString(mi.getTitle()); mNewTitle.setSpan(new CustomTypefaceSpan("", font, mContext), 0, mNewTitle.length(), Spannable.SPAN_INCLUSIVE_INCLUSIVE); mi.setTitle(mNewTitle); } and then in the Activity: @Override public boolean onCreateOptionsMenu(Menu menu) { getMenuInflater().inflate(R.menu.main, menu); applyFontToMenu(menu,this); GoalKicker.com Android Notes for Professionals 581 return true; } Section 91.3: Creating a Menu in an Activity To dene your own menu, create an XML le inside your project's res/menu/ directory and build the menu with the following elements: <menu> : Denes a Menu, which holds all the menu items. 
<item> : Creates a MenuItem, which represents a single item in a menu. We can also create a nested <menu> element in order to create a submenu.

Step 1: Create your own XML file as the following, in res/menu/main_menu.xml:

<?xml version="1.0" encoding="utf-8"?>
<menu xmlns:android="http://schemas.android.com/apk/res/android">
    <item android:id="@+id/aboutMenu"
          android:title="About" />
    <item android:id="@+id/helpMenu"
          android:title="Help" />
    <item android:id="@+id/signOutMenu"
          android:title="Sign Out" />
</menu>

Step 2: To specify the options menu, override onCreateOptionsMenu() in your activity. In this method, you can inflate your menu resource (defined in your XML file, i.e. res/menu/main_menu.xml):

@Override
public boolean onCreateOptionsMenu(Menu menu) {
    MenuInflater inflater = getMenuInflater();
    inflater.inflate(R.menu.main_menu, menu);
    return true;
}

When the user selects an item from the options menu, the system calls your activity's overridden onOptionsItemSelected() method. This method passes the MenuItem selected. You can identify the item by calling getItemId(), which returns the unique ID for the menu item (defined by the android:id attribute in the menu resource, res/menu/main_menu.xml):

@Override
public boolean onOptionsItemSelected(MenuItem item) {
    switch (item.getItemId()) {
        case R.id.aboutMenu:
            Log.d(TAG, "Clicked on About!");
            // Code for About goes here
            return true;
        case R.id.helpMenu:
            Log.d(TAG, "Clicked on Help!");
            // Code for Help goes here
            return true;
        case R.id.signOutMenu:
            Log.d(TAG, "Clicked on Sign Out!");
            // SignOut method call goes here
            return true;
        default:
            return super.onOptionsItemSelected(item);
    }
}

Wrapping up! Your Activity code should look like below:

public class MainActivity extends AppCompatActivity {

    private static final String TAG = "mytag";

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
    }

    @Override
    public boolean onCreateOptionsMenu(Menu menu) {
        MenuInflater inflater = getMenuInflater();
        inflater.inflate(R.menu.main_menu, menu);
        return true;
    }

    @Override
    public boolean onOptionsItemSelected(MenuItem item) {
        switch (item.getItemId()) {
            case R.id.aboutMenu:
                Log.d(TAG, "Clicked on About!");
                // Code for About goes here
                return true;
            case R.id.helpMenu:
                Log.d(TAG, "Clicked on Help!");
                // Code for Help goes here
                return true;
            case R.id.signOutMenu:
                Log.d(TAG, "User signed out");
                // SignOut method call goes here
                return true;
            default:
                return super.onOptionsItemSelected(item);
        }
    }
}

[Screenshot of how your own menu looks]

Chapter 92: Picasso

Picasso is an image library for Android. It's created and maintained by Square. It simplifies the process of displaying images from external locations. The library handles every stage of the process, from the initial HTTP request to the caching of the image. In many cases, only a few lines of code are required to implement this neat library.
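As a taste of how little code is involved, the canonical call — using the Picasso 2.x with() API that the rest of this chapter uses, with made-up context, URL and imageView names — is a single chained statement:

// Hypothetical context, URL and ImageView; Picasso fetches, decodes,
// caches and sets the image asynchronously.
Picasso.with(context)
       .load("https://example.com/image.png")
       .into(imageView);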
dependencies { compile "com.squareup.picasso:picasso:2.5.2" } Maven: <dependency> <groupId>com.squareup.picasso</groupId> <artifactId>picasso</artifactId> <version>2.5.2</version> </dependency> Section 92.2: Circular Avatars with Picasso Here is an example Picasso Circle Transform class based on the original, with the addition of a thin border, and also includes functionality for an optional separator for stacking: import android.graphics.Bitmap; import android.graphics.BitmapShader; import android.graphics.Canvas; import android.graphics.Color; import android.graphics.Paint; import android.graphics.Paint.Style; import com.squareup.picasso.Transformation; public class CircleTransform implements Transformation { boolean mCircleSeparator = false; public CircleTransform(){ } public CircleTransform(boolean circleSeparator){ mCircleSeparator = circleSeparator; } @Override public Bitmap transform(Bitmap source) { int size = Math.min(source.getWidth(), source.getHeight()); int x = (source.getWidth() - size) / 2; int y = (source.getHeight() - size) / 2; Bitmap squaredBitmap = Bitmap.createBitmap(source, x, y, size, size); if (squaredBitmap != source) { source.recycle(); GoalKicker.com Android Notes for Professionals 585 } Bitmap bitmap = Bitmap.createBitmap(size, size, source.getConfig()); Canvas canvas = new Canvas(bitmap); BitmapShader shader = new BitmapShader(squaredBitmap, BitmapShader.TileMode.CLAMP, BitmapShader.TileMode.CLAMP); Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG | Paint.DITHER_FLAG | Paint.FILTER_BITMAP_FLAG); paint.setShader(shader); float r = size/2f; canvas.drawCircle(r, r, r-1, paint); // Make the thin border: Paint paintBorder = new Paint(); paintBorder.setStyle(Style.STROKE); paintBorder.setColor(Color.argb(84,0,0,0)); paintBorder.setAntiAlias(true); paintBorder.setStrokeWidth(1); canvas.drawCircle(r, r, r-1, paintBorder); // Optional separator for stacking: if (mCircleSeparator) { Paint paintBorderSeparator = new Paint(); paintBorderSeparator.setStyle(Style.STROKE); paintBorderSeparator.setColor(Color.parseColor("#ffffff")); paintBorderSeparator.setAntiAlias(true); paintBorderSeparator.setStrokeWidth(4); canvas.drawCircle(r, r, r+1, paintBorderSeparator); } squaredBitmap.recycle(); return bitmap; } @Override public String key() { return "circle"; } } Here is how to use it when loading an image (assuming this is an Activity Context, and url is a String with the url of the image to load): ImageView ivAvatar = (ImageView) itemView.findViewById(R.id.avatar); Picasso.with(this).load(url) .fit() .transform(new CircleTransform()) .into(ivAvatar); Result: GoalKicker.com Android Notes for Professionals 586 For use with the separator, give true to the constructor for the top image: ImageView ivAvatar = (ImageView) itemView.findViewById(R.id.avatar); Picasso.with(this).load(url) .fit() .transform(new CircleTransform(true)) .into(ivAvatar); Result (two ImageViews in a FrameLayout): Section 92.3: Placeholder and Error Handling Picasso supports both download and error placeholders as optional features. Its also provides callbacks for handling the download result. 
Picasso.with(context)
    .load("YOUR IMAGE URL HERE")
    .placeholder(yourPlaceholderDrawable) // optional: the image to display while the URL image is downloading
    .error(yourErrorDrawable)             // optional: the image to display if an error occurred while downloading
    .into(imageView, new Callback() {
        @Override
        public void onSuccess() {
        }

        @Override
        public void onError() {
        }
    });

A request will be retried three times before the error placeholder is shown.

Section 92.4: Re-sizing and Rotating

Picasso.with(context)
    .load("YOUR IMAGE URL HERE")
    .placeholder(yourPlaceholderDrawable) // optional
    .error(yourErrorDrawable)             // optional
    .resize(width, height)                // optional
    .rotate(degree)                       // optional
    .into(imageView);

Section 92.5: Disable cache in Picasso

Picasso.with(context)
    .load(uri)
    .networkPolicy(NetworkPolicy.NO_CACHE)
    .memoryPolicy(MemoryPolicy.NO_CACHE)
    .placeholder(R.drawable.placeholder)
    .into(imageView);

Section 92.6: Using Picasso as ImageGetter for Html.fromHtml

public class PicassoImageGetter implements Html.ImageGetter {

    private TextView textView;
    private Picasso picasso;

    public PicassoImageGetter(@NonNull Picasso picasso, @NonNull TextView textView) {
        this.picasso = picasso;
        this.textView = textView;
    }

    @Override
    public Drawable getDrawable(String source) {
        Log.d(PicassoImageGetter.class.getName(), "Start loading url " + source);
        BitmapDrawablePlaceHolder drawable = new BitmapDrawablePlaceHolder();
        picasso
                .load(source)
                .error(R.drawable.connection_error)
                .into(drawable);
        return drawable;
    }

    private class BitmapDrawablePlaceHolder extends BitmapDrawable implements Target {

        protected Drawable drawable;

        @Override
        public void draw(final Canvas canvas) {
            if (drawable != null) {
                checkBounds();
                drawable.draw(canvas);
            }
        }

        public void setDrawable(@Nullable Drawable drawable) {
            if (drawable != null) {
                this.drawable = drawable;
                checkBounds();
            }
        }

        private void checkBounds() {
            float defaultProportion = (float) drawable.getIntrinsicWidth() / (float) drawable.getIntrinsicHeight();
            int width = Math.min(textView.getWidth(), drawable.getIntrinsicWidth());
            int height = (int) ((float) width / defaultProportion);
            if (getBounds().right != textView.getWidth() || getBounds().bottom != height) {
                setBounds(0, 0, textView.getWidth(), height); // set to full width
                int halfOfPlaceHolderWidth = (int) ((float) getBounds().right / 2f);
                int halfOfImageWidth = (int) ((float) width / 2f);
                drawable.setBounds(
                        halfOfPlaceHolderWidth - halfOfImageWidth, // centering the image
                        0,
                        halfOfPlaceHolderWidth + halfOfImageWidth,
                        height);
                textView.setText(textView.getText()); // refresh text
            }
        }

        //------------------------------------------------------------------//

        @Override
        public void onBitmapLoaded(Bitmap bitmap, Picasso.LoadedFrom from) {
            setDrawable(new BitmapDrawable(Application.getContext().getResources(), bitmap));
        }

        @Override
        public void onBitmapFailed(Drawable errorDrawable) {
            setDrawable(errorDrawable);
        }

        @Override
        public void onPrepareLoad(Drawable placeHolderDrawable) {
            setDrawable(placeHolderDrawable);
        }

        //------------------------------------------------------------------//
    }
}

The usage is simple:

Html.fromHtml(textToParse, new PicassoImageGetter(picasso, textViewTarget), null);

Section 92.7: Cancelling Image Requests using Picasso

In certain cases we need to cancel an image download request in Picasso before the download has completed.
This could happen for various reasons, for example if the parent view transitioned to some other view before the image download could be completed. In this case, you can cancel the image download request using the cancelRequest() method:

ImageView imageView;
// ...
Picasso.with(imageView.getContext()).cancelRequest(imageView);

Section 92.8: Loading Image from external Storage

String filename = "image.png";
String imagePath = context.getExternalFilesDir(null) + "/" + filename;

Picasso.with(context)
    .load(new File(imagePath))
    .into(imageView);

Section 92.9: Downloading image as Bitmap using Picasso

If you want to download an image as a Bitmap using Picasso, the following code will help you:

Picasso.with(mContext)
    .load(imageUrl)
    .into(new Target() {
        @Override
        public void onBitmapLoaded(Bitmap bitmap, Picasso.LoadedFrom from) {
            // TODO: Do something with your bitmap here
        }

        @Override
        public void onBitmapFailed(Drawable errorDrawable) {
        }

        @Override
        public void onPrepareLoad(Drawable placeHolderDrawable) {
        }
    });

Note that Picasso holds only a weak reference to a Target, so keep a strong reference to it (e.g. in a field) if you want to be sure the callbacks fire.

Section 92.10: Try offline disk cache first, then go online and fetch the image

First add OkHttp to the Gradle build file of the app module:

compile 'com.squareup.picasso:picasso:2.5.2'
compile 'com.squareup.okhttp:okhttp:2.4.0'
compile 'com.jakewharton.picasso:picasso2-okhttp3-downloader:1.0.2'

Then make a class extending Application:

import android.app.Application;

import com.squareup.picasso.OkHttpDownloader;
import com.squareup.picasso.Picasso;

public class Global extends Application {
    @Override
    public void onCreate() {
        super.onCreate();

        Picasso.Builder builder = new Picasso.Builder(this);
        builder.downloader(new OkHttpDownloader(this, Integer.MAX_VALUE));
        Picasso built = builder.build();
        built.setIndicatorsEnabled(true);
        built.setLoggingEnabled(true);
        Picasso.setSingletonInstance(built);
    }
}

Add it to the manifest file as follows:

<application
    android:name=".Global"
    .. >
</application>

Normal usage:

Picasso.with(getActivity())
    .load(imageUrl)
    .networkPolicy(NetworkPolicy.OFFLINE)
    .into(imageView, new Callback() {
        @Override
        public void onSuccess() {
            // Offline cache hit
        }

        @Override
        public void onError() {
            // Try again online if the cache failed
            Picasso.with(getActivity())
                .load(imageUrl)
                .error(R.drawable.header)
                .into(imageView, new Callback() {
                    @Override
                    public void onSuccess() {
                        // Online download
                    }

                    @Override
                    public void onError() {
                        Log.v("Picasso", "Could not fetch image");
                    }
                });
        }
    });

Chapter 93: RoboGuice

Section 93.1: Simple example

RoboGuice is a framework that brings the simplicity and ease of Dependency Injection to Android, using Google's own Guice library.
@ContentView(R.layout.main)
class RoboWay extends RoboActivity {
    @InjectView(R.id.name) TextView name;
    @InjectView(R.id.thumbnail) ImageView thumbnail;
    @InjectResource(R.drawable.icon) Drawable icon;
    @InjectResource(R.string.app_name) String myName;
    @Inject LocationManager loc;

    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        name.setText("Hello, " + myName);
    }
}

Section 93.2: Installation for Gradle Projects

Add the following to the dependencies section of your Gradle build file:

project.dependencies {
    compile 'org.roboguice:roboguice:3.+'
    provided 'org.roboguice:roboblender:3.+'
}

Section 93.3: @ContentView annotation

The @ContentView annotation can be used to further alleviate development of activities and replace the setContentView statement:

@ContentView(R.layout.myactivity_layout)
public class MyActivity extends RoboActivity {
    @InjectView(R.id.text1) TextView textView;

    @Override
    protected void onCreate(Bundle savedState) {
        textView.setText("Hello!");
    }
}

Section 93.4: @InjectResource annotation

You can inject any type of resource: Strings, Animations, Drawables, etc. To inject your first resource into an activity, you'll need to:

Inherit from RoboActivity
Annotate your resources with @InjectResource

Example:

@InjectResource(R.string.app_name) String name;
@InjectResource(R.drawable.ic_launcher) Drawable icLauncher;
@InjectResource(R.anim.my_animation) Animation myAnimation;

Section 93.5: @InjectView annotation

You can inject any view using the @InjectView annotation. You'll need to:

Inherit from RoboActivity
Set your content view
Annotate your views with @InjectView

Example:

@InjectView(R.id.textView1) TextView textView1;
@InjectView(R.id.textView2) TextView textView2;
@InjectView(R.id.imageView1) ImageView imageView1;

Section 93.6: Introduction to RoboGuice

RoboGuice is a framework that brings the simplicity and ease of Dependency Injection to Android, using Google's own Guice library.

RoboGuice 3 slims down your application code. Less code means fewer opportunities for bugs. It also makes your code easier to follow -- no longer is your code littered with the mechanics of the Android platform, but now it can focus on the actual logic unique to your application.

To give you an idea, take a look at this simple example of a typical Android Activity:

class AndroidWay extends Activity {
    TextView name;
    ImageView thumbnail;
    LocationManager loc;
    Drawable icon;
    String myName;

    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);
        name = (TextView) findViewById(R.id.name);
        thumbnail = (ImageView) findViewById(R.id.thumbnail);
        loc = (LocationManager) getSystemService(Activity.LOCATION_SERVICE);
        icon = getResources().getDrawable(R.drawable.icon);
        myName = getString(R.string.app_name);
        name.setText("Hello, " + myName);
    }
}

This example is 19 lines of code. If you're trying to read through onCreate(), you have to skip over 5 lines of boilerplate initialization to find the only one that really matters: name.setText(). And complex activities can end up with a lot more of this sort of initialization code.
Compare this to the same app, written using RoboGuice:

@ContentView(R.layout.main)
class RoboWay extends RoboActivity {
    @InjectView(R.id.name) TextView name;
    @InjectView(R.id.thumbnail) ImageView thumbnail;
    @InjectResource(R.drawable.icon) Drawable icon;
    @InjectResource(R.string.app_name) String myName;
    @Inject LocationManager loc;

    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        name.setText("Hello, " + myName);
    }
}

RoboGuice's goal is to make your code be about your app, rather than be about all the initialization and lifecycle code you typically have to maintain in Android.

Annotations:

@ContentView annotation: The @ContentView annotation can be used to further alleviate development of activities and replace the setContentView statement:

@ContentView(R.layout.myactivity_layout)
public class MyActivity extends RoboActivity {
    @InjectView(R.id.text1) TextView textView;

    @Override
    protected void onCreate(Bundle savedState) {
        textView.setText("Hello!");
    }
}

@InjectResource annotation: First you need an Activity that inherits from RoboActivity. Then, assuming that you have an animation my_animation.xml in your res/anim folder, you can now reference it with an annotation:

public class MyActivity extends RoboActivity {
    @InjectResource(R.anim.my_animation) Animation myAnimation;
    // the rest of your code
}

@Inject annotation: Make sure your activity extends from RoboActivity and annotate your system service member with @Inject. RoboGuice will do the rest.

class MyActivity extends RoboActivity {
    @Inject Vibrator vibrator;
    @Inject NotificationManager notificationManager;

    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // we can use the instances directly!
        vibrator.vibrate(1000L); // RoboGuice took care of the getSystemService(VIBRATOR_SERVICE)
        notificationManager.cancelAll();
    }
}

In addition to Views, Resources, Services, and other Android-specific things, RoboGuice can inject Plain Old Java Objects. By default RoboGuice will call a no-argument constructor on your POJO:

class MyActivity extends RoboActivity {
    @Inject Foo foo; // this will basically call new Foo();
}

Chapter 94: ACRA

Parameter        Description
@ReportsCrashes  Defines the ACRA settings, such as where the report is to be sent, custom content, etc.
formUri          The path to the file that reports the crash.

Section 94.1: ACRAHandler

Example Application-extending class for handling the reporting:

@ReportsCrashes(
        formUri = "https://backend-of-your-choice.com/", // Non-password protected.
        customReportContent = {
                ReportField.APP_VERSION_NAME, ReportField.PACKAGE_NAME,
                ReportField.ANDROID_VERSION, ReportField.PHONE_MODEL, ReportField.LOGCAT
        },
        mode = ReportingInteractionMode.TOAST,
        resToastText = R.string.crash
)
public class ACRAHandler extends Application {

    @Override
    protected void attachBaseContext(Context base) {
        super.attachBaseContext(base);

        final ACRAConfiguration config = new ConfigurationBuilder(this)
                .build();

        // Initialise ACRA
        ACRA.init(this, config);
    }
}

Section 94.2: Example manifest

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android" >
    <!-- etc -->

    <!-- Internet is required.
         READ_LOGS is needed to ensure that the Logcat is transmitted -->
    <uses-permission android:name="android.permission.INTERNET"/>
    <uses-permission android:name="android.permission.READ_LOGS"/>

    <!-- android:name=".ACRAHandler" activates ACRA on startup -->
    <application
        android:allowBackup="true"
        android:name=".ACRAHandler"
        android:icon="@drawable/ic_launcher"
        android:label="@string/app_name"
        android:theme="@style/AppTheme" >

        <!-- Activities -->

    </application>
</manifest>

Section 94.3: Installation

Maven:

<dependency>
    <groupId>ch.acra</groupId>
    <artifactId>acra</artifactId>
    <version>4.9.2</version>
    <type>aar</type>
</dependency>

Gradle:

compile 'ch.acra:acra:4.9.2'

Chapter 95: Parcelable

Parcelable is an Android-specific interface where you implement the serialization yourself. It was created to be far more efficient than Serializable, and to get around some problems with the default Java serialization scheme.

Section 95.1: Making a custom object Parcelable

/**
 * Created by <NAME> on 7/21/16.
 */
public class Foo implements Parcelable {

    private final int myFirstVariable;
    private final String mySecondVariable;
    private final long myThirdVariable;

    public Foo(int myFirstVariable, String mySecondVariable, long myThirdVariable) {
        this.myFirstVariable = myFirstVariable;
        this.mySecondVariable = mySecondVariable;
        this.myThirdVariable = myThirdVariable;
    }

    // Note that you MUST read values from the parcel IN THE SAME ORDER that
    // values were WRITTEN to the parcel! This method is our own custom method
    // to instantiate our object from a Parcel. It is used in the Parcelable.Creator variable we declare below.
    public Foo(Parcel in) {
        this.myFirstVariable = in.readInt();
        this.mySecondVariable = in.readString();
        this.myThirdVariable = in.readLong();
    }

    // The describeContents method can normally return 0. It's used when
    // the parceled object includes a file descriptor.
    @Override
    public int describeContents() {
        return 0;
    }

    @Override
    public void writeToParcel(Parcel dest, int flags) {
        dest.writeInt(myFirstVariable);
        dest.writeString(mySecondVariable);
        dest.writeLong(myThirdVariable);
    }

    // Note that this seemingly random field IS NOT OPTIONAL. The system will
    // look for this variable using reflection in order to instantiate your
    // parceled object when read from an Intent.
    public static final Parcelable.Creator<Foo> CREATOR = new Parcelable.Creator<Foo>() {
        // This method is used to actually instantiate our custom object
        // from the Parcel. Convention dictates we make a new constructor that
        // takes the parcel in as its only argument.
        public Foo createFromParcel(Parcel in) {
            return new Foo(in);
        }

        // This method is used to make an array of your custom object.
        // Declaring a new array with the provided size is usually enough.
        public Foo[] newArray(int size) {
            return new Foo[size];
        }
    };
}

Section 95.2: Parcelable object containing another Parcelable object

An example of a class that contains a parcelable class inside:

public class Repository implements Parcelable {
    private String name;
    private Owner owner;
    private boolean isPrivate;

    public Repository(String name, Owner owner, boolean isPrivate) {
        this.name = name;
        this.owner = owner;
        this.isPrivate = isPrivate;
    }

    protected Repository(Parcel in) {
        name = in.readString();
        owner = in.readParcelable(Owner.class.getClassLoader());
        isPrivate = in.readByte() != 0;
    }

    @Override
    public void writeToParcel(Parcel dest, int flags) {
        dest.writeString(name);
        dest.writeParcelable(owner, flags);
        dest.writeByte((byte) (isPrivate ? 1 : 0));
    }

    @Override
    public int describeContents() {
        return 0;
    }

    public static final Creator<Repository> CREATOR = new Creator<Repository>() {
        @Override
        public Repository createFromParcel(Parcel in) {
            return new Repository(in);
        }

        @Override
        public Repository[] newArray(int size) {
            return new Repository[size];
        }
    };

    // getters and setters

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public Owner getOwner() {
        return owner;
    }

    public void setOwner(Owner owner) {
        this.owner = owner;
    }

    public boolean isPrivate() {
        return isPrivate;
    }

    public void setPrivate(boolean isPrivate) {
        this.isPrivate = isPrivate;
    }
}

Owner is just a normal parcelable class.

Section 95.3: Using Enums with Parcelable

/**
 * Created by <NAME> on 03/08/16.
 * This is not a complete parcelable implementation, it only highlights the easiest
 * way to read and write your Enum values to your parcel
 */
public class Foo implements Parcelable {

    private final MyEnum myEnumVariable;
    private final MyEnum mySaferEnumVariableExample;

    public Foo(Parcel in) {
        // the simplest way
        myEnumVariable = MyEnum.valueOf(in.readString());

        // with some error checking
        try {
            mySaferEnumVariableExample = MyEnum.valueOf(in.readString());
        } catch (IllegalArgumentException e) {
            // bad string or null value
            mySaferEnumVariableExample = MyEnum.DEFAULT;
        }
    }

    // ...

    @Override
    public void writeToParcel(Parcel dest, int flags) {
        // the simple way
        dest.writeString(myEnumVariable.name());

        // avoiding NPEs with some error checking
        dest.writeString(mySaferEnumVariableExample == null ? null : mySaferEnumVariableExample.name());
    }
}

public enum MyEnum {
    VALUE_1,
    VALUE_2,
    DEFAULT
}

This is preferable to (for example) using an ordinal, because inserting new values into your enum will not affect previously stored values.

Chapter 96: Retrofit2

The official Retrofit page describes it as "a type-safe REST client for Android and Java".

Retrofit turns your REST API into a Java interface. It uses annotations to describe HTTP requests; URL parameter replacement and query parameter support are integrated by default. Additionally, it provides functionality for multipart request bodies and file uploads.

Section 96.1: A Simple GET Request

We are going to show how to make a GET request to an API that responds with a JSON object or a JSON array. The first thing we need to do is add the Retrofit and GSON converter dependencies to our module's Gradle file, as described in the Remarks section.
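The Remarks section is not reproduced in this excerpt; for reference, the dependency block typically looks like the following. The 2.1.0 version is an assumption based on the Retrofit snippets used later in this chapter:

dependencies {
    compile 'com.squareup.retrofit2:retrofit:2.1.0'
    compile 'com.squareup.retrofit2:converter-gson:2.1.0'
}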
Example of the expected JSON object:

{
    "deviceId": "56V56C14SF5B4SF",
    "name": "Steven",
    "eventCount": 0
}

Example of a JSON array:

[
    {
        "deviceId": "56V56C14SF5B4SF",
        "name": "Steven",
        "eventCount": 0
    },
    {
        "deviceId": "35A80SF3QDV7M9F",
        "name": "John",
        "eventCount": 2
    }
]

Example of the corresponding model class:

public class Device {

    @SerializedName("deviceId")
    public String id;

    @SerializedName("name")
    public String name;

    @SerializedName("eventCount")
    public int eventCount;
}

The @SerializedName annotations here are from the GSON library and allow us to serialize and deserialize this class to JSON using the serialized name as the keys. Now we can build the interface for the API that will actually fetch the data from the server:

public interface DeviceAPI {

    @GET("device/{deviceId}")
    Call<Device> getDevice(@Path("deviceId") String deviceID);

    @GET("devices")
    Call<List<Device>> getDevices();
}

There's a lot going on here in a pretty compact space, so let's break it down:

The @GET annotation comes from Retrofit and tells the library that we're defining a GET request.
The path in the parentheses is the endpoint that our GET request should hit (we'll set the base URL a little later).
The curly brackets allow us to replace parts of the path at run time so we can pass arguments.
The function we're defining is called getDevice and takes the device id we want as an argument.
The @Path annotation tells Retrofit that this argument should replace the "deviceId" placeholder in the path.
The function returns a Call object of type Device.

Creating a wrapper class:

Now we will make a little wrapper class for our API to keep the Retrofit initialization code wrapped up nicely:

public class DeviceAPIHelper {

    public final DeviceAPI api;

    private DeviceAPIHelper() {
        Retrofit retrofit = new Retrofit.Builder()
                .baseUrl("http://example.com/")
                .addConverterFactory(GsonConverterFactory.create())
                .build();

        api = retrofit.create(DeviceAPI.class);
    }
}

This class creates a Retrofit instance with our base URL and a GsonConverterFactory (which uses GSON to parse the JSON responses), and then creates an instance of our API.

Calling the API:

// Getting a JSON object
Call<Device> callObject = api.getDevice(deviceID);
callObject.enqueue(new Callback<Device>() {
    @Override
    public void onResponse(Call<Device> call, Response<Device> response) {
        if (response.isSuccessful()) {
            Device device = response.body();
        }
    }

    @Override
    public void onFailure(Call<Device> call, Throwable t) {
        Log.e(TAG, t.getLocalizedMessage());
    }
});

// Getting a JSON array
Call<List<Device>> callArray = api.getDevices();
callArray.enqueue(new Callback<List<Device>>() {
    @Override
    public void onResponse(Call<List<Device>> call, Response<List<Device>> response) {
        if (response.isSuccessful()) {
            List<Device> devices = response.body();
        }
    }

    @Override
    public void onFailure(Call<List<Device>> call, Throwable t) {
        Log.e(TAG, t.getLocalizedMessage());
    }
});

This uses our API interface to create a Call<Device> object and a Call<List<Device>> respectively. Calling enqueue tells Retrofit to make that call on a background thread and return the result to the callback that we're creating here.

Note: Parsing a JSON array of primitive objects (like String, Integer, Boolean, and Double) is similar to parsing a JSON array of objects. However, you don't need your own model class. You can get the array of Strings, for example, by having the return type of the call be Call<List<String>>.
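For instance, a hypothetical endpoint returning a bare JSON array of strings could be declared like this — no model class involved:

import java.util.List;

import retrofit2.Call;
import retrofit2.http.GET;

public interface NamesAPI {
    // GSON deserializes a response like ["Steven", "John"] directly into the list
    @GET("names")
    Call<List<String>> getNames();
}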
Section 96.2: Debugging with Stetho

Add the following dependencies to your application:

compile 'com.facebook.stetho:stetho:1.5.0'
compile 'com.facebook.stetho:stetho-okhttp3:1.5.0'

In your Application class' onCreate method, call the following:

Stetho.initializeWithDefaults(this);

When creating your Retrofit instance, create a custom OkHttp instance:

OkHttpClient.Builder clientBuilder = new OkHttpClient.Builder();
clientBuilder.addNetworkInterceptor(new StethoInterceptor());

Then set this custom OkHttp instance in the Retrofit instance:

Retrofit retrofit = new Retrofit.Builder()
        // ...
        .client(clientBuilder.build())
        .build();

Now connect your phone to your computer, launch the app, and type chrome://inspect into your Chrome browser. Retrofit network calls should now show up for you to inspect.

Section 96.3: Add logging to Retrofit2

Retrofit requests can be logged using an interceptor. There are several levels of detail available: NONE, BASIC, HEADERS, BODY. See the logging-interceptor project on GitHub.

1. Add the dependency to build.gradle:

compile 'com.squareup.okhttp3:logging-interceptor:3.8.1'

2. Add a logging interceptor when creating Retrofit:

HttpLoggingInterceptor loggingInterceptor = new HttpLoggingInterceptor();
loggingInterceptor.setLevel(HttpLoggingInterceptor.Level.BODY);

OkHttpClient okHttpClient = new OkHttpClient().newBuilder()
        .addInterceptor(loggingInterceptor)
        .build();

Retrofit retrofit = new Retrofit.Builder()
        .baseUrl("http://example.com/")
        .client(okHttpClient)
        .addConverterFactory(GsonConverterFactory.create(gson))
        .build();

Exposing the logs in the terminal (Android Monitor) is something that should be avoided in the release version, as it may lead to unwanted exposure of critical information such as auth tokens.
To avoid the logs being exposed at run time, check the following condition:

if (BuildConfig.DEBUG) {
    // your interceptor code here
}

For example:

HttpLoggingInterceptor loggingInterceptor = new HttpLoggingInterceptor();
if (BuildConfig.DEBUG) {
    // print the logs in this case
    loggingInterceptor.setLevel(HttpLoggingInterceptor.Level.BODY);
} else {
    loggingInterceptor.setLevel(HttpLoggingInterceptor.Level.NONE);
}

OkHttpClient okHttpClient = new OkHttpClient().newBuilder()
        .addInterceptor(loggingInterceptor)
        .build();

Retrofit retrofit = new Retrofit.Builder()
        .baseUrl("http://example.com/")
        .client(okHttpClient)
        .addConverterFactory(GsonConverterFactory.create(gson))
        .build();

Section 96.4: A simple POST request with GSON

Sample JSON:

{
    "id": "12345",
    "type": "android"
}

Define your request:

public class GetDeviceRequest {

    @SerializedName("deviceId")
    private String mDeviceId;

    public GetDeviceRequest(String deviceId) {
        this.mDeviceId = deviceId;
    }

    public String getDeviceId() {
        return mDeviceId;
    }
}

Define your service (endpoints to hit):

public interface Service {

    @POST("device")
    Call<Device> getDevice(@Body GetDeviceRequest getDeviceRequest);
}

Define your singleton instance of the network client:

public class RestClient {

    private static Service REST_CLIENT;

    static {
        setupRestClient();
    }

    private static void setupRestClient() {
        // Define gson
        Gson gson = new Gson();

        // Define our client
        Retrofit retrofit = new Retrofit.Builder()
                .baseUrl("http://example.com/")
                .addConverterFactory(GsonConverterFactory.create(gson))
                .build();

        REST_CLIENT = retrofit.create(Service.class);
    }

    public static Service getRestClient() {
        return REST_CLIENT;
    }
}

Define a simple model object for the device:

public class Device {

    @SerializedName("id")
    private String mId;

    @SerializedName("type")
    private String mType;

    public String getId() {
        return mId;
    }

    public String getType() {
        return mType;
    }
}

Define the controller to handle the requests for the device:

public class DeviceController {

    // Other initialization code here...

    public void getDeviceFromAPI() {
        // Define our request and enqueue
        Call<Device> call = RestClient.getRestClient().getDevice(new GetDeviceRequest("12345"));

        // Go ahead and enqueue the request
        call.enqueue(new Callback<Device>() {
            @Override
            public void onResponse(Call<Device> call, Response<Device> response) {
                // Take care of your device here
                if (response.isSuccessful()) {
                    // Handle success
                    //delegate.passDeviceObject();
                }
            }

            @Override
            public void onFailure(Call<Device> call, Throwable t) {
                // Go ahead and handle the error here
            }
        });
    }
}

Section 96.5: Download a file from Server using Retrofit2

Interface declaration for downloading a file:

public interface ApiInterface {

    @GET("movie/now_playing")
    Call<MovieResponse> getNowPlayingMovies(@Query("api_key") String apiKey, @Query("page") int page);

    // option 1: a resource relative to your base URL
    @GET("resource/example.zip")
    Call<ResponseBody> downloadFileWithFixedUrl();

    // option 2: using a dynamic URL
    @GET
    Call<ResponseBody> downloadFileWithDynamicUrl(@Url String fileUrl);
}

Option 1 is used for downloading a file from the server at a fixed URL, and option 2 is used to pass a dynamic value as the full URL to the request call. This can be helpful when downloading files which depend on parameters like user or time.
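As an illustration, option 2 would be invoked with the complete URL at call time — a sketch only, where apiInterface is the ApiInterface instance created in the setup code below and the URL is a made-up placeholder for a value decided at runtime:

// Hypothetical full URL, e.g. built per user
Call<ResponseBody> call = apiInterface.downloadFileWithDynamicUrl(
        "http://your.api-base.url/files/user-42/report.pdf");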
Setup retrot for making api calls public class ServiceGenerator { public static final String API_BASE_URL = "http://your.api-base.url/"; private static OkHttpClient.Builder httpClient = new OkHttpClient.Builder(); private static Retrofit.Builder builder = new Retrofit.Builder() .baseUrl(API_BASE_URL) .addConverterFactory(GsonConverterFactory.create()); public static <S> S createService(Class<S> serviceClass){ Retrofit retrofit = builder.client(httpClient.build()).build(); return retrofit.create(serviceClass); } } Now, make implementation of api for downloading le from server private void downloadFile(){ ApiInterface apiInterface = ServiceGenerator.createService(ApiInterface.class); Call<ResponseBody> call = apiInterface.downloadFileWithFixedUrl(); call.enqueue(new Callback<ResponseBody>() { @Override public void onResponse(Call<ResponseBody> call, Response<ResponseBody> response) { if (response.isSuccessful()){ boolean writtenToDisk = writeResponseBodyToDisk(response.body()); Log.d("File download was a success? ", String.valueOf(writtenToDisk)); } } @Override public void onFailure(Call<ResponseBody> call, Throwable t) { } }); } And after getting response in the callback, code some standard IO for saving le to disk. Here is the code: private boolean writeResponseBodyToDisk(ResponseBody body) { GoalKicker.com Android Notes for Professionals 608 try { // todo change the file location/name according to your needs File futureStudioIconFile = new File(getExternalFilesDir(null) + File.separator + "Future Studio Icon.png"); InputStream inputStream = null; OutputStream outputStream = null; try { byte[] fileReader = new byte[4096]; long fileSize = body.contentLength(); long fileSizeDownloaded = 0; inputStream = body.byteStream(); outputStream = new FileOutputStream(futureStudioIconFile); while (true) { int read = inputStream.read(fileReader); if (read == -1) { break; } outputStream.write(fileReader, 0, read); fileSizeDownloaded += read; Log.d("File Download: " , fileSizeDownloaded + " of " + fileSize); } outputStream.flush(); return true; } catch (IOException e) { return false; } finally { if (inputStream != null) { inputStream.close(); } if (outputStream != null) { outputStream.close(); } } } catch (IOException e) { return false; } } Note we have specied ResponseBody as return type, otherwise Retrot will try to parse and convert it, which doesn't make sense when you are downloading le. If you want more on Retrot stus, got to this link as it is very useful. 
Section 96.6: Upload multiple files using Retrofit as multipart

Once you have set up the Retrofit environment in your project, you can use the following example, which demonstrates how to upload multiple files using Retrofit:

private void multipleFileUploadFile(Uri[] fileUri) {
    OkHttpClient okHttpClient = new OkHttpClient();
    OkHttpClient clientWith30sTimeout = okHttpClient.newBuilder()
            .readTimeout(30, TimeUnit.SECONDS)
            .build();

    Retrofit retrofit = new Retrofit.Builder()
            .baseUrl(API_URL_BASE)
            .addConverterFactory(new MultiPartConverter())
            .client(clientWith30sTimeout)
            .build();

    WebAPIService service = retrofit.create(WebAPIService.class); // the interface you created for the call service

    Map<String, okhttp3.RequestBody> maps = new HashMap<>();

    if (fileUri != null && fileUri.length > 0) {
        for (int i = 0; i < fileUri.length; i++) {
            String filePath = getRealPathFromUri(fileUri[i]);
            if (filePath != null && filePath.length() > 0) {
                File file1 = new File(filePath);
                if (file1.exists()) {
                    okhttp3.RequestBody requestFile =
                            okhttp3.RequestBody.create(okhttp3.MediaType.parse("multipart/form-data"), file1);
                    String filename = "imagePath" + i; // key for the uploaded file, e.g. imagePath0
                    maps.put(filename + "\"; filename=\"" + file1.getName(), requestFile);
                }
            }
        }
    }

    String descriptionString = "string request"; // here is your json request

    Call<String> call = service.postFile(maps, descriptionString);
    call.enqueue(new Callback<String>() {
        @Override
        public void onResponse(Call<String> call, Response<String> response) {
            Log.i(LOG_TAG, "success");
            Log.d("body==>", response.body().toString() + "");
            Log.d("isSuccessful==>", response.isSuccessful() + "");
            Log.d("message==>", response.message() + "");
            Log.d("raw==>", response.raw().toString() + "");
            Log.d("raw().networkResponse()", response.raw().networkResponse().toString() + "");
        }

        @Override
        public void onFailure(Call<String> call, Throwable t) {
            Log.e(LOG_TAG, t.getMessage());
        }
    });
}

public String getRealPathFromUri(final Uri uri) { // function for getting a file path from a uri
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.KITKAT
            && DocumentsContract.isDocumentUri(mContext, uri)) {
        // ExternalStorageProvider
        if (isExternalStorageDocument(uri)) {
            final String docId = DocumentsContract.getDocumentId(uri);
            final String[] split = docId.split(":");
            final String type = split[0];

            if ("primary".equalsIgnoreCase(type)) {
                return Environment.getExternalStorageDirectory() + "/" + split[1];
            }
        }
        // DownloadsProvider
        else if (isDownloadsDocument(uri)) {
            final String id = DocumentsContract.getDocumentId(uri);
            final Uri contentUri = ContentUris.withAppendedId(
                    Uri.parse("content://downloads/public_downloads"), Long.valueOf(id));
            return getDataColumn(mContext, contentUri, null, null);
        }
        // MediaProvider
        else if (isMediaDocument(uri)) {
            final String docId = DocumentsContract.getDocumentId(uri);
            final String[] split = docId.split(":");
            final String type = split[0];

            Uri contentUri = null;
            if ("image".equals(type)) {
                contentUri = MediaStore.Images.Media.EXTERNAL_CONTENT_URI;
            } else if ("video".equals(type)) {
                contentUri = MediaStore.Video.Media.EXTERNAL_CONTENT_URI;
            } else if ("audio".equals(type)) {
                contentUri = MediaStore.Audio.Media.EXTERNAL_CONTENT_URI;
            }

            final String selection = "_id=?";
            final String[] selectionArgs = new String[]{ split[1] };
            return getDataColumn(mContext, contentUri, selection, selectionArgs);
        }
    }
    // MediaStore (and general)
    else if ("content".equalsIgnoreCase(uri.getScheme())) {
        // Return the remote address
        if (isGooglePhotosUri(uri))
            return uri.getLastPathSegment();
        return getDataColumn(mContext, uri, null, null);
    }
    // File
    else if ("file".equalsIgnoreCase(uri.getScheme())) {
        return uri.getPath();
    }
    return null;
    // (The helper methods isExternalStorageDocument(), isDownloadsDocument(),
    // isMediaDocument(), isGooglePhotosUri() and getDataColumn() are the standard
    // DocumentsContract helpers; they are not reproduced in this excerpt.)
}

The following is the interface:

public interface WebAPIService {
    @Multipart
    @POST("main.php")
    Call<String> postFile(@PartMap Map<String, RequestBody> files,
                          @Part("json") String description);
}

Section 96.7: Retrofit with OkHttp interceptor

This example shows how to use a request interceptor with OkHttp. This has numerous use cases, such as:

Adding a universal header to the request, e.g. authenticating a request
Debugging networked applications
Retrieving the raw response
Logging network transactions
Setting a custom user agent

Retrofit.Builder builder = new Retrofit.Builder()
        .addCallAdapterFactory(RxJavaCallAdapterFactory.create())
        .addConverterFactory(GsonConverterFactory.create())
        .baseUrl("https://api.github.com/");

if (!TextUtils.isEmpty(githubToken)) {
    // `githubToken`: Access token for GitHub
    OkHttpClient client = new OkHttpClient.Builder().addInterceptor(new Interceptor() {
        @Override
        public Response intercept(Chain chain) throws IOException {
            Request request = chain.request();
            Request newReq = request.newBuilder()
                    .addHeader("Authorization", format("token %s", githubToken))
                    .build();
            return chain.proceed(newReq);
        }
    }).build();

    builder.client(client);
}

return builder.build().create(GithubApi.class);

See the OkHttp topic for more details.

Section 96.8: Header and Body: an Authentication Example

The @Header and @Body annotations can be placed into the method signatures, and Retrofit will automatically create them based on your models:

public interface MyService {
    @POST("authentication/user")
    Call<AuthenticationResponse> authenticateUser(@Body AuthenticationRequest request,
                                                  @Header("Authorization") String basicToken);
}

AuthenticationRequest is our model, a POJO, containing the information the server requires. For this example, our server wants the client key and secret:

public class AuthenticationRequest {
    String clientKey;
    String clientSecret;
    // getters and setters omitted
}

Notice that in @Header("Authorization") we are specifying that we are populating the Authorization header. The other headers will be populated automatically, since Retrofit can infer what they are based on the type of objects we are sending and expecting in return.

We create our Retrofit service somewhere, making sure to use HTTPS:

Retrofit retrofit = new Retrofit.Builder()
        .baseUrl("https://example.com/") // your HTTPS endpoint
        .addConverterFactory(GsonConverterFactory.create()) // needed so the @Body POJO can be serialized
        .client(client)
        .build();

MyService myService = retrofit.create(MyService.class);

Then we can use our service:

AuthenticationRequest request = new AuthenticationRequest();
request.setClientKey(getClientKey());
request.setClientSecret(getClientSecret());
String basicToken = "Basic " + token; // token: typically Base64("clientKey:clientSecret") for HTTP Basic auth
myService.authenticateUser(request, basicToken);

Section 96.9: Uploading a file via Multipart

Declare your interface with Retrofit2 annotations:

public interface BackendApiClient {
    @Multipart
    @POST("/uploadFile")
    Call<RestApiDefaultResponse> uploadPhoto(@Part("file\"; filename=\"photo.jpg\" ") RequestBody photo);
}

Where RestApiDefaultResponse is a custom class containing the response.
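RestApiDefaultResponse itself is not shown in the original; a hypothetical minimal version, with field names that are purely our assumption, might look like:

public class RestApiDefaultResponse {

    @SerializedName("status")
    private String status;

    @SerializedName("message")
    private String message;

    public String getStatus() { return status; }
    public String getMessage() { return message; }
}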
Build the implementation of your API and enqueue the call:

Retrofit retrofit = new Retrofit.Builder()
        .addConverterFactory(GsonConverterFactory.create())
        .baseUrl("http://<yourhost>/")
        .client(okHttpClient)
        .build();

BackendApiClient apiClient = retrofit.create(BackendApiClient.class);
RequestBody reqBody = RequestBody.create(MediaType.parse("image/jpeg"), photoFile);
Call<RestApiDefaultResponse> call = apiClient.uploadPhoto(reqBody);
call.enqueue(<your callback function>);

Section 96.10: Retrofit 2 Custom XML Converter

Add the dependencies to the build.gradle file:

dependencies {
    ....
    compile 'com.squareup.retrofit2:retrofit:2.1.0'
    compile ('com.thoughtworks.xstream:xstream:1.4.7') {
        exclude group: 'xmlpull', module: 'xmlpull'
    }
    ....
}

Then create the Converter Factory:

public class XStreamXmlConverterFactory extends Converter.Factory {

    /** Create an instance using a default {@link com.thoughtworks.xstream.XStream} instance for conversion. */
    public static XStreamXmlConverterFactory create() {
        return create(new XStream());
    }

    /** Create an instance using {@code xStream} for conversion. */
    public static XStreamXmlConverterFactory create(XStream xStream) {
        return new XStreamXmlConverterFactory(xStream);
    }

    private final XStream xStream;

    private XStreamXmlConverterFactory(XStream xStream) {
        if (xStream == null) throw new NullPointerException("xStream == null");
        this.xStream = xStream;
    }

    @Override
    public Converter<ResponseBody, ?> responseBodyConverter(Type type, Annotation[] annotations, Retrofit retrofit) {
        if (!(type instanceof Class)) {
            return null;
        }

        Class<?> cls = (Class<?>) type;
        return new XStreamXmlResponseBodyConverter<>(cls, xStream);
    }

    @Override
    public Converter<?, RequestBody> requestBodyConverter(Type type, Annotation[] parameterAnnotations,
            Annotation[] methodAnnotations, Retrofit retrofit) {
        if (!(type instanceof Class)) {
            return null;
        }

        return new XStreamXmlRequestBodyConverter<>(xStream);
    }
}

Then create a class to handle the response body (converting incoming XML into objects):

final class XStreamXmlResponseBodyConverter<T> implements Converter<ResponseBody, T> {

    private final Class<T> cls;
    private final XStream xStream;

    XStreamXmlResponseBodyConverter(Class<T> cls, XStream xStream) {
        this.cls = cls;
        this.xStream = xStream;
    }

    @Override
    public T convert(ResponseBody value) throws IOException {
        try {
Then create a Retrot instance: XStream xs = new XStream(new DomDriver()); xs.autodetectAnnotations(true); Retrofit retrofit = new Retrofit.Builder() .baseUrl("http://example.com/") .addConverterFactory(XStreamXmlConverterFactory.create(xs)) .client(client) .build(); Section 96.11: Reading XML form URL with Retrot 2 We will use retrot 2 and SimpleXmlConverter to get xml data from url and parse to Java class. Add dependency to Gradle script: GoalKicker.com Android Notes for Professionals 615 compile 'com.squareup.retrofit2:retrofit:2.1.0' compile 'com.squareup.retrofit2:converter-simplexml:2.1.0' Create interface Also create xml class wrapper in our case Rss class public interface ApiDataInterface{ // path to xml link on web site @GET (data/read.xml) Call<Rss> getData(); } Xml read function private void readXmlFeed() { try { // base url - url of web site Retrofit retrofit = new Retrofit.Builder() .baseUrl(http://www.google.com/) .client(new OkHttpClient()) .addConverterFactory(SimpleXmlConverterFactory.create()) .build(); ApiDataInterface apiService = retrofit.create(ApiDataInterface.class); Call<Rss> call = apiService.getData(); call.enqueue(new Callback<Rss>() { @Override public void onResponse(Call<Rss> call, Response<Rss> response) { Log.e("Response success", response.message()); } @Override public void onFailure(Call<Rss> call, Throwable t) { Log.e("Response fail", t.getMessage()); } }); } catch (Exception e) { Log.e("Exception", e.getMessage()); } } This is example of Java class with SimpleXML annotations More about annotations SimpleXmlDocumentation @Root (name = "rss") GoalKicker.com Android Notes for Professionals 616 public class Rss { public Rss() { } public Rss(String title, String description, String link, List<Item> item, String language) { this.title = title; this.description = description; this.link = link; this.item = item; this.language = language; } @Element (name = "title") private String title; @Element(name = "description") private String description; @Element(name = "link") private String link; @ElementList (entry="item", inline=true) private List<Item> item; @Element(name = "language") private String language; GoalKicker.com Android Notes for Professionals 617 Chapter 97: ButterKnife Butterknife is a view binding tool that uses annotations to generate boilerplate code for us. This tool is developed by <NAME> at Square and is essentially used to save typing repetitive lines of code like findViewById(R.id.view) when dealing with views thus making our code look a lot cleaner. To be clear, Butterknife is not a dependency injection library. Butterknife injects code at compile time. It is very similar to the work done by Android Annotations. Section 97.1: Conguring ButterKnife in your project Congure your project-level build.gradle to include the android-apt plugin: buildscript { repositories { mavenCentral() } dependencies { classpath 'com.jakewharton:butterknife-gradle-plugin:8.5.1' } } Then, apply the android-apt plugin in your module-level build.gradle and add the ButterKnife dependencies: apply plugin: 'android-apt' android { ... } dependencies { compile 'com.jakewharton:butterknife:8.5.1' annotationProcessor 'com.jakewharton:butterknife-compiler:8.5.1' } Note: If you are using the new Jack compiler with version 2.2.0 or newer you do not need the android-apt plugin and can instead replace apt with annotationProcessor when declaring the compiler dependency. 
In order to use ButterKnife annotations you shouldn't forget about binding them in onCreate() of your Activities or onCreateView() of your Fragments: class ExampleActivity extends Activity { @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); // Binding annotations ButterKnife.bind(this); // ... } } // Or class ExampleFragment extends Fragment { GoalKicker.com Android Notes for Professionals 618 @Override public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) { super.onCreateView(inflater, container, savedInstanceState); View view = inflater.inflate(getContentView(), container, false); // Binding annotations ButterKnife.bind(this, view); // ... return view; } } Snapshots of the development version are available in Sonatype's snapshots repository. Below are the additional steps you'd have to take to use ButterKnife in a library project To use ButterKnife in a library project, add the plugin to your project-level build.gradle: buildscript { dependencies { classpath 'com.jakewharton:butterknife-gradle-plugin:8.5.1' } } and then apply to your module by adding these lines on the top of your library-level build.gradle: apply plugin: 'com.android.library' // ... apply plugin: 'com.jakewharton.butterknife' Now make sure you use R2 instead of R inside all ButterKnife annotations. class ExampleActivity extends Activity { // Bind xml resource to their View @BindView(R2.id.user) EditText username; @BindView(R2.id.pass) EditText password; // Binding resources from drawable,strings,dimens,colors @BindString(R.string.choose) String choose; @BindDrawable(R.drawable.send) Drawable send; @BindColor(R.color.cyan) int cyan; @BindDimen(R.dimen.margin) Float generalMargin; // Listeners @OnClick(R.id.submit) public void submit(View view) { // TODO submit data to server... } // bind with butterknife in onCreate @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); ButterKnife.bind(this); // TODO continue } GoalKicker.com Android Notes for Professionals 619 } Section 97.2: Unbinding views in ButterKnife Fragments have a dierent view lifecycle than activities. When binding a fragment in onCreateView, set the views to null in onDestroyView. Butter Knife returns an Unbinder instance when you call bind to do this for you. Call its unbind method in the appropriate lifecycle callback. An example: public class MyFragment extends Fragment { @BindView(R.id.textView) TextView textView; @BindView(R.id.button) Button button; private Unbinder unbinder; @Override public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) { View view = inflater.inflate(R.layout.my_fragment, container, false); unbinder = ButterKnife.bind(this, view); // TODO Use fields... return view; } @Override public void onDestroyView() { super.onDestroyView(); unbinder.unbind(); } } Note: Calling unbind() in onDestroyView() is not required, but recommended as it saves quite a bit of memory if your app has a large backstack. 
Section 97.3: Binding Listeners using ButterKnife OnClick Listener: @OnClick(R.id.login) public void login(View view) { // Additional logic } All arguments to the listener method are optional: @OnClick(R.id.login) public void login() { // Additional logic } Specic type will be automatically casted: @OnClick(R.id.submit) public void sayHi(Button button) { button.setText("Hello!"); } GoalKicker.com Android Notes for Professionals 620 Multiple IDs in a single binding for common event handling: @OnClick({ R.id.door1, R.id.door2, R.id.door3 }) public void pickDoor(DoorView door) { if (door.hasPrizeBehind()) { Toast.makeText(this, "You win!", LENGTH_SHORT).show(); } else { Toast.makeText(this, "Try again", LENGTH_SHORT).show(); } } Custom Views can bind to their own listeners by not specifying an ID: public class CustomButton extends Button { @OnClick public void onClick() { // TODO } } Section 97.4: Android Studio ButterKnife Plugin Android ButterKnife Zelezny Plugin for generating ButterKnife injections from selected layout XMLs in activities/fragments/adapters. Note : Make sure that you make the right click for your_xml_layou(R.layout.your_xml_layou) else the Generate menu will not contain Butterknife injector option. GoalKicker.com Android Notes for Professionals 621 Link : Jetbrains Plugin Android ButterKnife Zelezny Section 97.5: Binding Views using ButterKnife we can annotate elds with @BindView and a view ID for Butter Knife to nd and automatically cast the corresponding view in our layout. Binding Views Binding Views in Activity class ExampleActivity extends Activity { @BindView(R.id.title) TextView title; @BindView(R.id.subtitle) TextView subtitle; @BindView(R.id.footer) TextView footer; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.simple_activity); ButterKnife.bind(this); // TODO Use fields... } } GoalKicker.com Android Notes for Professionals 622 Binding Views in Fragments public class FancyFragment extends Fragment { @BindView(R.id.button1) Button button1; @BindView(R.id.button2) Button button2; private Unbinder unbinder; @Override public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) { View view = inflater.inflate(R.layout.fancy_fragment, container, false); unbinder = ButterKnife.bind(this, view); // TODO Use fields... return view; } // in fragments or non activity bindings we need to unbind the binding when view is about to be destroyed @Override public void onDestroy() { super.onDestroy(); unbinder.unbind(); } } Binding Views in Dialogs We can use ButterKnife.findById to nd views on a View, Activity, or Dialog. It uses generics to infer the return type and automatically performs the cast. View view = LayoutInflater.from(context).inflate(R.layout.thing, null); TextView firstName = ButterKnife.findById(view, R.id.first_name); TextView lastName = ButterKnife.findById(view, R.id.last_name); ImageView photo = ButterKnife.findById(view, R.id.photo); Binding Views in ViewHolder static class ViewHolder { @BindView(R.id.title) TextView name; @BindView(R.id.job_title) TextView jobTitle; public ViewHolder(View view) { ButterKnife.bind(this, view); } } Binding Resources Apart from being useful for binding views, one could also use ButterKnife to bind resources such as those dened within strings.xml, drawables.xml, colors.xml, dimens.xml, etc. 
public class ExampleActivity extends Activity {
    @BindString(R.string.title) String title;
    @BindDrawable(R.drawable.graphic) Drawable graphic;
    @BindColor(R.color.red) int red; // int or ColorStateList field
    @BindDimen(R.dimen.spacer) Float spacer; // int (for pixel size) or float (for exact value) field

    @Override
    public void onCreate(Bundle savedInstanceState) {
        // ...
        ButterKnife.bind(this);
    }
}

Binding View Lists

You can group multiple views into a List or array. This is very helpful when we need to perform one action on multiple views at once.

@BindViews({ R.id.first_name, R.id.middle_name, R.id.last_name })
List<EditText> nameViews;

// The apply method allows you to act on all the views in a list at once.
ButterKnife.apply(nameViews, DISABLE);
ButterKnife.apply(nameViews, ENABLED, false);

// The Action and Setter interfaces allow specifying simple behavior.
static final ButterKnife.Action<View> DISABLE = new ButterKnife.Action<View>() {
    @Override
    public void apply(View view, int index) {
        view.setEnabled(false);
    }
};

static final ButterKnife.Setter<View, Boolean> ENABLED = new ButterKnife.Setter<View, Boolean>() {
    @Override
    public void set(View view, Boolean value, int index) {
        view.setEnabled(value);
    }
};

Optional Bindings

By default, both @Bind and listener bindings are required. An exception is thrown if the target view cannot be found. But if we are not sure whether a view will be there or not, we can add a @Nullable annotation to fields or the @Optional annotation to methods to suppress this behavior and create an optional binding.

@Nullable
@BindView(R.id.might_not_be_there) TextView mightNotBeThere;

@Optional
@OnClick(R.id.maybe_missing)
void onMaybeMissingClicked() {
    // TODO ...
}

Chapter 98: Volley

Volley is an Android HTTP library that was introduced by Google to make networking calls much simpler. By default all the Volley network calls are made asynchronously, handling everything in a background thread and returning the results in the foreground with the use of callbacks. As fetching data over a network is one of the most common tasks performed in any app, the Volley library was made to ease Android app development.

Section 98.1: Using Volley for HTTP requests

Add the gradle dependency in the app-level build.gradle:

compile 'com.android.volley:volley:1.0.0'

Also, add the android.permission.INTERNET permission to your app's manifest.

Create a Volley RequestQueue instance singleton in your Application:

public class InitApplication extends Application {

    private RequestQueue queue;
    private static InitApplication sInstance;

    private static final String TAG = InitApplication.class.getSimpleName();

    @Override
    public void onCreate() {
        super.onCreate();
        sInstance = this;
        Stetho.initializeWithDefaults(this);
    }

    public static synchronized InitApplication getInstance() {
        return sInstance;
    }

    public <T> void addToQueue(Request<T> req, String tag) {
        req.setTag(TextUtils.isEmpty(tag) ?
                TAG : tag);
        getQueue().add(req);
    }

    public <T> void addToQueue(Request<T> req) {
        req.setTag(TAG);
        getQueue().add(req);
    }

    public void cancelPendingRequests(Object tag) {
        if (queue != null) {
            queue.cancelAll(tag);
        }
    }

    public RequestQueue getQueue() {
        if (queue == null) {
            queue = Volley.newRequestQueue(getApplicationContext());
        }
        return queue;
    }
}

Now you can use the Volley instance via the getInstance() method and add a new request to the queue using InitApplication.getInstance().addToQueue(request);

A simple example requesting a JsonObject from a server:

JsonObjectRequest myRequest = new JsonObjectRequest(Method.GET, url, null,
    new Response.Listener<JSONObject>() {
        @Override
        public void onResponse(JSONObject response) {
            Log.d(TAG, response.toString());
        }
    },
    new Response.ErrorListener() {
        @Override
        public void onErrorResponse(VolleyError error) {
            Log.d(TAG, "Error: " + error.getMessage());
        }
    });

myRequest.setRetryPolicy(new DefaultRetryPolicy(
        MY_SOCKET_TIMEOUT_MS,
        DefaultRetryPolicy.DEFAULT_MAX_RETRIES,
        DefaultRetryPolicy.DEFAULT_BACKOFF_MULT));

To handle Volley timeouts you need to use a RetryPolicy. A retry policy is used in case a request cannot be completed due to network failure or some other cases. Volley provides an easy way to implement your RetryPolicy for your requests. By default, Volley sets all socket and connection timeouts to 5 seconds for all requests. RetryPolicy is an interface where you need to implement your logic of how you want to retry a particular request when a timeout occurs.

The DefaultRetryPolicy constructor takes the following three parameters:

initialTimeoutMs - Specifies the socket timeout in milliseconds for every retry attempt.
maxNumRetries - The number of times a retry is attempted.
backoffMultiplier - A multiplier which is used to determine the exponential time set for the socket on every retry attempt.

Section 98.2: Basic StringRequest using GET method

final TextView mTextView = (TextView) findViewById(R.id.text);
...

// Instantiate the RequestQueue.
RequestQueue queue = Volley.newRequestQueue(this);
String url = "http://www.google.com";

// Request a string response from the provided URL.
StringRequest stringRequest = new StringRequest(Request.Method.GET, url,
    new Response.Listener<String>() {
        @Override
        public void onResponse(String response) {
            // Display the first 500 characters of the response string.
            mTextView.setText("Response is: " + response.substring(0, 500));
        }
    }, new Response.ErrorListener() {
        @Override
        public void onErrorResponse(VolleyError error) {
            mTextView.setText("That didn't work!");
        }
    });

// Add the request to the RequestQueue.
queue.add(stringRequest);

Section 98.3: Adding custom design time attributes to NetworkImageView

There are several additional attributes that the Volley NetworkImageView adds to the standard ImageView. However, these attributes can only be set in code. The following is an example of how to make an extension class that will pick up the attributes from your XML layout file and apply them to the NetworkImageView instance for you.
In your res/values directory, add a file named attrs.xml:

<resources>
    <declare-styleable name="MoreNetworkImageView">
        <attr name="defaultImageResId" format="reference"/>
        <attr name="errorImageResId" format="reference"/>
    </declare-styleable>
</resources>

Add a new class file to your project:

package my.namespace;

import android.content.Context;
import android.content.res.TypedArray;
import android.support.annotation.NonNull;
import android.util.AttributeSet;

import com.android.volley.toolbox.NetworkImageView;

public class MoreNetworkImageView extends NetworkImageView {
    public MoreNetworkImageView(@NonNull final Context context) {
        super(context);
    }

    public MoreNetworkImageView(@NonNull final Context context, @NonNull final AttributeSet attrs) {
        this(context, attrs, 0);
    }

    public MoreNetworkImageView(@NonNull final Context context, @NonNull final AttributeSet attrs, final int defStyle) {
        super(context, attrs, defStyle);

        final TypedArray attributes =
                context.obtainStyledAttributes(attrs, R.styleable.MoreNetworkImageView, defStyle, 0);

        // load defaultImageResId from XML
        int defaultImageResId =
                attributes.getResourceId(R.styleable.MoreNetworkImageView_defaultImageResId, 0);
        if (defaultImageResId > 0) {
            setDefaultImageResId(defaultImageResId);
        }

        // load errorImageResId from XML
        int errorImageResId =
                attributes.getResourceId(R.styleable.MoreNetworkImageView_errorImageResId, 0);
        if (errorImageResId > 0) {
            setErrorImageResId(errorImageResId);
        }

        // TypedArrays are shared resources and must be recycled after use.
        attributes.recycle();
    }
}

An example layout file showing the use of the custom attributes:

<?xml version="1.0" encoding="utf-8"?>
<android.support.v7.widget.CardView
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="wrap_content"
    android:layout_height="fill_parent">

    <my.namespace.MoreNetworkImageView
        android:layout_width="64dp"
        android:layout_height="64dp"
        app:errorImageResId="@drawable/error_img"
        app:defaultImageResId="@drawable/default_img"
        tools:defaultImageResId="@drawable/editor_only_default_img"/>
    <!-- Note: The "tools:" prefix does NOT work for custom attributes in Android Studio 2.1 and older,
         so in this example defaultImageResId would show "default_img" in the editor, not the
         "editor_only_default_img" drawable, even though it should if editor-only overrides were
         supported for custom attributes like they are for standard Android properties. -->

</android.support.v7.widget.CardView>

Section 98.4: Adding custom headers to your requests [e.g. for basic auth]

If you need to add custom headers to your Volley requests, you can't do this after initialisation, as the headers are saved in a private variable. Instead, you need to override the getHeaders() method of Request.class as such:

new JsonObjectRequest(REQUEST_METHOD, REQUEST_URL, REQUEST_BODY, RESP_LISTENER, ERR_LISTENER) {
    @Override
    public Map<String, String> getHeaders() throws AuthFailureError {
        HashMap<String, String> customHeaders = new HashMap<>();

        customHeaders.put("KEY_0", "VALUE_0");
        ...
        customHeaders.put("KEY_N", "VALUE_N");

        return customHeaders;
    }
};

Explanation of the parameters:

REQUEST_METHOD - One of the Request.Method.* constants.
REQUEST_URL - The full URL to send your request to.
REQUEST_BODY - A JSONObject containing the POST body to be sent (or null).
RESP_LISTENER - A Response.Listener<?> object, whose onResponse(T data) method is called upon successful completion.
ERR_LISTENER - A Response.ErrorListener object, whose onErrorResponse(VolleyError e) method is called upon an unsuccessful request.

If you want to build a custom request, you can add the headers in it as well:

public class MyCustomRequest extends Request {
    ...
    @Override
    public Map<String, String> getHeaders() throws AuthFailureError {
        HashMap<String, String> customHeaders = new HashMap<>();

        customHeaders.put("KEY_0", "VALUE_0");
        ...
        customHeaders.put("KEY_N", "VALUE_N");

        return customHeaders;
    }
    ...
}

Section 98.5: Remote server authentication using StringRequest through POST method

For the sake of this example, let us assume that we have a server for handling the POST requests that we will be making from our Android app:

// User input data.
String email = "<EMAIL>";
String password = "123";

// Our server URL for handling POST requests.
String URL = "http://my.server.com/login.php";

// When we create a StringRequest (or a JSONRequest) for sending
// data with Volley, we specify the Request Method as POST, and
// the URL that will be receiving our data.
StringRequest stringRequest = new StringRequest(Request.Method.POST, URL,
    new Response.Listener<String>() {
        @Override
        public void onResponse(String response) {
            // At this point, Volley has sent the data to your URL
            // and has a response back from it. I'm going to assume
            // that the server sends an "OK" string.
            if (response.equals("OK")) {
                // Do login stuff.
            } else {
                // So the server didn't return an "OK" response.
                // Depending on what you did to handle errors on your
                // server, you can decide what action to take here.
            }
        }
    },
    new Response.ErrorListener() {
        @Override
        public void onErrorResponse(VolleyError error) {
            // This is when errors related to Volley happen.
            // It's up to you what to do if that should happen, but
            // it's usually not a good idea to be too clear as to
            // what happened here to your users.
        }
    }) {
    @Override
    protected Map<String, String> getParams() throws AuthFailureError {
        // Here is where we tell Volley what it should send in
        // our POST request. For this example, we want to send
        // both the email and the password.

        // We will need key ids for our data, so our server can know
        // what is what.
        String key_email = "email";
        String key_password = "password";

        Map<String, String> map = new HashMap<String, String>();
        // map.put(key, value);
        map.put(key_email, email);
        map.put(key_password, password);
        return map;
    }
};

// This is a policy that we need to specify to tell Volley what
// to do if it gets a timeout, how many times to retry, etc.
stringRequest.setRetryPolicy(new RetryPolicy() {
    @Override
    public int getCurrentTimeout() {
        // Here goes the timeout.
        // The number is in milliseconds, 5000 is usually enough,
        // but you can raise or lower that number to fit your needs.
        return 5000;
    }

    @Override
    public int getCurrentRetryCount() {
        // The maximum number of attempts.
        // Again, the number can be anything you need.
        return 5;
    }

    @Override
    public void retry(VolleyError error) throws VolleyError {
        // Here you could check if the retry count has gotten
        // to the maximum number, and if so, send a VolleyError
        // message or similar. For the sake of the example, I'll
        // show a Toast.
        Toast.makeText(getContext(), error.toString(), Toast.LENGTH_LONG).show();
    }
});

// And finally, we create a Volley Queue. For this example, I'm using
// getContext(), because I was working with a Fragment. But context could
// be "this", "getContext()", etc.
RequestQueue requestQueue = Volley.newRequestQueue(getContext());
requestQueue.add(stringRequest);

} else {
    // This else branch belongs to an enclosing validation check (not shown here).
    // If, for example, the user inputs an email that is not currently
    // on your remote DB, here's where we can inform the user.
    Toast.makeText(getContext(), "Wrong email", Toast.LENGTH_LONG).show();
}

Section 98.6: Cancel a request

// Assume a Request and RequestQueue have already been initialized somewhere above.
public static final String TAG = "SomeTag";

// Set the tag on the request.
request.setTag(TAG);

// Add the request to the RequestQueue.
mRequestQueue.add(request);

// To cancel this specific request:
request.cancel();

// ... then, in some future life cycle event, for example in onStop(),
// to cancel all requests with the specified tag in the RequestQueue:
mRequestQueue.cancelAll(TAG);

Section 98.7: Request JSON

final TextView mTxtDisplay = (TextView) findViewById(R.id.txtDisplay);
ImageView mImageView;
String url = "http://ip.jsontest.com/";

final JsonObjectRequest jsObjRequest = new JsonObjectRequest(Request.Method.GET, url, null,
    new Response.Listener<JSONObject>() {
        @Override
        public void onResponse(JSONObject response) {
            mTxtDisplay.setText("Response: " + response.toString());
        }
    },
    new Response.ErrorListener() {
        @Override
        public void onErrorResponse(VolleyError error) {
            // ...
        }
    });

requestQueue.add(jsObjRequest);

Section 98.8: Use JSONArray as request body

The default requests integrated in Volley don't allow passing a JSONArray as the request body in a POST request. Instead, you can only pass a JSON object as a parameter. However, instead of passing a JSON object as a parameter to the request constructor, you need to override the getBody() method of Request.class. You should pass null as the third parameter as well:

JSONArray requestBody = new JSONArray();

new JsonObjectRequest(Request.Method.POST, REQUEST_URL, null, RESP_LISTENER, ERR_LISTENER) {
    @Override
    public byte[] getBody() {
        try {
            return requestBody.toString().getBytes(PROTOCOL_CHARSET);
        } catch (UnsupportedEncodingException uee) {
            // error handling
            return null;
        }
    }
};

Explanation of the parameters:

REQUEST_URL - The full URL to send your request to.
RESP_LISTENER - A Response.Listener<?> object, whose onResponse(T data) method is called upon successful completion.
ERR_LISTENER - A Response.ErrorListener object, whose onErrorResponse(VolleyError e) method is called upon an unsuccessful request.
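PROTOCOL_CHARSET is referenced but not defined in the snippet above. A minimal sketch of how the pieces could fit together, assuming a UTF-8 charset (the same one Volley's own JsonRequest uses internally), an existing RequestQueue named queue, and a hypothetical endpoint URL:

private static final String PROTOCOL_CHARSET = "utf-8";

final JSONArray requestBody = new JSONArray();
requestBody.put("first");
requestBody.put("second");

JsonObjectRequest arrayBodyRequest = new JsonObjectRequest(
        Request.Method.POST, "http://my.server.com/items", null,
        new Response.Listener<JSONObject>() {
            @Override
            public void onResponse(JSONObject response) {
                Log.d("ArrayBody", "Server replied: " + response);
            }
        },
        new Response.ErrorListener() {
            @Override
            public void onErrorResponse(VolleyError error) {
                Log.d("ArrayBody", "Request failed: " + error);
            }
        }) {
    @Override
    public byte[] getBody() {
        try {
            // Send the JSONArray instead of the (null) JSONObject passed to the constructor.
            return requestBody.toString().getBytes(PROTOCOL_CHARSET);
        } catch (UnsupportedEncodingException uee) {
            return null;
        }
    }
};
queue.add(arrayBodyRequest);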
Section 98.9: Boolean variable response from server with JSON request in Volley

You can use a custom request class like the one below:

public class BooleanRequest extends Request<Boolean> {

    private static final String PROTOCOL_CHARSET = "utf-8";
    private final String PROTOCOL_CONTENT_TYPE =
            String.format("application/json; charset=%s", PROTOCOL_CHARSET);

    private final Response.Listener<Boolean> mListener;
    private final Response.ErrorListener mErrorListener;
    private final String mRequestBody;

    public BooleanRequest(int method, String url, String requestBody,
                          Response.Listener<Boolean> listener, Response.ErrorListener errorListener) {
        super(method, url, errorListener);
        this.mListener = listener;
        this.mErrorListener = errorListener;
        this.mRequestBody = requestBody;
    }

    @Override
    protected Response<Boolean> parseNetworkResponse(NetworkResponse response) {
        Boolean parsed;
        try {
            parsed = Boolean.valueOf(new String(response.data, HttpHeaderParser.parseCharset(response.headers)));
        } catch (UnsupportedEncodingException e) {
            parsed = Boolean.valueOf(new String(response.data));
        }
        return Response.success(parsed, HttpHeaderParser.parseCacheHeaders(response));
    }

    @Override
    protected VolleyError parseNetworkError(VolleyError volleyError) {
        return super.parseNetworkError(volleyError);
    }

    @Override
    protected void deliverResponse(Boolean response) {
        mListener.onResponse(response);
    }

    @Override
    public void deliverError(VolleyError error) {
        mErrorListener.onErrorResponse(error);
    }

    @Override
    public String getBodyContentType() {
        return PROTOCOL_CONTENT_TYPE;
    }

    @Override
    public byte[] getBody() throws AuthFailureError {
        try {
            return mRequestBody == null ? null : mRequestBody.getBytes(PROTOCOL_CHARSET);
        } catch (UnsupportedEncodingException uee) {
            VolleyLog.wtf("Unsupported Encoding while trying to get the bytes of %s using %s",
                    mRequestBody, PROTOCOL_CHARSET);
            return null;
        }
    }
}

Use this with your activity:

try {
    JSONObject jsonBody = new JSONObject();
    jsonBody.put("Title", "Android Demo");
    jsonBody.put("Author", "BNK");
    jsonBody.put("Date", "2015/08/28");
    String requestBody = jsonBody.toString();

    BooleanRequest booleanRequest = new BooleanRequest(0, url, requestBody,
        new Response.Listener<Boolean>() {
            @Override
            public void onResponse(Boolean response) {
                Toast.makeText(mContext, String.valueOf(response), Toast.LENGTH_SHORT).show();
            }
        },
        new Response.ErrorListener() {
            @Override
            public void onErrorResponse(VolleyError error) {
                Toast.makeText(mContext, error.toString(), Toast.LENGTH_SHORT).show();
            }
        });

    // Add the request to the RequestQueue.
    queue.add(booleanRequest);
} catch (JSONException e) {
    e.printStackTrace();
}

Section 98.10: Helper Class for Handling Volley Errors

public class VolleyErrorHelper {

    /**
     * Returns the appropriate message to be displayed to the user
     * against the specified error object.
     *
     * @param error
     * @param context
     * @return
     */
    public static String getMessage(Object error, Context context) {
        if (error instanceof TimeoutError) {
            return context.getResources().getString(R.string.timeout);
        } else if (isServerProblem(error)) {
            return handleServerError(error, context);
        } else if (isNetworkProblem(error)) {
            return context.getResources().getString(R.string.nointernet);
        }
        return context.getResources().getString(R.string.generic_error);
    }

    private static String handleServerError(Object error, Context context) {
        VolleyError er = (VolleyError) error;
        NetworkResponse response = er.networkResponse;
        if (response != null) {
            switch (response.statusCode) {
                case 404:
                case 422:
                case 401:
                    try {
                        // The server might return an error like this: { "error": "Some error occurred" }
                        // Use Gson to parse the result.
                        HashMap<String, String> result = new Gson().fromJson(new String(response.data),
                                new TypeToken<Map<String, String>>() {
                                }.getType());

                        if (result != null && result.containsKey("error")) {
                            return result.get("error");
                        }
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                    // invalid request
                    return ((VolleyError) error).getMessage();
                default:
                    return context.getResources().getString(R.string.timeout);
            }
        }
        return context.getResources().getString(R.string.generic_error);
    }

    private static boolean isServerProblem(Object error) {
        return (error instanceof ServerError || error instanceof AuthFailureError);
    }

    private static boolean isNetworkProblem(Object error) {
        return (error instanceof NetworkError || error instanceof NoConnectionError);
    }
}

Chapter 99: Date and Time Pickers

Section 99.1: Date Picker Dialog

It is a dialog which prompts the user to select a date using a DatePicker. The dialog requires a context and the initial year, month and day to show the dialog with a starting date. When the user selects the date, it calls back via DatePickerDialog.OnDateSetListener.

public void showDatePicker(Context context, int initialYear, int initialMonth, int initialDay) {
    DatePickerDialog datePickerDialog = new DatePickerDialog(context,
        new DatePickerDialog.OnDateSetListener() {
            @Override
            public void onDateSet(DatePicker datePicker, int year, int month, int day) {
                // This condition is necessary to work properly on all Android versions.
                if (datePicker.isShown()) {
                    // You now have the selected year, month and day.
                }
            }
        }, initialYear, initialMonth, initialDay);

    // Call show() to simply show the dialog.
    datePickerDialog.show();
}

Please note that month is an int starting from 0 for January to 11 for December.

Section 99.2: Material DatePicker

Add the below dependency to your build.gradle file in the dependencies section (this is an unofficial library for the date picker):

compile 'com.wdullaer:materialdatetimepicker:2.3.0'

Now we have to open the DatePicker on a Button click event, so create one Button in your XML file like below:

<Button
    android:id="@+id/dialog_bt_date"
    android:layout_below="@+id/resetButton"
    android:layout_width="wrap_content"
    android:layout_height="40dp"
    android:textColor="#FF000000"
    android:gravity="center"
    android:text="DATE"/>

and use it in your MainActivity this way:
public class MainActivity extends AppCompatActivity implements DatePickerDialog.OnDateSetListener {

    Button button;
    Calendar calendar;
    DatePickerDialog datePickerDialog;
    int Year, Month, Day;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        calendar = Calendar.getInstance();
        Year = calendar.get(Calendar.YEAR);
        Month = calendar.get(Calendar.MONTH);
        Day = calendar.get(Calendar.DAY_OF_MONTH);

        Button dialog_bt_date = (Button) findViewById(R.id.dialog_bt_date);
        dialog_bt_date.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View view) {
                datePickerDialog = DatePickerDialog.newInstance(MainActivity.this, Year, Month, Day);

                datePickerDialog.setThemeDark(false);
                datePickerDialog.showYearPickerFirst(false);
                datePickerDialog.setAccentColor(Color.parseColor("#0072BA"));
                datePickerDialog.setTitle("Select Date From DatePickerDialog");
                datePickerDialog.show(getFragmentManager(), "DatePickerDialog");
            }
        });
    }

    @Override
    public void onDateSet(DatePickerDialog view, int Year, int Month, int Day) {
        String date = "Selected Date : " + Day + "-" + Month + "-" + Year;
        Toast.makeText(MainActivity.this, date, Toast.LENGTH_LONG).show();
    }

    @Override
    public boolean onCreateOptionsMenu(Menu menu) {
        getMenuInflater().inflate(R.menu.abc_main_menu, menu);
        return true;
    }
}

Output: (screenshots of the resulting DatePickerDialog)

Chapter 100: Localized Date/Time in Android

Section 100.1: Custom localized date format with DateUtils.formatDateTime()

DateUtils.formatDateTime() allows you to supply a time, and based on the flags you provide, it creates a localized datetime string. The flags allow you to specify whether to include specific elements (like the weekday).

Date date = new Date();
String localizedDate = DateUtils.formatDateTime(context, date.getTime(),
        DateUtils.FORMAT_SHOW_DATE | DateUtils.FORMAT_SHOW_WEEKDAY);

formatDateTime() automatically takes care of proper date formats.

Section 100.2: Standard date/time formatting in Android

Format a date:

Date date = new Date();
DateFormat df = DateFormat.getDateInstance(DateFormat.MEDIUM);
String localizedDate = df.format(date);

Format a date and time. The date is in short format, the time is in long format:

Date date = new Date();
DateFormat df = DateFormat.getDateTimeInstance(DateFormat.SHORT, DateFormat.LONG);
String localizedDate = df.format(date);

Section 100.3: Fully customized date/time

Date date = new Date();
DateFormat df = new SimpleDateFormat("HH:mm", Locale.US);
String localizedDate = df.format(date);

Commonly used patterns:

HH: hour (0-23)
hh: hour (1-12)
a: AM/PM marker
mm: minute (0-59)
ss: second
dd: day in month (1-31)
MM: month
yyyy: year

Chapter 101: Time Utils

Section 101.1: To check within a period

This example will help to verify whether a given time is within a period or not.
To check whether the time is today, we can use the DateUtils class:

boolean isToday = DateUtils.isToday(timeInMillis);

To check whether the time is within a week:

private static boolean isWithinWeek(final long millis) {
    return System.currentTimeMillis() - millis <= (DateUtils.WEEK_IN_MILLIS - DateUtils.DAY_IN_MILLIS);
}

To check whether the time is within a year:

private static boolean isWithinYear(final long millis) {
    return System.currentTimeMillis() - millis <= DateUtils.YEAR_IN_MILLIS;
}

To check whether the time is within a given number of days, including today:

public static boolean isWithinDay(long timeInMillis, int day) {
    long diff = System.currentTimeMillis() - timeInMillis;
    float dayCount = (float) (diff / DateUtils.DAY_IN_MILLIS);
    return dayCount < day;
}

Note: DateUtils is android.text.format.DateUtils

Section 101.2: Convert Date Format into Milliseconds

To convert a date in dd/MM/yyyy format into milliseconds, call this function with the date as a String:

public long getMilliFromDate(String dateFormat) {
    Date date = new Date();
    SimpleDateFormat formatter = new SimpleDateFormat("dd/MM/yyyy");
    try {
        date = formatter.parse(dateFormat);
    } catch (ParseException e) {
        e.printStackTrace();
    }
    System.out.println("Today is " + date);
    return date.getTime();
}

This method converts milliseconds to a timestamp-format date:

public String getTimeStamp(long timeInMillis) {
    String date = null;
    SimpleDateFormat formatter = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss"); // modify format
    date = formatter.format(new Date(timeInMillis));
    System.out.println("Today is " + date);
    return date;
}

This method converts a given day, month and year into milliseconds. It is very helpful when using a TimePicker or DatePicker:

public static long getTimeInMillis(int day, int month, int year) {
    Calendar calendar = Calendar.getInstance();
    calendar.set(year, month, day);
    return calendar.getTimeInMillis();
}

This method returns a formatted date from milliseconds:

public static String getNormalDate(long timeInMillis) {
    String date = null;
    SimpleDateFormat formatter = new SimpleDateFormat("dd/MM/yyyy");
    date = formatter.format(timeInMillis);
    System.out.println("Today is " + date);
    return date;
}

This method returns the current date:

public static String getCurrentDate() {
    Calendar c = Calendar.getInstance();
    System.out.println("Current time => " + c.getTime());
    SimpleDateFormat df = new SimpleDateFormat("dd/MM/yyyy");
    String formattedDate = df.format(c.getTime());
    return formattedDate;
}

Note: Java provides a number of date format patterns; see the Date Pattern documentation.

Section 101.3: GetCurrentRealTime

This calculates the current device time, adding/subtracting the difference between the real and the device time:

public static Calendar getCurrentRealTime() {
    // networkTime is assumed to be obtained elsewhere (e.g. from an NTP server).
    long bootTime = networkTime - SystemClock.elapsedRealtime();
    Calendar calInstance = Calendar.getInstance();
    calInstance.setTimeZone(getUTCTimeZone());
    long currentDeviceTime = bootTime + SystemClock.elapsedRealtime();
    calInstance.setTimeInMillis(currentDeviceTime);
    return calInstance;
}

Get the UTC-based time zone:

public static TimeZone getUTCTimeZone() {
    return TimeZone.getTimeZone("GMT");
}

Chapter 102: In-app Billing

Section 102.1: Consumable In-app Purchases

Consumable managed products are products that can be bought multiple times, such as in-game currency, game lives, power-ups, etc.

In this example, we are going to implement 4 different consumable managed products: "item1", "item2", "item3", "item4".

Steps in summary:
1. Add the In-app Billing library to your project (AIDL file).
2. Add the required permission in the AndroidManifest.xml file.
3. Deploy a signed APK to the Google Developers Console.
4. Define your products.
5. Implement the code.
6. Test In-app Billing (optional).

Step 1: First of all, we will need to add the AIDL file to your project, as clearly explained in the Google documentation here.

IInAppBillingService.aidl is an Android Interface Definition Language (AIDL) file that defines the interface to the In-app Billing Version 3 service. You will use this interface to make billing requests by invoking IPC method calls.

Step 2: After adding the AIDL file, add the BILLING permission in AndroidManifest.xml:

<!-- Required permission for implementing In-app Billing -->
<uses-permission android:name="com.android.vending.BILLING" />

Step 3: Generate a signed APK, and upload it to the Google Developers Console. This is required so that we can start defining our in-app products there.

Step 4: Define all your products with different productIDs, and set a price for each one of them. There are 2 types of products (Managed Products and Subscriptions). As we already said, we are going to implement 4 different consumable managed products: "item1", "item2", "item3", "item4".

Step 5: After doing all the steps above, you are now ready to start implementing the code itself in your own activity.

MainActivity:

public class MainActivity extends Activity {

    IInAppBillingService inAppBillingService;
    ServiceConnection serviceConnection;

    // productID for each item. You should define them in the Google Developers Console.
    final String item1 = "item1";
    final String item2 = "item2";
    final String item3 = "item3";
    final String item4 = "item4";

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        // Instantiate the views according to your layout file.
        final Button buy1 = (Button) findViewById(R.id.buy1);
        final Button buy2 = (Button) findViewById(R.id.buy2);
        final Button buy3 = (Button) findViewById(R.id.buy3);
        final Button buy4 = (Button) findViewById(R.id.buy4);

        // setOnClickListener() for each button.
        // buyItem() here is the method that we will implement to launch the PurchaseFlow.
        buy1.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View view) {
                buyItem(item1);
            }
        });
        buy2.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View view) {
                buyItem(item2);
            }
        });
        buy3.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View view) {
                buyItem(item3);
            }
        });
        buy4.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View view) {
                buyItem(item4);
            }
        });

        // Attach the service connection.
        serviceConnection = new ServiceConnection() {
            @Override
            public void onServiceDisconnected(ComponentName name) {
                inAppBillingService = null;
            }

            @Override
            public void onServiceConnected(ComponentName name, IBinder service) {
                inAppBillingService = IInAppBillingService.Stub.asInterface(service);
            }
        };

        // Bind the service.
        Intent serviceIntent = new Intent("com.android.vending.billing.InAppBillingService.BIND");
        serviceIntent.setPackage("com.android.vending");
        bindService(serviceIntent, serviceConnection, BIND_AUTO_CREATE);

        // Get the price of each product, and set the price as text to
        // each button so that the user knows the price of each item.
        if (inAppBillingService != null) {
            // Attention: You need to create a new thread here because
            // getSkuDetails() triggers a network request, which can
            // cause lag to your app if it was called from the main thread.
            Thread thread = new Thread(new Runnable() {
                @Override
                public void run() {
                    ArrayList<String> skuList = new ArrayList<>();
                    skuList.add(item1);
                    skuList.add(item2);
                    skuList.add(item3);
                    skuList.add(item4);
                    Bundle querySkus = new Bundle();
                    querySkus.putStringArrayList("ITEM_ID_LIST", skuList);
                    try {
                        Bundle skuDetails = inAppBillingService.getSkuDetails(3, getPackageName(), "inapp", querySkus);
                        int response = skuDetails.getInt("RESPONSE_CODE");
                        if (response == 0) {
                            ArrayList<String> responseList = skuDetails.getStringArrayList("DETAILS_LIST");
                            for (String thisResponse : responseList) {
                                JSONObject object = new JSONObject(thisResponse);
                                String sku = object.getString("productId");
                                String price = object.getString("price");
                                switch (sku) {
                                    case item1:
                                        buy1.setText(price);
                                        break;
                                    case item2:
                                        buy2.setText(price);
                                        break;
                                    case item3:
                                        buy3.setText(price);
                                        break;
                                    case item4:
                                        buy4.setText(price);
                                        break;
                                }
                            }
                        }
                    } catch (RemoteException | JSONException e) {
                        e.printStackTrace();
                    }
                }
            });
            thread.start();
        }
    }

    // Launch the PurchaseFlow passing the productID of the item the user wants to buy as a parameter.
    private void buyItem(String productID) {
        if (inAppBillingService != null) {
            try {
                Bundle buyIntentBundle = inAppBillingService.getBuyIntent(3, getPackageName(),
                        productID, "inapp", "bGoa+V7g/yqDXvKRqq+JTFn4uQZbPiQJo4pf9RzJ");
                PendingIntent pendingIntent = buyIntentBundle.getParcelable("BUY_INTENT");
                startIntentSenderForResult(pendingIntent.getIntentSender(), 1003, new Intent(), 0, 0, 0);
            } catch (RemoteException | IntentSender.SendIntentException e) {
                e.printStackTrace();
            }
        }
    }

    // Unbind the service in onDestroy(). If you don't unbind, the open
    // service connection could cause your device's performance to degrade.
    @Override
    public void onDestroy() {
        super.onDestroy();
        if (inAppBillingService != null) {
            unbindService(serviceConnection);
        }
    }

    // Check here if the in-app purchase was successful or not. If it was successful,
    // then consume the product, and let the app make the required changes.
    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        super.onActivityResult(requestCode, resultCode, data);
        if (requestCode == 1003 && resultCode == RESULT_OK) {
            final String purchaseData = data.getStringExtra("INAPP_PURCHASE_DATA");

            // Attention: You need to create a new thread here because
            // consumePurchase() triggers a network request, which can
            // cause lag to your app if it was called from the main thread.
            Thread thread = new Thread(new Runnable() {
                @Override
                public void run() {
                    try {
                        JSONObject jo = new JSONObject(purchaseData);
                        // Get the productID of the purchased item.
                        String sku = jo.getString("productId");
                        String productName = null;

                        // increaseCoins() here is a method used as an example in a game to
                        // increase the in-game currency if the purchase was successful.
                        // You should implement your own code here, and let the app apply
                        // the required changes after the purchase was successful.
                        switch (sku) {
                            case item1:
                                productName = "Item 1";
                                increaseCoins(2000);
                                break;
                            case item2:
                                productName = "Item 2";
                                increaseCoins(8000);
                                break;
                            case item3:
                                productName = "Item 3";
                                increaseCoins(18000);
                                break;
                            case item4:
                                productName = "Item 4";
                                increaseCoins(30000);
                                break;
                        }

                        // Consume the purchase so that the user is able to purchase the same product again.
                        inAppBillingService.consumePurchase(3, getPackageName(), jo.getString("purchaseToken"));
                        Toast.makeText(MainActivity.this, productName + " is successfully purchased. Excellent choice, master!", Toast.LENGTH_LONG).show();
                    } catch (JSONException | RemoteException e) {
                        Toast.makeText(MainActivity.this, "Failed to parse purchase data.", Toast.LENGTH_LONG).show();
                        e.printStackTrace();
                    }
                }
            });
            thread.start();
        }
    }
}

Step 6: After implementing the code, you can test it by deploying your APK to the beta/alpha channel and letting other users test the code for you. However, real in-app purchases can't be made while in testing mode. You have to publish your app/game to the Play Store first so that all the products are fully activated.

More info on testing In-app Billing can be found here.

Section 102.2: (Third party) In-App v3 Library

Step 1: First of all, follow these two steps to add in-app functionality:

1. Add the library using:

repositories {
    mavenCentral()
}
dependencies {
    compile 'com.anjlab.android.iab.v3:library:1.0.+'
}

2. Add the permission in the manifest file:

<uses-permission android:name="com.android.vending.BILLING" />

Step 2: Initialise your billing processor:

BillingProcessor bp = new BillingProcessor(this, "YOUR LICENSE KEY FROM GOOGLE PLAY CONSOLE HERE", this);

and implement the billing handler BillingProcessor.IBillingHandler, which contains 4 methods:

a. onBillingInitialized();
b. onProductPurchased(String productId, TransactionDetails details): This is where you need to handle actions to be performed after a successful purchase.
c. onBillingError(int errorCode, Throwable error): Handle any error that occurred during the purchase process.
d. onPurchaseHistoryRestored(): For restoring in-app purchases.

Step 3: How to purchase a product.

To purchase a managed product:

bp.purchase(YOUR_ACTIVITY, "YOUR PRODUCT ID FROM GOOGLE PLAY CONSOLE HERE");

And to purchase a subscription:

bp.subscribe(YOUR_ACTIVITY, "YOUR SUBSCRIPTION ID FROM GOOGLE PLAY CONSOLE HERE");

Step 4: Consuming a product.

To consume a product, simply call the consumePurchase method:

bp.consumePurchase("YOUR PRODUCT ID FROM GOOGLE PLAY CONSOLE HERE");

For other methods related to in-app billing, visit the project's GitHub page.

Chapter 103: FloatingActionButton

Parameter - Detail:
android.support.design:elevation - Elevation value for the FAB. May be a reference to another resource, in the form "@[+][package:]type/name", or a theme attribute in the form "?[package:]type/name".
android.support.design:fabSize - Size for the FAB.
android.support.design:rippleColor - Ripple color for the FAB.
android.support.design:useCompatPadding - Enable compat padding.

A floating action button (FAB) is used for a special type of promoted action. It animates onto the screen as an expanding piece of material, by default. The icon within it may be animated, and a FAB may move differently than other UI elements because of its relative importance. A floating action button represents the primary action in an application, which can simply trigger an action or navigate somewhere.
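A minimal layout sketch showing the attributes from the table above in use; the icon (@drawable/ic_add) and color (@color/fab_ripple) are placeholder resources, and in layout files the attributes are referenced through the app: namespace:

<android.support.design.widget.FloatingActionButton
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:src="@drawable/ic_add"
    app:elevation="6dp"
    app:fabSize="normal"
    app:rippleColor="@color/fab_ripple"
    app:useCompatPadding="true" />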
Section 103.1: How to add the FAB to the layout

To use a FloatingActionButton, just add the dependency in the build.gradle file as described in the remarks section. Then add it to the layout:

<android.support.design.widget.FloatingActionButton
    android:id="@+id/fab"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:layout_gravity="bottom|end"
    android:layout_margin="@dimen/fab_margin"
    android:src="@drawable/my_icon" />

An example: (screenshot of a FAB showing a + icon)

Color

The background color of this view defaults to your theme's colorAccent.

In the above image, if the src only points to the + icon (by default 24x24 dp), to get the background color of the full circle you can use:

app:backgroundTint="@color/your_colour"

If you wish to change the color in code you can use:

myFab.setBackgroundTintList(ColorStateList.valueOf(your color in int));

If you want to change the FAB's color in the pressed state, use:

mFab.setRippleColor(your color in int);

Positioning

It is recommended to place the FAB a minimum of 16dp from the edge on mobile, and a minimum of 24dp on tablet/desktop.

Note: Once you set an src expecting to cover the full area of the FloatingActionButton, make sure you have the right size of that image to get the best result.

Default circle size: 56 x 56dp
Mini circle size: 40 x 40dp

If you only want to change the interior icon, use a 24 x 24dp icon for the default size.

Section 103.2: Show and Hide FloatingActionButton on Swipe

To show and hide a FloatingActionButton with the default animation, just call the methods show() and hide(). It's good practice to keep a FloatingActionButton in the Activity layout instead of putting it in a Fragment; this allows the default animations to work when showing and hiding.

Here is an example with a ViewPager:

Three tabs
Show the FloatingActionButton for the first and third tab
Hide the FloatingActionButton on the middle tab

public class MainActivity extends AppCompatActivity {

    FloatingActionButton fab;
    ViewPager viewPager;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        fab = (FloatingActionButton) findViewById(R.id.fab);
        viewPager = (ViewPager) findViewById(R.id.viewpager);

        // ... set up ViewPager ...

        viewPager.addOnPageChangeListener(new ViewPager.OnPageChangeListener() {
            @Override
            public void onPageSelected(int position) {
                if (position == 0) {
                    fab.setImageResource(android.R.drawable.ic_dialog_email);
                    fab.show();
                } else if (position == 2) {
Note that this only works with a CoordinatorLayout in conjunction with inner Views that support Nested Scrolling, such as RecyclerView and NestedScrollView. This ScrollAwareFABBehavior class comes from the Android Guides on Codepath (cc-wiki with attribution required) public class ScrollAwareFABBehavior extends FloatingActionButton.Behavior { public ScrollAwareFABBehavior(Context context, AttributeSet attrs) { super(); } @Override public boolean onStartNestedScroll(final CoordinatorLayout coordinatorLayout, final FloatingActionButton child, final View directTargetChild, final View target, final int nestedScrollAxes) { // Ensure we react to vertical scrolling return nestedScrollAxes == ViewCompat.SCROLL_AXIS_VERTICAL || super.onStartNestedScroll(coordinatorLayout, child, directTargetChild, target, nestedScrollAxes); } @Override public void onNestedScroll(final CoordinatorLayout coordinatorLayout, final FloatingActionButton child, final View target, final int dxConsumed, final int dyConsumed, final int dxUnconsumed, final int dyUnconsumed) { super.onNestedScroll(coordinatorLayout, child, target, dxConsumed, dyConsumed, dxUnconsumed, dyUnconsumed); if (dyConsumed > 0 && child.getVisibility() == View.VISIBLE) { // User scrolled down and the FAB is currently visible -> hide the FAB child.hide(); } else if (dyConsumed < 0 && child.getVisibility() != View.VISIBLE) { // User scrolled up and the FAB is currently not visible -> show the FAB child.show(); } } } In the FloatingActionButton layout xml, specify the app:layout_behavior with the fully-qualied-class-name of ScrollAwareFABBehavior: app:layout_behavior="com.example.app.ScrollAwareFABBehavior" For example with this layout: <android.support.design.widget.CoordinatorLayout android:id="@+id/main_layout" xmlns:android="http://schemas.android.com/apk/res/android" xmlns:app="http://schemas.android.com/apk/res-auto" xmlns:tools="http://schemas.android.com/tools" android:layout_width="match_parent" android:layout_height="match_parent" tools:context=".MainActivity"> GoalKicker.com Android Notes for Professionals 650 <android.support.design.widget.AppBarLayout android:id="@+id/appBarLayout" android:layout_width="match_parent" android:layout_height="wrap_content" app:elevation="6dp"> <android.support.v7.widget.Toolbar android:id="@+id/toolbar" android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_alignParentTop="true" android:background="?attr/colorPrimary" android:minHeight="?attr/actionBarSize" android:theme="@style/ThemeOverlay.AppCompat.Dark.ActionBar" app:popupTheme="@style/ThemeOverlay.AppCompat.Light" app:elevation="0dp" app:layout_scrollFlags="scroll|enterAlways" /> <android.support.design.widget.TabLayout android:id="@+id/tab_layout" app:tabMode="fixed" android:layout_below="@+id/toolbar" android:layout_width="match_parent" android:layout_height="wrap_content" android:background="?attr/colorPrimary" app:elevation="0dp" app:tabTextColor="#d3d3d3" android:minHeight="?attr/actionBarSize" /> </android.support.design.widget.AppBarLayout> <android.support.v4.view.ViewPager android:id="@+id/viewpager" android:layout_below="@+id/tab_layout" android:layout_width="match_parent" android:layout_height="wrap_content" app:layout_behavior="@string/appbar_scrolling_view_behavior" /> <android.support.design.widget.FloatingActionButton android:id="@+id/fab" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_gravity="bottom|end" app:layout_behavior="com.example.app.ScrollAwareFABBehavior" 
android:layout_margin="@dimen/fab_margin" android:src="@android:drawable/ic_dialog_email" /> </android.support.design.widget.CoordinatorLayout> Here is the result: GoalKicker.com Android Notes for Professionals 651 Section 103.4: Setting behaviour of FloatingActionButton You can set the behavior of the FAB in XML. For example: <android.support.design.widget.FloatingActionButton app:layout_behavior=".MyBehavior" /> Or you can set programmatically using: CoordinatorLayout.LayoutParams p = (CoordinatorLayout.LayoutParams) fab.getLayoutParams(); p.setBehavior(xxxx); fab.setLayoutParams(p); GoalKicker.com Android Notes for Professionals 652 Chapter 104: Touch Events Section 104.1: How to vary between child and parent view group touch events 1. The onTouchEvents() for nested view groups can be managed by the boolean onInterceptTouchEvent. The default value for the OnInterceptTouchEvent is false. The parent's onTouchEvent is received before the child's. If the OnInterceptTouchEvent returns false, it sends the motion event down the chain to the child's OnTouchEvent handler. If it returns true the parent's will handle the touch event. However there may be instances when we want some child elements to manage OnTouchEvents and some to be managed by the parent view (or possibly the parent of the parent). This can be managed in more than one way. 2. One way a child element can be protected from the parent's OnInterceptTouchEvent is by implementing the requestDisallowInterceptTouchEvent. public void requestDisallowInterceptTouchEvent (boolean disallowIntercept) This prevents any of the parent views from managing the OnTouchEvent for this element, if the element has event handlers enabled. If the OnInterceptTouchEvent is false, the child element's OnTouchEvent will be evaluated. If you have a methods within the child elements handling the various touch events, any related event handlers that are disabled will return the OnTouchEvent to the parent. This answer: A visualisation of how the propagation of touch events passes through: parent -> child|parent -> child|parent -> child views. GoalKicker.com Android Notes for Professionals 653 Courtesy from here 4. Another way is returning varying values from the OnInterceptTouchEvent for the parent. This example taken from Managing Touch Events in a ViewGroup and demonstrates how to intercept the child's OnTouchEvent when the user is scrolling. 4a. @Override public boolean onInterceptTouchEvent(MotionEvent ev) { /* * This method JUST determines whether we want to intercept the motion. * If we return true, onTouchEvent will be called and we do the actual * scrolling there. */ final int action = MotionEventCompat.getActionMasked(ev); // Always handle the case of the touch gesture being complete. if (action == MotionEvent.ACTION_CANCEL || action == MotionEvent.ACTION_UP) { // Release the scroll. mIsScrolling = false; return false; // Do not intercept touch event, let the child handle it GoalKicker.com Android Notes for Professionals 654 } switch (action) { case MotionEvent.ACTION_MOVE: { if (mIsScrolling) { // We're currently scrolling, so yes, intercept the // touch event! return true; } // If the user has dragged her finger horizontally more than // the touch slop, start the scroll // left as an exercise for the reader final int xDiff = calculateDistanceX(ev); // Touch slop should be calculated using ViewConfiguration // constants. if (xDiff > mTouchSlop) { // Start scrolling! mIsScrolling = true; return true; } break; } ... 
    }

    switch (action) {
        case MotionEvent.ACTION_MOVE: {
            if (mIsScrolling) {
                // We're currently scrolling, so yes, intercept the
                // touch event!
                return true;
            }

            // If the user has dragged her finger horizontally more than
            // the touch slop, start the scroll

            // left as an exercise for the reader
            final int xDiff = calculateDistanceX(ev);

            // Touch slop should be calculated using ViewConfiguration
            // constants.
            if (xDiff > mTouchSlop) {
                // Start scrolling!
                mIsScrolling = true;
                return true;
            }
            break;
        }
        ...
    }

    // In general, we don't want to intercept touch events. They should be
    // handled by the child view.
    return false;
}

This is some code from the same link showing how to create the parameters of the rectangle around your element:

4b.

// The hit rectangle for the ImageButton
myButton.getHitRect(delegateArea);

// Extend the touch area of the ImageButton beyond its bounds
// on the right and bottom.
delegateArea.right += 100;
delegateArea.bottom += 100;

// Instantiate a TouchDelegate.
// "delegateArea" is the bounds in local coordinates of
// the containing view to be mapped to the delegate view.
// "myButton" is the child view that should receive motion
// events.
TouchDelegate touchDelegate = new TouchDelegate(delegateArea, myButton);

// Sets the TouchDelegate on the parent view, such that touches
// within the touch delegate bounds are routed to the child.
if (View.class.isInstance(myButton.getParent())) {
    ((View) myButton.getParent()).setTouchDelegate(touchDelegate);
}

Chapter 105: Handling touch and motion events

Listener - Details:
onTouchListener - Handles single touches for buttons, surfaces and more.
onTouchEvent - A listener that can be found in surfaces (e.g. SurfaceView). Does not need to be set like other listeners (e.g. onTouchListener).
onLongTouch - Similar to onTouch, but listens for long presses in buttons, surfaces and more.

A summary of some of the basic touch/motion-handling systems in the Android API.

Section 105.1: Buttons

Touch events related to a Button can be checked as follows:

public class ExampleClass extends Activity implements View.OnClickListener, View.OnLongClickListener {
    public Button onLong, onClick;

    @Override
    public void onCreate(Bundle sis) {
        super.onCreate(sis);
        setContentView(R.layout.layout);
        onLong = (Button) findViewById(R.id.onLong);
        onClick = (Button) findViewById(R.id.onClick);
        // The buttons are created. Now we need to tell the system that
        // these buttons have a listener to check for touch events.
        // "this" refers to this class, as it contains the appropriate event listeners.
        onLong.setOnLongClickListener(this);
        onClick.setOnClickListener(this);

        [OR]

        onClick.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                // Take action. This listener is only designed for one button.
                // This means no other input will come here.
                // This makes a switch statement unnecessary here.
            }
        });

        onLong.setOnLongClickListener(new View.OnLongClickListener() {
            @Override
            public boolean onLongClick(View v) {
                // See comment in onClick.setOnClickListener().
                return false;
            }
        });
    }

    @Override
    public void onClick(View v) {
        // If you have several buttons to handle, use a switch to handle them.
        switch (v.getId()) {
            case R.id.onClick:
                // Take action.
                break;
        }
    }

    @Override
    public boolean onLongClick(View v) {
        // If you have several buttons to handle, use a switch to handle them.
        switch (v.getId()) {
            case R.id.onLong:
                // Take action.
                break;
        }
        return false;
    }
}

Section 105.2: Surface

Touch event handler for surfaces (e.g.
SurfaceView, GLSurfaceView, and others):

import android.app.Activity;
import android.os.Bundle;
import android.view.MotionEvent;
import android.view.SurfaceView;
import android.view.View;

public class ExampleClass extends Activity implements View.OnTouchListener {
    @Override
    public void onCreate(Bundle sis) {
        super.onCreate(sis);
        CustomSurfaceView csv = new CustomSurfaceView(this);
        csv.setOnTouchListener(this);
        setContentView(csv);
    }

    @Override
    public boolean onTouch(View v, MotionEvent event) {
        // Add a switch (see buttons example) if you handle multiple views here.
        // You can inspect the MotionEvent to see what touch event is being taken.
        // Is the pointer touching or lifted? Is it moving?
        return false;
    }
}

Or alternatively (in the surface):

public class CustomSurfaceView extends SurfaceView {
    @Override
    public boolean onTouchEvent(MotionEvent ev) {
        super.onTouchEvent(ev);
        // Handle touch events here. When doing this, you do not need to call a listener.
        // Please note that this listener only applies to the surface it is placed in
        // (in this case, CustomSurfaceView), which means that anything else which is
        // pressed outside the SurfaceView is handled by the parts of your app that
        // have a listener in that area.
        return true;
    }
}

Section 105.3: Handling multitouch in a surface

public class CustomSurfaceView extends SurfaceView {
    @Override
    public boolean onTouchEvent(MotionEvent e) {
        super.onTouchEvent(e);
        if (e.getPointerCount() > 2) {
            // If we want to limit the amount of pointers, we return false,
            // which disallows the pointer. It will not be reacted on either, for
            // any future touch events, until it has been lifted and repressed.
            return false;
        }

        // What can you do here? Check if the amount of pointers is [x] and take action
        // if a pointer leaves, a new one enters, or the [x] pointers are moved.
        // Some examples of handling touch/motion events:
        switch (MotionEventCompat.getActionMasked(e)) {
            case MotionEvent.ACTION_DOWN:
            case MotionEvent.ACTION_POINTER_DOWN:
                // One or more pointers touch the screen.
                break;
            case MotionEvent.ACTION_UP:
            case MotionEvent.ACTION_POINTER_UP:
                // One or more pointers stop touching the screen.
                break;
            case MotionEvent.ACTION_MOVE:
                // One or more pointers move.
                if (e.getPointerCount() == 2) {
                    move();
                } else if (e.getPointerCount() == 1) {
                    paint();
                } else {
                    zoom();
                }
                break;
        }
        return true; // Allow repeated action.
    }
}

Chapter 106: Detect Shake Event in Android

Section 106.1: Shake Detector in Android Example

public class ShakeDetector implements SensorEventListener {

    private static final float SHAKE_THRESHOLD_GRAVITY = 2.7F;
    private static final int SHAKE_SLOP_TIME_MS = 500;
    private static final int SHAKE_COUNT_RESET_TIME_MS = 3000;

    private OnShakeListener mListener;
    private long mShakeTimestamp;
    private int mShakeCount;

    public void setOnShakeListener(OnShakeListener listener) {
        this.mListener = listener;
    }

    public interface OnShakeListener {
        public void onShake(int count);
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) {
        // ignore
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (mListener != null) {
            float x = event.values[0];
            float y = event.values[1];
            float z = event.values[2];

            float gX = x / SensorManager.GRAVITY_EARTH;
            float gY = y / SensorManager.GRAVITY_EARTH;
            float gZ = z / SensorManager.GRAVITY_EARTH;

            // gForce will be close to 1 when there is no movement.
            float gForce = (float) Math.sqrt(gX * gX + gY * gY + gZ * gZ);

            if (gForce > SHAKE_THRESHOLD_GRAVITY) {
                final long now = System.currentTimeMillis();
                // ignore shake events too close to each other (500ms)
                if (mShakeTimestamp + SHAKE_SLOP_TIME_MS > now) {
                    return;
                }

                // reset the shake count after 3 seconds of no shakes
                if (mShakeTimestamp + SHAKE_COUNT_RESET_TIME_MS < now) {
                    mShakeCount = 0;
                }

                mShakeTimestamp = now;
                mShakeCount++;

                mListener.onShake(mShakeCount);
            }
        }
    }
}

Section 106.2: Using Seismic shake detection

Seismic is an Android device shake detection library by Square. To use it, just start listening to the shake events emitted by it.

@Override
protected void onCreate(Bundle savedInstanceState) {
    sm = (SensorManager) getSystemService(SENSOR_SERVICE);
    sd = new ShakeDetector(() -> { /* react to detected shake */ });
}

@Override
protected void onResume() {
    sd.start(sm);
}

@Override
protected void onPause() {
    sd.stop();
}

To define a different acceleration threshold, use sd.setSensitivity(sensitivity) with a sensitivity of SENSITIVITY_LIGHT, SENSITIVITY_MEDIUM, SENSITIVITY_HARD or any other reasonable integer value. The given default values range from 11 to 15.

Installation:

compile 'com.squareup:seismic:1.0.2'

Chapter 107: Hardware Button Events/Intents (PTT, LWP, etc.)

Several Android devices have custom buttons added by the manufacturer. This opens new possibilities for the developer in handling those buttons, especially when making apps targeted at hardware devices. This topic documents buttons which have intents attached to them which you can listen for via intent receivers.

Section 107.1: Sonim Devices

Sonim devices have, varying by model, a lot of different custom buttons:

PTT_KEY
com.sonim.intent.action.PTT_KEY_DOWN
com.sonim.intent.action.PTT_KEY_UP

YELLOW_KEY
com.sonim.intent.action.YELLOW_KEY_DOWN
com.sonim.intent.action.YELLOW_KEY_UP

SOS_KEY
com.sonim.intent.action.SOS_KEY_DOWN
com.sonim.intent.action.SOS_KEY_UP

GREEN_KEY
com.sonim.intent.action.GREEN_KEY_DOWN
com.sonim.intent.action.GREEN_KEY_UP

Registering the buttons

To receive those intents you will have to assign the buttons to your app in the phone settings. Sonim has a possibility to auto-register the buttons to the app when it is installed. In order to do that, you will have to contact them and get a package-specific key to include in your manifest like this:

<meta-data
    android:name="app_key_green_data"
    android:value="your-key-here" />

Section 107.2: RugGear Devices

PTT Button
android.intent.action.PTT.down
android.intent.action.PTT.up

Confirmed on: RG730, RG740A

Chapter 108: GreenRobot EventBus

Thread Mode - Description:
ThreadMode.POSTING - Will be called on the same thread that the event was posted on. This is the default mode.
ThreadMode.MAIN - Will be called on the main UI thread.
ThreadMode.BACKGROUND - Will be called on a background thread. If the posting thread isn't the main thread, it will be used. If posted on the main thread, EventBus has a single background thread that it will use.
ThreadMode.ASYNC - Will be called on its own thread.

Section 108.1: Passing a Simple Event

The first thing we need to do is add EventBus to our module's gradle file:

dependencies {
    ...
    compile 'org.greenrobot:eventbus:3.0.0'
    ...
}

Now we need to create a model for our event. It can contain anything we want to pass along. For now we'll just make an empty class.
Chapter 108: GreenRobot EventBus

Thread Mode: Description
ThreadMode.POSTING: Will be called on the same thread that the event was posted on. This is the default mode.
ThreadMode.MAIN: Will be called on the main UI thread.
ThreadMode.BACKGROUND: Will be called on a background thread. If the posting thread isn't the main thread, it will be used. If posted on the main thread, EventBus has a single background thread that it will use.
ThreadMode.ASYNC: Will be called on its own thread.

Section 108.1: Passing a Simple Event

The first thing we need to do is add EventBus to our module's Gradle file:

dependencies {
    ...
    compile 'org.greenrobot:eventbus:3.0.0'
    ...
}

Now we need to create a model for our event. It can contain anything we want to pass along. For now we'll just make an empty class.

public class DeviceConnectedEvent {
}

Now we can add the code to our Activity that will register with EventBus and subscribe to the event.

public class MainActivity extends AppCompatActivity {

    private EventBus _eventBus;

    @Override
    protected void onCreate (Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        _eventBus = EventBus.getDefault();
    }

    @Override
    protected void onStart () {
        super.onStart();
        _eventBus.register(this);
    }

    @Override
    protected void onStop () {
        _eventBus.unregister(this);
        super.onStop();
    }

    @Subscribe(threadMode = ThreadMode.MAIN)
    public void onDeviceConnected (final DeviceConnectedEvent event) {
        // Process event and update UI
    }
}

In this Activity we get an instance of EventBus in the onCreate() method. We register / unregister for events in onStart() / onStop(). It's important to remember to unregister when your listener loses scope, or you could leak your Activity.

Finally we define the method that we want called with the event. The @Subscribe annotation tells EventBus which methods it can look for to handle events. You have to have at least one method annotated with @Subscribe to register with EventBus, or it will throw an exception. In the annotation we define the thread mode. This tells EventBus which thread to call the method on. It is a very handy way of passing information from a background thread to the UI thread! That's exactly what we're doing here. ThreadMode.MAIN means that this method will be called on Android's main UI thread, so it's safe to do any UI manipulations here that you need.

The name of the method doesn't matter. The only thing, other than the @Subscribe annotation, that EventBus is looking for is the type of the argument. As long as the type matches, it will be called when an event is posted.

The last thing we need to do is to post an event. This code will be in our Service.

EventBus.getDefault().post(new DeviceConnectedEvent());

That's all there is to it! EventBus will take that DeviceConnectedEvent, look through its registered listeners and the methods that they've subscribed, find the ones that take a DeviceConnectedEvent as an argument, and call them on the thread that they want to be called on.

Section 108.2: Receiving Events

For receiving events you need to register your class on the EventBus.

@Override
public void onStart() {
    super.onStart();
    EventBus.getDefault().register(this);
}

@Override
public void onStop() {
    EventBus.getDefault().unregister(this);
    super.onStop();
}

And then subscribe to the events.

@Subscribe(threadMode = ThreadMode.MAIN)
public void handleEvent(ArbitraryEvent event) {
    Toast.makeText(getActivity(), "Event type: "+event.getEventType(), Toast.LENGTH_SHORT).show();
}

Section 108.3: Sending Events

Sending events is as easy as creating the Event object and then posting it.

EventBus.getDefault().post(new ArbitraryEvent(ArbitraryEvent.TYPE_1));
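The ArbitraryEvent used in the two snippets above is not defined anywhere in this chapter; a minimal version consistent with those calls might look like this (the type constants are illustrative):

public class ArbitraryEvent {

    public static final int TYPE_1 = 1;
    public static final int TYPE_2 = 2;

    private final int eventType;

    public ArbitraryEvent(int eventType) {
        this.eventType = eventType;
    }

    public int getEventType() {
        return eventType;
    }
}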
Chapter 109: Otto Event Bus

Section 109.1: Passing an event

This example describes passing an event using the Otto Event Bus. To use the Otto Event Bus in Android Studio you have to insert the following statement in your module's Gradle file:

dependencies {
    compile 'com.squareup:otto:1.3.8'
}

The event we'd like to pass is a simple Java object:

public class DatabaseContentChangedEvent {
    public String message;

    public DatabaseContentChangedEvent(String message) {
        this.message = message;
    }
}

We need a Bus to send events. This is typically a singleton:

import com.squareup.otto.Bus;

public final class BusProvider {
    private static final Bus mBus = new Bus();

    public static Bus getInstance() {
        return mBus;
    }

    private BusProvider() {
    }
}

To send an event we only need our BusProvider and its post method. Here we send an event if the action of an AsyncTask is completed:

public abstract class ContentChangingTask extends AsyncTask<Object, Void, Void> {

    ...

    @Override
    protected void onPostExecute(Void param) {
        BusProvider.getInstance().post(
            new DatabaseContentChangedEvent("Content changed")
        );
    }
}

Section 109.2: Receiving an event

To receive an event it is necessary to implement a method with the event type as parameter and annotate it using @Subscribe. Furthermore you have to register/unregister the instance of your object at the BusProvider (see the example Passing an event):

public class MyFragment extends Fragment {
    private final static String TAG = "MyFragment";

    ...

    @Override
    public void onResume() {
        super.onResume();
        BusProvider.getInstance().register(this);
    }

    @Override
    public void onPause() {
        super.onPause();
        BusProvider.getInstance().unregister(this);
    }

    @Subscribe
    public void onDatabaseContentChanged(DatabaseContentChangedEvent event) {
        Log.i(TAG, "onDatabaseContentChanged: "+event.message);
    }
}

Important: In order to receive that event, an instance of the class has to exist. This is usually not the case when you want to send a result from one activity to another activity. So check your use case for the event bus.
Chapter 110: Vibration

Section 110.1: Getting Started with Vibration

Grant the vibration permission

Before you start implementing code, you have to add the permission to the Android manifest:

<uses-permission android:name="android.permission.VIBRATE"/>

Import the Vibrator class

import android.os.Vibrator;

Get an instance of Vibrator from the Context

Vibrator vibrator = (Vibrator) getSystemService(Context.VIBRATOR_SERVICE);

Check whether the device has a vibrator

boolean deviceHasVibrator() {
    return vibrator.hasVibrator();
}

Section 110.2: Vibrate Indefinitely

Using vibrate(long[] pattern, int repeat):

Vibrator vibrator = (Vibrator) getSystemService(Context.VIBRATOR_SERVICE);

// Start time delay
// Vibrate for 500 milliseconds
// Sleep for 1000 milliseconds
long[] pattern = {0, 500, 1000};

// 0 means repeat indefinitely, starting from index 0 of the pattern
vibrator.vibrate(pattern, 0);

Section 110.3: Vibration Patterns

You can create vibration patterns by passing in an array of longs, each of which represents a duration in milliseconds. The first number is the start time delay. Each array entry then alternates between vibrate, sleep, vibrate, sleep, etc.

The following example demonstrates this pattern:
vibrate 100 milliseconds and sleep 1000 milliseconds
vibrate 200 milliseconds and sleep 2000 milliseconds

long[] pattern = {0, 100, 1000, 200, 2000};

To cause the pattern to repeat, pass in the index into the pattern array at which to start the repeat, or -1 to disable repeating.

Vibrator vibrator = (Vibrator) getSystemService(Context.VIBRATOR_SERVICE);
vibrator.vibrate(pattern, -1); // does not repeat
vibrator.vibrate(pattern, 0); // repeats forever

Section 110.4: Stop Vibrate

If you want to stop the vibration, call:

vibrator.cancel();

Section 110.5: Vibrate for one time

Using vibrate(long milliseconds):

Vibrator vibrator = (Vibrator) getSystemService(Context.VIBRATOR_SERVICE);
vibrator.vibrate(500);

Chapter 111: ContentProvider

Section 111.1: Implementing a basic content provider class

1) Create a Contract Class

A contract class defines constants that help applications work with the content URIs, column names, intent actions, and other features of a content provider. Contract classes are not included automatically with a provider; the provider's developer has to define them and then make them available to other developers.

A provider usually has a single authority, which serves as its Android-internal name. To avoid conflicts with other providers, use a unique content authority. Because this recommendation is also true for Android package names, you can define your provider authority as an extension of the name of the package containing the provider. For example, if your Android package name is com.example.appname, you should give your provider the authority com.example.appname.provider.

public class MyContract {
    public static final String CONTENT_AUTHORITY = "com.example.myApp";
    public static final String PATH_DATATABLE = "dataTable";
    public static final String TABLE_NAME = "dataTable";
}

A content URI is a URI that identifies data in a provider. Content URIs include the symbolic name of the entire provider (its authority) and a name that points to a table or file (a path). The optional id part points to an individual row in a table. Every data access method of ContentProvider has a content URI as an argument; this allows you to determine the table, row, or file to access. Define these in the contract class.

public static final Uri BASE_CONTENT_URI = Uri.parse("content://" + CONTENT_AUTHORITY);
public static final Uri CONTENT_URI = BASE_CONTENT_URI.buildUpon().appendPath(PATH_DATATABLE).build();
// define all columns of table and common functions required

2) Create the Helper Class

A helper class manages database creation and version management.

public class DatabaseHelper extends SQLiteOpenHelper {

    // Increment the version when there is a change in the structure of the database
    public static final int DATABASE_VERSION = 1;
    // The name of the database in the filesystem, you can choose this to be anything
    public static final String DATABASE_NAME = "weather.db";

    public DatabaseHelper(Context context) {
        super(context, DATABASE_NAME, null, DATABASE_VERSION);
    }

    @Override
    public void onCreate(SQLiteDatabase db) {
        // Called when the database is created for the first time. This is where the
        // creation of tables and the initial population of the tables should happen.
    }

    @Override
    public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
        // Called when the database needs to be upgraded. The implementation
        // should use this method to drop tables, add tables, or do anything else it
        // needs to upgrade to the new schema version.
    }
}
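The onCreate() above is left empty; a sketch of a typical implementation follows. The column names are hypothetical, since the contract class above elides its column definitions, so replace them with the ones you define in your contract:

@Override
public void onCreate(SQLiteDatabase db) {
    // Hypothetical schema; use the columns defined in your contract class.
    db.execSQL("CREATE TABLE " + MyContract.TABLE_NAME + " ("
            + "_id INTEGER PRIMARY KEY AUTOINCREMENT, "
            + "date INTEGER NOT NULL, "
            + "value TEXT NOT NULL);");
}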
3) Create a class that extends the ContentProvider class

public class MyProvider extends ContentProvider {

    public DatabaseHelper dbHelper;

    public static final UriMatcher matcher = buildUriMatcher();
    public static final int DATA_TABLE = 100;
    public static final int DATA_TABLE_DATE = 101;

A UriMatcher maps an authority and path to an integer value. The method match() returns a unique integer value for a URI (it can be any arbitrary number, as long as it's unique). A switch statement chooses between querying the entire table and querying for a single record. Our UriMatcher returns 100 if the URI is the content URI of the table and 101 if the URI points to a specific row within that table. You can use the # wildcard to match any number and * to match any string.

public static UriMatcher buildUriMatcher() {
    UriMatcher uriMatcher = new UriMatcher(UriMatcher.NO_MATCH);
    uriMatcher.addURI(CONTENT_AUTHORITY, MyContract.PATH_DATATABLE, DATA_TABLE);
    uriMatcher.addURI(CONTENT_AUTHORITY, MyContract.PATH_DATATABLE + "/#", DATA_TABLE_DATE);
    return uriMatcher;
}

IMPORTANT: the ordering of addURI() calls matters! The UriMatcher will look in sequential order from first added to last. Since wildcards like # and * are greedy, you will need to make sure that you have ordered your URIs correctly. For example:

uriMatcher.addURI(CONTENT_AUTHORITY, "/example", 1);
uriMatcher.addURI(CONTENT_AUTHORITY, "/*", 2);

is the proper ordering, since the matcher will look for /example first before resorting to the /* match. If these method calls were reversed and you called uriMatcher.match("/example"), then the UriMatcher would stop looking for matches once it encounters the /* path and return the wrong result!

You will then need to override these functions:

onCreate(): Initialize your provider. The Android system calls this method immediately after it creates your provider. Notice that your provider is not created until a ContentResolver object tries to access it.

@Override
public boolean onCreate() {
    dbHelper = new DatabaseHelper(getContext());
    return true;
}

getType(): Return the MIME type corresponding to a content URI.

@Override
public String getType(Uri uri) {
    final int match = matcher.match(uri);
    switch (match) {
        case DATA_TABLE:
            return ContentResolver.CURSOR_DIR_BASE_TYPE + "/" + MyContract.CONTENT_AUTHORITY + "/" + MyContract.PATH_DATATABLE;
        case DATA_TABLE_DATE:
            return ContentResolver.CURSOR_ITEM_BASE_TYPE + "/" + MyContract.CONTENT_AUTHORITY + "/" + MyContract.PATH_DATATABLE;
        default:
            throw new UnsupportedOperationException("Unknown Uri: " + uri);
    }
}

query(): Retrieve data from your provider. Use the arguments to select the table to query, the rows and columns to return, and the sort order of the result. Return the data as a Cursor object.

@Override
public Cursor query(Uri uri, String[] projection, String selection, String[] selectionArgs, String sortOrder) {
    Cursor retCursor = dbHelper.getReadableDatabase().query(
            MyContract.TABLE_NAME, projection, selection, selectionArgs, null, null, sortOrder);
    retCursor.setNotificationUri(getContext().getContentResolver(), uri);
    return retCursor;
}

insert(): Insert a new row into your provider. Use the arguments to select the destination table and to get the column values to use. Return a content URI for the newly-inserted row.
@Override
public Uri insert(Uri uri, ContentValues values) {
    final SQLiteDatabase db = dbHelper.getWritableDatabase();
    long id = db.insert(MyContract.TABLE_NAME, null, values);
    return ContentUris.withAppendedId(MyContract.CONTENT_URI, id);
}

delete(): Delete rows from your provider. Use the arguments to select the table and the rows to delete. Return the number of rows deleted.

@Override
public int delete(Uri uri, String selection, String[] selectionArgs) {
    SQLiteDatabase db = dbHelper.getWritableDatabase();
    int rowsDeleted = db.delete(MyContract.TABLE_NAME, selection, selectionArgs);
    getContext().getContentResolver().notifyChange(uri, null);
    return rowsDeleted;
}

update(): Update existing rows in your provider. Use the arguments to select the table and rows to update and to get the new column values. Return the number of rows updated.

@Override
public int update(Uri uri, ContentValues values, String selection, String[] selectionArgs) {
    SQLiteDatabase db = dbHelper.getWritableDatabase();
    int rowsUpdated = db.update(MyContract.TABLE_NAME, values, selection, selectionArgs);
    getContext().getContentResolver().notifyChange(uri, null);
    return rowsUpdated;
}

4) Update the manifest file

The android:name attribute must point to the ContentProvider subclass defined above:

<provider
    android:authorities="com.example.myApp"
    android:name=".MyProvider"/>

Chapter 112: Dagger 2

Section 112.1: Component setup for Application and Activity injection

A basic AppComponent that depends on a single AppModule to provide application-wide singleton objects.

@Singleton
@Component(modules = AppModule.class)
public interface AppComponent {

    void inject(App app);

    Context provideContext();

    Gson provideGson();
}

A module to use together with the AppComponent which will provide its singleton objects, e.g. an instance of Gson to reuse throughout the whole application.

@Module
public class AppModule {

    private final Application mApplication;

    public AppModule(Application application) {
        mApplication = application;
    }

    @Singleton
    @Provides
    Gson provideGson() {
        return new Gson();
    }

    @Singleton
    @Provides
    Context provideContext() {
        return mApplication;
    }
}

A subclassed application to set up Dagger and hold on to the singleton component.

public class App extends Application {

    private AppComponent mAppComponent;

    @Override
    public void onCreate() {
        super.onCreate();
        mAppComponent = DaggerAppComponent.builder().appModule(new AppModule(this)).build();
        mAppComponent.inject(this);
    }

    public AppComponent getAppComponent() {
        return mAppComponent;
    }
}

Now an activity-scoped component that depends on the AppComponent to gain access to the singleton objects.

@ActivityScope
@Component(dependencies = AppComponent.class, modules = ActivityModule.class)
public interface MainActivityComponent {

    void inject(MainActivity activity);
}

And a reusable ActivityModule that will provide basic dependencies, like a FragmentManager. Note that provision methods in a module must be annotated with @Provides:

@Module
public class ActivityModule {

    private final AppCompatActivity mActivity;

    public ActivityModule(AppCompatActivity activity) {
        mActivity = activity;
    }

    @ActivityScope
    @Provides
    public AppCompatActivity provideActivity() {
        return mActivity;
    }

    @ActivityScope
    @Provides
    public FragmentManager provideFragmentManager(AppCompatActivity activity) {
        return activity.getSupportFragmentManager();
    }
}

Putting everything together, we're set up and can inject our activity and be sure to use the same Gson throughout our app!
public class MainActivity extends AppCompatActivity {

    @Inject
    Gson mGson;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        DaggerMainActivityComponent.builder()
                .appComponent(((App) getApplication()).getAppComponent())
                .activityModule(new ActivityModule(this))
                .build().inject(this);
    }
}

Section 112.2: Custom Scopes

@Scope
@Documented
@Retention(RUNTIME)
public @interface ActivityScope {
}

Scopes are just annotations, and you can create your own ones where needed.

Section 112.3: Using @Subcomponent instead of @Component(dependencies={...})

@Singleton
@Component(modules = AppModule.class)
public interface AppComponent {

    void inject(App app);

    Context provideContext();

    Gson provideGson();

    MainActivityComponent mainActivityComponent(ActivityModule activityModule);
}

@ActivityScope
@Subcomponent(modules = ActivityModule.class)
public interface MainActivityComponent {

    void inject(MainActivity activity);
}

public class MainActivity extends AppCompatActivity {

    @Inject
    Gson mGson;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        ((App) getApplication()).getAppComponent()
                .mainActivityComponent(new ActivityModule(this)).inject(this);
    }
}

Section 112.4: Creating a component from multiple modules

Dagger 2 supports creating a component from multiple modules. You can create your component this way:

@Singleton
@Component(modules = {GeneralPurposeModule.class, SpecificModule.class})
public interface MyMultipleModuleComponent {

    void inject(MyFragment myFragment);
    void inject(MyService myService);
    void inject(MyController myController);
    void inject(MyActivity myActivity);
}

The two referenced modules GeneralPurposeModule and SpecificModule can then be implemented as follows:

GeneralPurposeModule.java

@Module
public class GeneralPurposeModule {

    @Provides
    @Singleton
    public Retrofit getRetrofit(PropertiesReader propertiesReader, RetrofitHeaderInterceptor headerInterceptor){
        // Logic here...
        return retrofit;
    }

    @Provides
    @Singleton
    public PropertiesReader getPropertiesReader(){
        return new PropertiesReader();
    }

    @Provides
    @Singleton
    public RetrofitHeaderInterceptor getRetrofitHeaderInterceptor(){
        return new RetrofitHeaderInterceptor();
    }
}

SpecificModule.java

@Singleton
@Module
public class SpecificModule {

    @Provides
    @Singleton
    public RetrofitController getRetrofitController(Retrofit retrofit){
        RetrofitController retrofitController = new RetrofitController();
        retrofitController.setRetrofit(retrofit);
        return retrofitController;
    }

    @Provides
    @Singleton
    public MyService getMyService(RetrofitController retrofitController){
        MyService myService = new MyService();
        myService.setRetrofitController(retrofitController);
        return myService;
    }
}

During the dependency injection phase, the component will take objects from both modules according to the needs. This approach is very useful in terms of modularity. In the example, there is a general-purpose module used to instantiate components such as the Retrofit object (used to handle the network communication) and a PropertiesReader (in charge of handling configuration files). There is also a specific module that handles the instantiation of specific controllers and service classes in relation to that specific application component.
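Building such a multi-module component follows the same pattern as the earlier examples. A sketch (Dagger's generated builder exposes one setter per module; setters for modules with a no-argument constructor can usually be omitted, and myActivity stands in for whatever object you are injecting):

MyMultipleModuleComponent component = DaggerMyMultipleModuleComponent.builder()
        .generalPurposeModule(new GeneralPurposeModule())
        .specificModule(new SpecificModule())
        .build();
component.inject(myActivity);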
Section 112.5: How to add Dagger 2 in build.gradle

Since the release of Gradle 2.2, the android-apt plugin is no longer needed. The following method of setting up Dagger 2 should be used. For older versions of Gradle, use the previous method shown below.

For Gradle >= 2.2

dependencies {
    annotationProcessor 'com.google.dagger:dagger-compiler:2.8'
    compile 'com.google.dagger:dagger:2.8'
    provided 'javax.annotation:jsr250-api:1.0'
}

For Gradle < 2.2

To use Dagger 2 it's necessary to add the android-apt plugin. Add this to the root build.gradle:

buildscript {
    dependencies {
        classpath 'com.android.tools.build:gradle:2.1.0'
        classpath 'com.neenbedankt.gradle.plugins:android-apt:1.8'
    }
}

Then the application module's build.gradle should contain:

apply plugin: 'com.android.application'
apply plugin: 'com.neenbedankt.android-apt'

android {
}

final DAGGER_VERSION = '2.0.2'

dependencies {
    // the apt configuration comes from the android-apt plugin
    compile "com.google.dagger:dagger:${DAGGER_VERSION}"
    apt "com.google.dagger:dagger-compiler:${DAGGER_VERSION}"
}

Reference: https://github.com/codepath/android_guides/wiki/Dependency-Injection-with-Dagger-2

Section 112.6: Constructor Injection

Classes without dependencies can easily be created by Dagger.

public class Engine {

    @Inject // <-- Annotate your constructor.
    public Engine() {
    }
}

This class can be provided by any component. It has no dependencies itself and is not scoped. There is no further code necessary.

Dependencies are declared as parameters in the constructor. Dagger will call the constructor and supply the dependencies, as long as those dependencies can be provided.

public class Car {

    private Engine engine;

    @Inject
    public Car(Engine engine) {
        this.engine = engine;
    }
}

This class can be provided by every component if that component can also provide all of its dependencies, Engine in this case. Since Engine can also be constructor injected, any component can provide a Car.

You can use constructor injection whenever all of the dependencies can be provided by the component. A component can provide a dependency if:
it can create it by using constructor injection,
a module of the component can provide it,
it can be provided by the parent component (if it is a @Subcomponent), or
it can use an object exposed by a component it depends on (component dependencies).

Chapter 113: Realm

Realm Mobile Database is an alternative to SQLite. Realm Mobile Database is much faster than an ORM, and often faster than raw SQLite.

Benefits: offline functionality, fast queries, safe threading, cross-platform apps, encryption, reactive architecture.

Section 113.1: Sorted queries

In order to sort a query, instead of using findAll(), you should use findAllSorted().

RealmResults<SomeObject> results = realm.where(SomeObject.class)
        .findAllSorted("sortField", Sort.ASCENDING);

Note: sort() returns a completely new RealmResults that is sorted, but an update to this RealmResults will reset it. If you use sort(), you should always re-sort it in your RealmChangeListener, remove the RealmChangeListener from the previous RealmResults, and add it to the returned new RealmResults. Using sort() on a RealmResults returned by an async query that is not yet loaded will fail.

findAllSorted() will always return the results sorted by the field, even if it gets updated. It is recommended to use findAllSorted().
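A sketch of the recommended combination of findAllSorted() with a change listener (the field name "sortField" and the adapter are illustrative):

RealmResults<SomeObject> results = realm.where(SomeObject.class)
        .findAllSorted("sortField", Sort.ASCENDING);
results.addChangeListener(new RealmChangeListener<RealmResults<SomeObject>>() {
    @Override
    public void onChange(RealmResults<SomeObject> element) {
        // element stays sorted by "sortField" even after updates,
        // so no manual re-sorting is needed here
        adapter.updateData(element);
    }
});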
Section 113.2: Using Realm with RxJava

For queries, Realm provides the realmResults.asObservable() method. Observing results is only possible on looper threads (typically the UI thread).

For this to work, your configuration must contain the following:

realmConfiguration = new RealmConfiguration.Builder(context)
        .rxFactory(new RealmObservableFactory())
        //...
        .build();

Afterwards, you can use your results as an observable.

Observable<RealmResults<SomeObject>> observable = results.asObservable();

For asynchronous queries, you should filter the results by isLoaded(), so that you receive an event only when the query has been executed. This filter() is not needed for synchronous queries (isLoaded() always returns true on sync queries).

Subscription subscription = RxTextView.textChanges(editText).switchMap(charSequence ->
    realm.where(SomeObject.class)
         .contains("searchField", charSequence.toString(), Case.INSENSITIVE)
         .findAllAsync()
         .asObservable())
    .filter(RealmResults::isLoaded)
    .subscribe(objects -> adapter.updateData(objects));

For writes, you should either use the executeTransactionAsync() method, or open a Realm instance on the background thread, execute the transaction synchronously, then close the Realm instance.

public Subscription loadObjectsFromNetwork() {
    return objectApi.getObjects()
        .subscribeOn(Schedulers.io())
        .subscribe(response -> {
            try(Realm realmInstance = Realm.getDefaultInstance()) {
                realmInstance.executeTransaction(realm -> realm.insertOrUpdate(response.objects));
            }
        });
}

Section 113.3: Basic Usage

Setting up an instance

To use Realm you first need to obtain an instance of it. Each Realm instance maps to a file on disk. The most basic way to get an instance is as follows:

// Create configuration
RealmConfiguration realmConfiguration = new RealmConfiguration.Builder(context).build();

// Obtain realm instance
Realm realm = Realm.getInstance(realmConfiguration);
// or
Realm.setDefaultConfiguration(realmConfiguration);
Realm realm = Realm.getDefaultInstance();

The method Realm.getInstance() creates the database file if it has not been created, and otherwise opens the file. The RealmConfiguration object controls all aspects of how a Realm is created - whether it's an inMemory() database, the name of the Realm file, whether the Realm should be cleared if a migration is needed, initial data, etc.

Please note that calls to Realm.getInstance() are reference counted (each call increments a counter), and the counter is decremented when realm.close() is called.

Closing an instance

On background threads, it's very important to close the Realm instance(s) once they are no longer used (for example, when the transaction is complete and the thread execution ends). Failure to close all Realm instances on a background thread results in version pinning, and can cause a large growth in file size.

Runnable runnable = new Runnable() {
    @Override
    public void run() {
        Realm realm = null;
        try {
            realm = Realm.getDefaultInstance();
            // ...
        } finally {
            if(realm != null) {
                realm.close();
            }
        }
    }
};

new Thread(runnable).start(); // background thread, like `doInBackground()` of AsyncTask

It's worth noting that above API level 19, you can replace this code with just this:

try(Realm realm = Realm.getDefaultInstance()) {
    // ...
}

Models

The next step is creating your models. Here a question might be asked: "what is a model?". A model is a structure which defines the properties of an object being stored in the database. For example, in the following we model a book.
public class Book extends RealmObject {

    // Primary key of this entity
    @PrimaryKey
    private long id;

    private String title;

    @Index // faster queries
    private String author;

    // Standard getters & setters
    public long getId() { return id; }
    public void setId(long id) { this.id = id; }
    public String getTitle() { return title; }
    public void setTitle(String title) { this.title = title; }
    public String getAuthor() { return author; }
    public void setAuthor(String author) { this.author = author; }
}

Note that your models should extend the RealmObject class. The primary key is specified by the @PrimaryKey annotation. Primary keys can be null, but only one element can have null as a primary key. Also you can use the @Ignore annotation for the fields that should not be persisted to the disk:

@Ignore
private String isbn;

Inserting or updating data

In order to store a book object to your Realm database instance, you can first create an instance of your model and then store it to the database via the copyToRealm method. For creating or updating you can use copyToRealmOrUpdate. (A faster alternative is the newly added insertOrUpdate().)

// Creating an instance of the model
final Book book = new Book();
book.setId(1);
book.setTitle("Walking on air");
book.setAuthor("<NAME>");

// Store to the database
realm.executeTransaction(new Realm.Transaction() {
    @Override
    public void execute(Realm realm) {
        realm.insertOrUpdate(book);
    }
});

Note that all changes to data must happen in a transaction. Another way to create an object is using the following pattern:

Book book = realm.createObject(Book.class, primaryKey);
...

Querying the database

All books:

RealmResults<Book> results = realm.where(Book.class).findAll();

All books having id greater than 10:

RealmResults<Book> results = realm.where(Book.class)
        .greaterThan("id", 10)
        .findAll();

Books by '<NAME>' or '%Peter%':

RealmResults<Book> results = realm.where(Book.class)
        .beginGroup()
            .equalTo("author", "<NAME>")
            .or()
            .contains("author", "Peter")
        .endGroup().findAll();

Deleting an object

For example, we want to delete all books by <NAME>:

// Start of transaction
realm.executeTransaction(new Realm.Transaction() {
    @Override
    public void execute(Realm realm) {
        // First Step: Query all Taylor Swift books
        RealmResults<Book> results = ...

        // Second Step: Delete elements in Realm
        results.deleteAllFromRealm();
    }
});

Section 113.4: List of primitives (RealmList<Integer/String/...>)

Realm currently does not support storing a list of primitives. It is on their todo list (GitHub issue #575), but in the meantime, here is a workaround.

Create a new class for your primitive type. This uses Integer, but change it to whatever you want to store.

public class RealmInteger extends RealmObject {

    private int val;

    public RealmInteger() {
    }

    public RealmInteger(int val) {
        this.val = val;
    }

    // Getters and setters
}

You can now use this in your RealmObject.

public class MainObject extends RealmObject {

    private String name;
    private RealmList<RealmInteger> ints;

    // Getters and setters
}

If you are using GSON to populate your RealmObject, you will need to add a custom type adapter.
Type token = new TypeToken<RealmList<RealmInteger>>(){}.getType();
Gson gson = new GsonBuilder()
        .setExclusionStrategies(new ExclusionStrategy() {
            @Override
            public boolean shouldSkipField(FieldAttributes f) {
                return f.getDeclaringClass().equals(RealmObject.class);
            }

            @Override
            public boolean shouldSkipClass(Class<?> clazz) {
                return false;
            }
        })
        .registerTypeAdapter(token, new TypeAdapter<RealmList<RealmInteger>>() {

            @Override
            public void write(JsonWriter out, RealmList<RealmInteger> value) throws IOException {
                // Empty
            }

            @Override
            public RealmList<RealmInteger> read(JsonReader in) throws IOException {
                RealmList<RealmInteger> list = new RealmList<RealmInteger>();
                in.beginArray();
                while (in.hasNext()) {
                    list.add(new RealmInteger(in.nextInt()));
                }
                in.endArray();
                return list;
            }
        })
        .create();

Section 113.5: Async queries

Every synchronous query method (such as findAll() or findAllSorted()) has an asynchronous counterpart (findAllAsync() / findAllSortedAsync()).

Asynchronous queries offload the evaluation of the RealmResults to another thread. In order to receive these results on the current thread, the current thread must be a looper thread (read: async queries typically only work on the UI thread).

RealmChangeListener<RealmResults<SomeObject>> realmChangeListener; // field variable

realmChangeListener = new RealmChangeListener<RealmResults<SomeObject>>() {
    @Override
    public void onChange(RealmResults<SomeObject> element) {
        // asyncResults are now loaded
        adapter.updateData(element);
    }
};

RealmResults<SomeObject> asyncResults = realm.where(SomeObject.class).findAllAsync();
asyncResults.addChangeListener(realmChangeListener);

Section 113.6: Adding Realm to your project

Add the following dependency to your project-level build.gradle file.

dependencies {
    classpath "io.realm:realm-gradle-plugin:3.1.2"
}

Add the following right at the top of your app-level build.gradle file.

apply plugin: 'realm-android'

Complete a Gradle sync and you now have Realm added as a dependency to your project!

Since version 2.0.0, Realm requires an initialization call before it is used. You can do this in your Application class or in your first Activity's onCreate method.

Realm.init(this); // added in Realm 2.0.0
Realm.setDefaultConfiguration(new RealmConfiguration.Builder().build());

Section 113.7: Realm Models

Realm models must extend the RealmObject base class; they define the schema of the underlying database.

Supported field types are boolean, byte, short, int, long, float, double, String, Date, byte[], links to other RealmObjects, and RealmList<T extends RealmModel>.

public class Person extends RealmObject {

    @PrimaryKey // primary key is also implicitly an @Index;
                // it is required for `copyToRealmOrUpdate()` to update the object.
    private long id;

    @Index // index makes queries faster on this field
    @Required // prevents `null` value from being inserted
    private String name;

    private RealmList<Dog> dogs; // ->many relationship to Dog

    private Person spouse; // ->one relationship to Person

    @Ignore
    private Calendar birthday; // calendars are not supported but can be ignored

    // getters, setters
}

If you add (or remove) a field in your RealmObject (or you add a new RealmObject class or delete an existing one), a migration will be needed. You can either set deleteRealmIfMigrationNeeded() in your RealmConfiguration.Builder, or define the necessary migration. Migration is also required when adding (or removing) the @Required, @Index, or @PrimaryKey annotation.
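A sketch of both options (the schema version numbers and the "age" field added in the migration are hypothetical):

// Option 1: wipe and recreate the Realm on schema changes (handy during development)
RealmConfiguration config = new RealmConfiguration.Builder()
        .deleteRealmIfMigrationNeeded()
        .build();

// Option 2: bump the schema version and describe the change
RealmConfiguration config2 = new RealmConfiguration.Builder()
        .schemaVersion(2)
        .migration(new RealmMigration() {
            @Override
            public void migrate(DynamicRealm realm, long oldVersion, long newVersion) {
                if (oldVersion == 1) {
                    // hypothetical new field on the Person class
                    realm.getSchema().get("Person").addField("age", int.class);
                }
            }
        })
        .build();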
Relationships must be set manually; they are NOT automatic based on primary keys.

Since 0.88.0, it is also possible to use public fields instead of private fields/getters/setters in RealmObject classes.

It is also possible to implement RealmModel instead of extending RealmObject, if the class is also annotated with @RealmClass.

@RealmClass
public class Person implements RealmModel {
    // ...
}

In that case, methods like person.deleteFromRealm() or person.addChangeListener() are replaced with RealmObject.deleteFromRealm(person) and RealmObject.addChangeListener(person).

Limitations are that a RealmObject class may extend only RealmObject, and there is no support for final, volatile, and transient fields.

It is important that a managed RealmObject class can only be modified in a transaction. A managed RealmObject cannot be passed between threads.

Section 113.8: try-with-resources

try (Realm realm = Realm.getDefaultInstance()) {
    realm.executeTransaction(new Realm.Transaction() {
        @Override
        public void execute(Realm realm) {
            // whatever transaction that has to be done
        }
    });
    // No need to close realm in try-with-resources
}

Try-with-resources can be used only from API level 19 (KitKat, minSdkVersion 19) onwards.

Chapter 114: Android Versions

Section 114.1: Checking the Android Version on device at runtime

Build.VERSION_CODES is an enumeration of the currently known SDK version codes.

In order to conditionally run code based on the device's Android version, use the TargetApi annotation to avoid lint errors, and check the build version before running the code specific to the API level. Here is an example of how to use a class that was introduced in API 23, in a project that supports API levels lower than 23:

@Override
@TargetApi(23)
public void onResume() {
    super.onResume();
    if (android.os.Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) {
        // run Marshmallow code
        FingerprintManager fingerprintManager = this.getSystemService(FingerprintManager.class);
        //...
    }
}

Chapter 115: Wi-Fi Connections

Section 115.1: Connect with WEP encryption

This example connects to a Wi-Fi access point with WEP encryption, given an SSID and the password.

public boolean ConnectToNetworkWEP(String networkSSID, String password) {
    try {
        WifiConfiguration conf = new WifiConfiguration();
        conf.SSID = "\"" + networkSSID + "\""; // Please note the quotes; the SSID string must be wrapped in quotes
        conf.wepKeys[0] = "\"" + password + "\""; // Try it with quotes first

        conf.allowedKeyManagement.set(WifiConfiguration.KeyMgmt.NONE);
        conf.allowedAuthAlgorithms.set(WifiConfiguration.AuthAlgorithm.OPEN);
        conf.allowedAuthAlgorithms.set(WifiConfiguration.AuthAlgorithm.SHARED);

        WifiManager wifiManager = (WifiManager) this.getApplicationContext().getSystemService(Context.WIFI_SERVICE);
        int networkId = wifiManager.addNetwork(conf);

        if (networkId == -1){
            // Try it again with no quotes in case of hex password
            conf.wepKeys[0] = password;
            networkId = wifiManager.addNetwork(conf);
        }

        List<WifiConfiguration> list = wifiManager.getConfiguredNetworks();
        for( WifiConfiguration i : list ) {
            if(i.SSID != null && i.SSID.equals("\"" + networkSSID + "\"")) {
                wifiManager.disconnect();
                wifiManager.enableNetwork(i.networkId, true);
                wifiManager.reconnect();
                break;
            }
        }

        // WiFi Connection success, return true
        return true;
    } catch (Exception ex) {
        System.out.println(Arrays.toString(ex.getStackTrace()));
        return false;
    }
}

Section 115.2: Connect with WPA2 encryption

This example connects to a Wi-Fi access point with WPA2 encryption.

public boolean ConnectToNetworkWPA(String networkSSID, String password) {
    try {
        WifiConfiguration conf = new WifiConfiguration();
        conf.SSID = "\"" + networkSSID + "\""; // Please note the quotes; the SSID string must be wrapped in quotes

        conf.preSharedKey = "\"" + password + "\"";

        conf.status = WifiConfiguration.Status.ENABLED;
        conf.allowedGroupCiphers.set(WifiConfiguration.GroupCipher.TKIP);
        conf.allowedGroupCiphers.set(WifiConfiguration.GroupCipher.CCMP);
        conf.allowedKeyManagement.set(WifiConfiguration.KeyMgmt.WPA_PSK);
        conf.allowedPairwiseCiphers.set(WifiConfiguration.PairwiseCipher.TKIP);
        conf.allowedPairwiseCiphers.set(WifiConfiguration.PairwiseCipher.CCMP);

        Log.d("connecting", conf.SSID + " " + conf.preSharedKey);

        WifiManager wifiManager = (WifiManager) this.getApplicationContext().getSystemService(Context.WIFI_SERVICE);
        wifiManager.addNetwork(conf);

        Log.d("after connecting", conf.SSID + " " + conf.preSharedKey);

        List<WifiConfiguration> list = wifiManager.getConfiguredNetworks();
        for( WifiConfiguration i : list ) {
            if(i.SSID != null && i.SSID.equals("\"" + networkSSID + "\"")) {
                wifiManager.disconnect();
                wifiManager.enableNetwork(i.networkId, true);
                wifiManager.reconnect();
                Log.d("re connecting", i.SSID + " " + conf.preSharedKey);
                break;
            }
        }

        // WiFi Connection success, return true
        return true;
    } catch (Exception ex) {
        System.out.println(Arrays.toString(ex.getStackTrace()));
        return false;
    }
}

Section 115.3: Scan for access points

This example scans for available access points and ad hoc networks. btnScan activates a scan initiated by the WifiManager.startScan() method. After the scan, WifiManager fires the SCAN_RESULTS_AVAILABLE_ACTION intent and the WifiScanReceiver class processes the scan result. The results are displayed in a TextView.
public class MainActivity extends AppCompatActivity {

    private final static String TAG = "MainActivity";
    TextView txtWifiInfo;
    WifiManager wifi;
    WifiScanReceiver wifiReceiver;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        wifi = (WifiManager) getSystemService(Context.WIFI_SERVICE);
        wifiReceiver = new WifiScanReceiver();

        txtWifiInfo = (TextView) findViewById(R.id.txtWifiInfo);
        Button btnScan = (Button) findViewById(R.id.btnScan);
        btnScan.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                Log.i(TAG, "Start scan...");
                wifi.startScan();
            }
        });
    }

    protected void onPause() {
        unregisterReceiver(wifiReceiver);
        super.onPause();
    }

    protected void onResume() {
        registerReceiver(
            wifiReceiver,
            new IntentFilter(WifiManager.SCAN_RESULTS_AVAILABLE_ACTION)
        );
        super.onResume();
    }

    private class WifiScanReceiver extends BroadcastReceiver {
        public void onReceive(Context c, Intent intent) {
            List<ScanResult> wifiScanList = wifi.getScanResults();
            txtWifiInfo.setText("");
            for(int i = 0; i < wifiScanList.size(); i++){
                String info = ((wifiScanList.get(i)).toString());
                txtWifiInfo.append(info+"\n\n");
            }
        }
    }
}

Permissions

The following permissions need to be defined in AndroidManifest.xml:

<uses-permission android:name="android.permission.ACCESS_WIFI_STATE" />
<uses-permission android:name="android.permission.CHANGE_WIFI_STATE" />

android.permission.ACCESS_WIFI_STATE is necessary for calling WifiManager.getScanResults(). Without android.permission.CHANGE_WIFI_STATE you cannot initiate a scan with WifiManager.startScan().

When compiling the project for API level 23 or greater (Android 6.0 and up), either android.permission.ACCESS_FINE_LOCATION or android.permission.ACCESS_COARSE_LOCATION must be inserted. Furthermore, that permission needs to be requested, e.g. in the onCreate method of your main activity:

@Override
protected void onCreate(Bundle savedInstanceState) {
    ...
    String[] PERMS_INITIAL={
        Manifest.permission.ACCESS_FINE_LOCATION,
    };
    ActivityCompat.requestPermissions(this, PERMS_INITIAL, 127);
}

Chapter 116: SensorManager

Section 116.1: Decide if your device is static or not, using the accelerometer

Add the following code to the onCreate()/onResume() method:

SensorManager sensorManager;
Sensor mAccelerometer;
final float movementThreshold = 0.5f; // You may have to change this value.
boolean isMoving = false;
float[] prevValues = {1.0f, 1.0f, 1.0f};
float[] currValues = new float[3];

sensorManager = (SensorManager) getSystemService(SENSOR_SERVICE);
mAccelerometer = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
sensorManager.registerListener(this, mAccelerometer, SensorManager.SENSOR_DELAY_NORMAL);

You may have to adjust the sensitivity by adapting the movementThreshold by trial and error.
Then, override the onSensorChanged() method as follows:

@Override
public void onSensorChanged(SensorEvent event) {
    if (event.sensor == mAccelerometer) {
        System.arraycopy(event.values, 0, currValues, 0, event.values.length);
        if ((Math.abs(currValues[0] - prevValues[0]) > movementThreshold) ||
            (Math.abs(currValues[1] - prevValues[1]) > movementThreshold) ||
            (Math.abs(currValues[2] - prevValues[2]) > movementThreshold)) {
            isMoving = true;
        } else {
            isMoving = false;
        }
        System.arraycopy(currValues, 0, prevValues, 0, currValues.length);
    }
}

If you want to prevent your app from being installed on devices that do not have an accelerometer, you have to add the following line to your manifest:

<uses-feature android:name="android.hardware.sensor.accelerometer" />

Section 116.2: Retrieving sensor events

Retrieving sensor information from the onboard sensors:

public class MainActivity extends Activity implements SensorEventListener {

    private SensorManager mSensorManager;
    private Sensor accelerometer;
    private Sensor gyroscope;

    float[] accelerometerData = new float[3];
    float[] gyroscopeData = new float[3];

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        mSensorManager = (SensorManager) getSystemService(SENSOR_SERVICE);
        accelerometer = mSensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
        gyroscope = mSensorManager.getDefaultSensor(Sensor.TYPE_GYROSCOPE);
    }

    @Override
    public void onResume() {
        // Register listeners for your sensors of interest
        mSensorManager.registerListener(this, accelerometer, SensorManager.SENSOR_DELAY_FASTEST);
        mSensorManager.registerListener(this, gyroscope, SensorManager.SENSOR_DELAY_FASTEST);
        super.onResume();
    }

    @Override
    protected void onPause() {
        // Unregister any previously registered listeners
        mSensorManager.unregisterListener(this);
        super.onPause();
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        // Check the type of sensor data being polled and store it into the
        // corresponding float array. clone() is used because the system
        // reuses the event.values array between callbacks.
        if (event.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
            accelerometerData = event.values.clone();
        } else if (event.sensor.getType() == Sensor.TYPE_GYROSCOPE) {
            gyroscopeData = event.values.clone();
        }
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) {
        // TODO Auto-generated method stub
    }
}

Section 116.3: Sensor transformation to world coordinate system

The sensor values returned by Android are relative to the phone's coordinate system (e.g. +Y points towards the top of the phone). We can transform these sensor values into a world coordinate system (e.g. +Y points towards magnetic North, tangential to the ground) using the SensorManager rotation matrix.
First, you would need to declare and initialize the matrices/arrays where data will be stored (you can do this in the onCreate method, for example):

float[] accelerometerData = new float[3];
float[] accelerometerWorldData = new float[3];
float[] gravityData = new float[3];
float[] magneticData = new float[3];
float[] rotationMatrix = new float[9];

Next, we need to detect changes in sensor values, store them into the corresponding arrays (if we want to use them later/elsewhere), then calculate the rotation matrix and the resulting transformation into world coordinates:

public void onSensorChanged(SensorEvent event) {
    Sensor sensor = event.sensor;
    int i = sensor.getType();

    if (i == Sensor.TYPE_ACCELEROMETER) {
        accelerometerData = event.values.clone();
    } else if (i == Sensor.TYPE_GRAVITY) {
        gravityData = event.values.clone();
    } else if (i == Sensor.TYPE_MAGNETIC_FIELD) {
        magneticData = event.values.clone();
    }

    // Calculate rotation matrix from gravity and magnetic sensor data
    SensorManager.getRotationMatrix(rotationMatrix, null, gravityData, magneticData);

    // World coordinate system transformation for acceleration
    accelerometerWorldData[0] = rotationMatrix[0] * accelerometerData[0]
            + rotationMatrix[1] * accelerometerData[1]
            + rotationMatrix[2] * accelerometerData[2];
    accelerometerWorldData[1] = rotationMatrix[3] * accelerometerData[0]
            + rotationMatrix[4] * accelerometerData[1]
            + rotationMatrix[5] * accelerometerData[2];
    accelerometerWorldData[2] = rotationMatrix[6] * accelerometerData[0]
            + rotationMatrix[7] * accelerometerData[1]
            + rotationMatrix[8] * accelerometerData[2];
}

Chapter 117: ProgressBar

Section 117.1: Material Linear ProgressBar

According to the Material documentation:

A linear progress indicator should always fill from 0% to 100% and never decrease in value. It should be represented by bars on the edge of a header or sheet that appear and disappear.

To use a material linear ProgressBar, just use the following in your XML:

<ProgressBar
    android:id="@+id/my_progressBar"
    style="@style/Widget.AppCompat.ProgressBar.Horizontal"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"/>

Indeterminate

To create an indeterminate ProgressBar, set the android:indeterminate attribute to true.
<ProgressBar
    android:id="@+id/my_progressBar"
    style="@style/Widget.AppCompat.ProgressBar.Horizontal"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:indeterminate="true"/>

Determinate

To create a determinate ProgressBar, set the android:indeterminate attribute to false and use the android:max and android:progress attributes:

<ProgressBar
    android:id="@+id/my_progressBar"
    style="@style/Widget.AppCompat.ProgressBar.Horizontal"
    android:indeterminate="false"
    android:max="100"
    android:progress="10"/>

Just use this code to update the value:

ProgressBar progressBar = (ProgressBar) findViewById(R.id.my_progressBar);
progressBar.setProgress(20);

Buffer

To create a buffer effect with the ProgressBar, set the android:indeterminate attribute to false and use the android:max, android:progress and android:secondaryProgress attributes:

<ProgressBar
    android:id="@+id/my_progressBar"
    style="@style/Widget.AppCompat.ProgressBar.Horizontal"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:indeterminate="false"
    android:max="100"
    android:progress="10"
    android:secondaryProgress="25"/>

The buffer value is defined by the android:secondaryProgress attribute. Just use this code to update the values:

ProgressBar progressBar = (ProgressBar) findViewById(R.id.my_progressBar);
progressBar.setProgress(20);
progressBar.setSecondaryProgress(50);

Indeterminate and Determinate

To obtain this kind of ProgressBar, start with an indeterminate ProgressBar by setting the android:indeterminate attribute to true.

<ProgressBar
    android:id="@+id/progressBar"
    style="@style/Widget.AppCompat.ProgressBar.Horizontal"
    android:indeterminate="true"/>

Then, when you need to switch from indeterminate to determinate progress, use the setIndeterminate() method.

ProgressBar progressBar = (ProgressBar) findViewById(R.id.my_progressBar);
progressBar.setIndeterminate(false);

Section 117.2: Tinting ProgressBar

Using an AppCompat theme, the ProgressBar's color will be the colorAccent you have defined.
Version 5.0

To change the ProgressBar color without changing the accent color, you can use the android:theme attribute, overriding the accent color:

<ProgressBar
    android:theme="@style/MyProgress"
    style="@style/Widget.AppCompat.ProgressBar" />

<!-- res/values/styles.xml -->
<style name="MyProgress" parent="Theme.AppCompat.Light">
    <item name="colorAccent">@color/myColor</item>
</style>

To tint the ProgressBar you can use the attributes android:indeterminateTintMode and android:indeterminateTint in the XML file:

<ProgressBar
    android:indeterminateTintMode="src_in"
    android:indeterminateTint="@color/my_color" />

Section 117.3: Customized progressbar

CustomProgressBarActivity.java:

public class CustomProgressBarActivity extends AppCompatActivity {

    private TextView txtProgress;
    private ProgressBar progressBar;
    private int pStatus = 0;
    private Handler handler = new Handler();

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_custom_progressbar);

        txtProgress = (TextView) findViewById(R.id.txtProgress);
        progressBar = (ProgressBar) findViewById(R.id.progressBar);

        new Thread(new Runnable() {
            @Override
            public void run() {
                while (pStatus <= 100) {
                    handler.post(new Runnable() {
                        @Override
                        public void run() {
                            progressBar.setProgress(pStatus);
                            txtProgress.setText(pStatus + " %");
                        }
                    });
                    try {
                        Thread.sleep(100);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                    pStatus++;
                }
            }
        }).start();
    }
}

activity_custom_progressbar.xml:

<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:paddingBottom="@dimen/activity_vertical_margin"
    android:paddingLeft="@dimen/activity_horizontal_margin"
    android:paddingRight="@dimen/activity_horizontal_margin"
    android:paddingTop="@dimen/activity_vertical_margin"
    tools:context="com.skholingua.android.custom_progressbar_circular.MainActivity" >

    <RelativeLayout
        android:layout_width="wrap_content"
        android:layout_centerInParent="true"
        android:layout_height="wrap_content">

        <ProgressBar
            android:id="@+id/progressBar"
            style="?android:attr/progressBarStyleHorizontal"
            android:layout_width="250dp"
            android:layout_height="250dp"
            android:layout_centerInParent="true"
            android:indeterminate="false"
            android:max="100"
            android:progress="0"
            android:progressDrawable="@drawable/custom_progressbar_drawable"
            android:secondaryProgress="0" />

        <TextView
            android:id="@+id/txtProgress"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:layout_alignBottom="@+id/progressBar"
            android:layout_centerInParent="true"
            android:textAppearance="?android:attr/textAppearanceSmall" />
    </RelativeLayout>

</RelativeLayout>

custom_progressbar_drawable.xml:

<?xml version="1.0" encoding="utf-8"?>
<rotate xmlns:android="http://schemas.android.com/apk/res/android"
    android:fromDegrees="-90"
    android:pivotX="50%"
    android:pivotY="50%"
    android:toDegrees="270" >

    <shape
        android:shape="ring"
        android:useLevel="false" >
        <gradient
            android:centerY="0.5"
            android:endColor="#FA5858"
            android:startColor="#0099CC"
            android:type="sweep"
            android:useLevel="false" />
    </shape>

</rotate>

Section 117.4: Creating Custom Progress Dialog

By creating a custom progress dialog class, the same dialog instance can be shown in the UI without being recreated each time.
First, create a custom progress dialog class.

CustomProgress.java

public class CustomProgress {

    public static CustomProgress customProgress = null;
    private Dialog mDialog;
    private ProgressBar mProgressBar;

    public static CustomProgress getInstance() {
        if (customProgress == null) {
            customProgress = new CustomProgress();
        }
        return customProgress;
    }

    public void showProgress(Context context, String message, boolean cancelable) {
        mDialog = new Dialog(context);
        // no title for the dialog
        mDialog.requestWindowFeature(Window.FEATURE_NO_TITLE);
        mDialog.setContentView(R.layout.prograss_bar_dialog);
        mProgressBar = (ProgressBar) mDialog.findViewById(R.id.progress_bar);
        // mProgressBar.getIndeterminateDrawable().setColorFilter(context.getResources()
        //         .getColor(R.color.material_blue_gray_500), PorterDuff.Mode.SRC_IN);
        TextView progressText = (TextView) mDialog.findViewById(R.id.progress_text);
        progressText.setText("" + message);
        progressText.setVisibility(View.VISIBLE);
        mProgressBar.setVisibility(View.VISIBLE);
        // you can change or add this line according to your need
        mProgressBar.setIndeterminate(true);
        mDialog.setCancelable(cancelable);
        mDialog.setCanceledOnTouchOutside(cancelable);
        mDialog.show();
    }

    public void hideProgress() {
        if (mDialog != null) {
            mDialog.dismiss();
            mDialog = null;
        }
    }
}

Now create the custom progress layout.

prograss_bar_dialog.xml

<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="wrap_content"
    android:layout_height="65dp"
    android:background="@android:color/background_dark"
    android:orientation="vertical">

    <TextView
        android:id="@+id/progress_text"
        android:layout_width="wrap_content"
        android:layout_height="40dp"
        android:layout_above="@+id/progress_bar"
        android:layout_marginLeft="10dp"
        android:layout_marginStart="10dp"
        android:background="@android:color/transparent"
        android:gravity="center_vertical"
        android:text=""
        android:textColor="@android:color/white"
        android:textSize="16sp"
        android:visibility="gone" />

    <!-- The style can be changed to any kind of ProgressBar -->
    <ProgressBar
        android:id="@+id/progress_bar"
        style="@android:style/Widget.DeviceDefault.ProgressBar.Horizontal"
        android:layout_width="match_parent"
        android:layout_height="30dp"
        android:layout_alignParentBottom="true"
        android:layout_alignParentLeft="true"
        android:layout_alignParentStart="true"
        android:layout_gravity="center"
        android:background="@color/cardview_dark_background"
        android:maxHeight="20dp"
        android:minHeight="20dp" />

</RelativeLayout>

That's it. Now call the dialog from code:

CustomProgress customProgress = CustomProgress.getInstance();
// now you have the instance of CustomProgress

// for showing the ProgressBar
customProgress.showProgress(#Context, getString(#StringId), #boolean);

// for hiding the ProgressBar
customProgress.hideProgress();

Section 117.5: Indeterminate ProgressBar

An indeterminate ProgressBar shows a cyclic animation without an indication of progress.
Basic indeterminate ProgressBar (spinning wheel)

<ProgressBar
    android:id="@+id/progressBar"
    android:indeterminate="true"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"/>

Horizontal indeterminate ProgressBar (flat bar)

<ProgressBar
    android:id="@+id/progressBar"
    android:indeterminate="true"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    style="@android:style/Widget.ProgressBar.Horizontal"/>

Other built-in ProgressBar styles

style="@android:style/Widget.ProgressBar.Small"
style="@android:style/Widget.ProgressBar.Large"
style="@android:style/Widget.ProgressBar.Inverse"
style="@android:style/Widget.ProgressBar.Small.Inverse"
style="@android:style/Widget.ProgressBar.Large.Inverse"

To use the indeterminate ProgressBar in an Activity:

ProgressBar progressBar = (ProgressBar) findViewById(R.id.progressBar);
progressBar.setVisibility(View.VISIBLE);
progressBar.setVisibility(View.GONE);

Section 117.6: Determinate ProgressBar

A determinate ProgressBar shows the current progress towards a specific maximum value.

Horizontal determinate ProgressBar

<ProgressBar
    android:id="@+id/progressBar"
    android:indeterminate="false"
    android:layout_width="match_parent"
    android:layout_height="10dp"
    style="@android:style/Widget.ProgressBar.Horizontal"/>

Vertical determinate ProgressBar

<ProgressBar
    android:id="@+id/progressBar"
    android:indeterminate="false"
    android:layout_width="10dp"
    android:layout_height="match_parent"
    android:progressDrawable="@drawable/progress_vertical"
    style="@android:style/Widget.ProgressBar.Horizontal"/>

res/drawable/progress_vertical.xml

<?xml version="1.0" encoding="utf-8"?>
<layer-list xmlns:android="http://schemas.android.com/apk/res/android">
    <item android:id="@android:id/background">
        <shape>
            <corners android:radius="3dp"/>
            <solid android:color="@android:color/darker_gray"/>
        </shape>
    </item>
    <item android:id="@android:id/secondaryProgress">
        <clip android:clipOrientation="vertical" android:gravity="bottom">
            <shape>
                <corners android:radius="3dp"/>
                <solid android:color="@android:color/holo_blue_light"/>
            </shape>
        </clip>
    </item>
    <item android:id="@android:id/progress">
        <clip android:clipOrientation="vertical" android:gravity="bottom">
            <shape>
                <corners android:radius="3dp"/>
                <solid android:color="@android:color/holo_blue_dark"/>
            </shape>
        </clip>
    </item>
</layer-list>

Ring determinate ProgressBar

<ProgressBar
    android:id="@+id/progressBar"
    android:indeterminate="false"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:progressDrawable="@drawable/progress_ring"
    style="@android:style/Widget.ProgressBar.Horizontal"/>

res/drawable/progress_ring.xml

<?xml version="1.0" encoding="utf-8"?>
<layer-list xmlns:android="http://schemas.android.com/apk/res/android">
    <item android:id="@android:id/secondaryProgress">
        <shape
            android:shape="ring"
            android:useLevel="true"
            android:thicknessRatio="24"
            android:innerRadiusRatio="2.2">
            <corners android:radius="3dp"/>
            <solid android:color="#0000FF"/>
        </shape>
    </item>
    <item android:id="@android:id/progress">
        <shape
            android:shape="ring"
            android:useLevel="true"
            android:thicknessRatio="24"
            android:innerRadiusRatio="2.2">
            <corners android:radius="3dp"/>
            <solid android:color="#FFFFFF"/>
        </shape>
    </item>
</layer-list>

To use the determinate ProgressBar in an Activity:
ProgressBar progressBar = (ProgressBar) findViewById(R.id.progressBar);
progressBar.setSecondaryProgress(100);
progressBar.setProgress(10);
progressBar.setMax(100);

Chapter 118: Custom Fonts

Section 118.1: Custom font in canvas text

Drawing text in a canvas with your font from assets:

Typeface typeface = Typeface.createFromAsset(getAssets(), "fonts/SomeFont.ttf");
Paint textPaint = new Paint();
textPaint.setTypeface(typeface);
canvas.drawText("Your text here", x, y, textPaint);

Section 118.2: Working with fonts in Android O

Android O changes the way to work with fonts. It introduces a new feature, called Fonts in XML, which allows you to use fonts as resources. This means that there is no need to bundle fonts as assets. Fonts are now compiled into the R file and are automatically available in the system as a resource. In order to add a new font, you have to do the following:

Create a new resource directory: res/font.
Add your font files into this font folder. For example, by adding myfont.ttf, you will be able to use this font via R.font.myfont.

You can also create your own font family by adding the following XML file into the res/font directory:

<?xml version="1.0" encoding="utf-8"?>
<font-family xmlns:android="http://schemas.android.com/apk/res/android">
    <font
        android:fontStyle="normal"
        android:fontWeight="400"
        android:font="@font/lobster_regular" />
    <font
        android:fontStyle="italic"
        android:fontWeight="400"
        android:font="@font/lobster_italic" />
</font-family>

You can use both the font file and the font family file in the same way:

In an XML file, by using the android:fontFamily attribute, for example like this:

<TextView
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:fontFamily="@font/myfont"/>

Or like this:

<style name="customfontstyle" parent="@android:style/TextAppearance.Small">
    <item name="android:fontFamily">@font/myfont</item>
</style>

In your code, by using the following lines of code:

Typeface typeface = getResources().getFont(R.font.myfont);
textView.setTypeface(typeface);

Section 118.3: Custom font to whole activity

public class ReplaceFont {

    public static void changeDefaultFont(Context context, String oldFont, String assetsFont) {
        Typeface typeface = Typeface.createFromAsset(context.getAssets(), assetsFont);
        replaceFont(oldFont, typeface);
    }

    private static void replaceFont(String oldFont, Typeface typeface) {
        try {
            Field myField = Typeface.class.getDeclaredField(oldFont);
            myField.setAccessible(true);
            myField.set(null, typeface);
        } catch (NoSuchFieldException e) {
            e.printStackTrace();
        } catch (IllegalAccessException e) {
            e.printStackTrace();
        }
    }
}

Then in your activity, in the onCreate() method:

// Put your font into the assets folder...
ReplaceFont.changeDefaultFont(getApplication(), "DEFAULT", "LinLibertine.ttf");

Section 118.4: Putting a custom font in your app

1. Go to the project folder.
2. Then app -> src -> main.
3. Create the folder assets/fonts inside the main folder.
4. Put your fontfile.ttf into the fonts folder.

Section 118.5: Initializing a font

private Typeface myFont;

// A good practice might be to call this in onCreate() of a custom
// Application class and pass 'this' as Context. Your font will be ready to use
// as long as your app lives
public void initFont(Context context) {
    myFont = Typeface.createFromAsset(context.getAssets(), "fonts/Roboto-Light.ttf");
}

Section 118.6: Using a custom font in a TextView

public void setFont(TextView textView) {
    textView.setTypeface(myFont);
}

Section 118.7: Apply font on TextView by xml (no Java code required)

TextViewPlus.java:

public class TextViewPlus extends TextView {
    private static final String TAG = "TextView";

    public TextViewPlus(Context context) {
        super(context);
    }

    public TextViewPlus(Context context, AttributeSet attrs) {
        super(context, attrs);
        setCustomFont(context, attrs);
    }

    public TextViewPlus(Context context, AttributeSet attrs, int defStyle) {
        super(context, attrs, defStyle);
        setCustomFont(context, attrs);
    }

    private void setCustomFont(Context ctx, AttributeSet attrs) {
        TypedArray a = ctx.obtainStyledAttributes(attrs, R.styleable.TextViewPlus);
        String customFont = a.getString(R.styleable.TextViewPlus_customFont);
        setCustomFont(ctx, customFont);
        a.recycle();
    }

    public boolean setCustomFont(Context ctx, String asset) {
        Typeface typeface = null;
        try {
            typeface = Typeface.createFromAsset(ctx.getAssets(), asset);
        } catch (Exception e) {
            Log.e(TAG, "Unable to load typeface: " + e.getMessage());
            return false;
        }
        setTypeface(typeface);
        return true;
    }
}

attrs.xml (place it in res/values):

<?xml version="1.0" encoding="utf-8"?>
<resources>
    <declare-styleable name="TextViewPlus">
        <attr name="customFont" format="string"/>
    </declare-styleable>
</resources>

How to use:

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:foo="http://schemas.android.com/apk/res-auto"
    android:orientation="vertical"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent">

    <com.mypackage.TextViewPlus
        android:id="@+id/textViewPlus1"
        android:layout_height="match_parent"
        android:layout_width="match_parent"
        android:text="@string/showingOffTheNewTypeface"
        foo:customFont="my_font_name_regular.otf">
    </com.mypackage.TextViewPlus>
</LinearLayout>

Section 118.8: Efficient Typeface loading

Loading custom fonts can lead to bad performance. I highly recommend using this little helper, which caches already loaded fonts in a Hashtable:

public class TypefaceUtils {

    private static final Hashtable<String, Typeface> sTypeFaces = new Hashtable<>();

    /**
     * Get typeface by filename from the assets main directory
     *
     * @param context
     * @param fileName the name of the font file in the assets main directory
     * @return
     */
    public static Typeface getTypeFace(final Context context, final String fileName) {
        Typeface tempTypeface = sTypeFaces.get(fileName);

        if (tempTypeface == null) {
            tempTypeface = Typeface.createFromAsset(context.getAssets(), fileName);
            sTypeFaces.put(fileName, tempTypeface);
        }

        return tempTypeface;
    }
}

Usage:

Typeface typeface = TypefaceUtils.getTypeFace(context, "RobotoSlab-Bold.ttf");
setTypeface(typeface);

Chapter 119: Getting system font names and using the fonts

The following examples show how to retrieve the default names of the system fonts that are stored in the /system/fonts/ directory and how to use a system font to set the typeface of a TextView element.
Section 119.1: Getting system font names

ArrayList<String> fontNames = new ArrayList<String>();
File temp = new File("/system/fonts/");
String fontSuffix = ".ttf";

for (File font : temp.listFiles()) {
    String fontName = font.getName();
    if (fontName.endsWith(fontSuffix)) {
        fontNames.add(fontName.subSequence(0, fontName.lastIndexOf(fontSuffix)).toString());
    }
}

Section 119.2: Applying a system font to a TextView

In the following code you need to replace fontsname by the name of the font you would like to use:

TextView lblexample = (TextView) findViewById(R.id.lblexample);
lblexample.setTypeface(Typeface.createFromFile("/system/fonts/" + "fontsname" + ".ttf"));

Chapter 120: Text to Speech (TTS)

Section 120.1: Text to Speech Base

layout_text_to_speech.xml:

<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:padding="16dp">

    <EditText
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:hint="Enter text here!"
        android:id="@+id/textToSpeak"/>

    <Button
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_centerHorizontal="true"
        android:layout_below="@id/textToSpeak"
        android:id="@+id/btnSpeak"/>
</RelativeLayout>

AndroidTextToSpeechActivity.java:

public class AndroidTextToSpeechActivity extends Activity implements
        TextToSpeech.OnInitListener {

    EditText textToSpeak = null;
    Button btnSpeak = null;
    TextToSpeech tts;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.layout_text_to_speech);
        textToSpeak = (EditText) findViewById(R.id.textToSpeak);
        btnSpeak = (Button) findViewById(R.id.btnSpeak);
        btnSpeak.setEnabled(false);
        tts = new TextToSpeech(this, this);
        btnSpeak.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                speakOut();
            }
        });
    }

    @Override
    public void onDestroy() {
        // Don't forget to shut down tts!
        if (tts != null) {
            tts.stop();
            tts.shutdown();
        }
        super.onDestroy();
    }

    @Override
    public void onInit(int status) {
        if (status == TextToSpeech.SUCCESS) {
            int result = tts.setLanguage(Locale.US);
            if (result == TextToSpeech.LANG_MISSING_DATA
                    || result == TextToSpeech.LANG_NOT_SUPPORTED) {
                Log.e("TTS", "This language is not supported");
            } else {
                btnSpeak.setEnabled(true);
                speakOut();
            }
        } else {
            Log.e("TTS", "Initialization failed!");
        }
    }

    private void speakOut() {
        String text = textToSpeak.getText().toString();
        if (text == null || text.isEmpty()) return;

        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.LOLLIPOP) {
            String utteranceId = this.hashCode() + "";
            tts.speak(text, TextToSpeech.QUEUE_FLUSH, null, utteranceId);
        } else {
            tts.speak(text, TextToSpeech.QUEUE_FLUSH, null);
        }
    }
}

The language to be spoken can be set by providing a Locale to the setLanguage() method:

tts.setLanguage(Locale.CHINESE); // Chinese language

The number of supported languages varies between Android levels. The method isLanguageAvailable() can be used to check if a certain language is supported:

tts.isLanguageAvailable(Locale.CHINESE);

The speech pitch level can be set by using the setPitch() method. By default, the pitch value is 1.0. Use values less than 1.0 to decrease the pitch level or values greater than 1.0 to increase it:

tts.setPitch(0.6f);

The speech rate can be set using setSpeechRate(). The default speech rate is 1.0.
The speech rate can be doubled by setting it to 2.0f or halved by setting it to 0.5f:

tts.setSpeechRate(2.0f);

Section 120.2: TextToSpeech implementation across the APIs

A cold observable implementation: it emits true when the TTS engine finishes speaking, and starts speaking when subscribed. Notice that API level 21 introduces a different way to perform speaking:

public class RxTextToSpeech {

    @Nullable RxTTSObservableOnSubscribe audio;

    WeakReference<Context> contextRef;

    public RxTextToSpeech(Context context) {
        this.contextRef = new WeakReference<>(context);
    }

    public void requestTTS(FragmentActivity activity, int requestCode) {
        Intent checkTTSIntent = new Intent();
        checkTTSIntent.setAction(TextToSpeech.Engine.ACTION_CHECK_TTS_DATA);
        activity.startActivityForResult(checkTTSIntent, requestCode);
    }

    public void cancelCurrent() {
        if (audio != null) {
            audio.dispose();
            audio = null;
        }
    }

    public Observable<Boolean> speak(String textToRead) {
        audio = new RxTTSObservableOnSubscribe(contextRef.get(), textToRead, Locale.GERMANY);
        return Observable.create(audio);
    }

    public static class RxTTSObservableOnSubscribe extends UtteranceProgressListener
            implements ObservableOnSubscribe<Boolean>, Disposable, Cancellable,
            TextToSpeech.OnInitListener {

        volatile boolean disposed;
        ObservableEmitter<Boolean> emitter;
        TextToSpeech textToSpeech;
        String text = "";
        Locale selectedLocale;
        Context context;

        public RxTTSObservableOnSubscribe(Context context, String text, Locale locale) {
            this.selectedLocale = locale;
            this.context = context;
            this.text = text;
        }

        @Override
        public void subscribe(ObservableEmitter<Boolean> e) throws Exception {
            this.emitter = e;
            if (context == null) {
                this.emitter.onError(new Throwable("nullable context, cannot execute " + text));
            } else {
                this.textToSpeech = new TextToSpeech(context, this);
            }
        }

        @Override
        @DebugLog
        public void dispose() {
            if (textToSpeech != null) {
                textToSpeech.setOnUtteranceProgressListener(null);
                textToSpeech.stop();
                textToSpeech.shutdown();
                textToSpeech = null;
            }
            disposed = true;
        }

        @Override
        public boolean isDisposed() {
            return disposed;
        }

        @Override
        public void cancel() throws Exception {
            dispose();
        }

        @Override
        public void onInit(int status) {
            int languageCode = textToSpeech.setLanguage(selectedLocale);
            if (languageCode == android.speech.tts.TextToSpeech.LANG_COUNTRY_AVAILABLE) {
                textToSpeech.setPitch(1);
                textToSpeech.setSpeechRate(1.0f);
                textToSpeech.setOnUtteranceProgressListener(this);
                performSpeak();
            } else {
                emitter.onError(new Throwable("language " + selectedLocale.getCountry()
                        + " is not supported"));
            }
        }

        @Override
        public void onStart(String utteranceId) {
            // no-op
        }

        @Override
        public void onDone(String utteranceId) {
            this.emitter.onNext(true);
            this.emitter.onComplete();
        }

        @Override
        public void onError(String utteranceId) {
            this.emitter.onError(new Throwable("error TTS " + utteranceId));
        }

        void performSpeak() {
            if (isAtLeastApiLevel(21)) {
                speakWithNewApi();
            } else {
                speakWithOldApi();
            }
        }

        @RequiresApi(api = 21)
        void speakWithNewApi() {
            Bundle params = new Bundle();
            params.putString(TextToSpeech.Engine.KEY_PARAM_UTTERANCE_ID, "");
            textToSpeech.speak(text, TextToSpeech.QUEUE_ADD, params, uniqueId());
        }

        void speakWithOldApi() {
            HashMap<String, String> map = new HashMap<>();
            map.put(TextToSpeech.Engine.KEY_PARAM_UTTERANCE_ID, uniqueId());
            textToSpeech.speak(text, TextToSpeech.QUEUE_ADD, map);
        }

        private String uniqueId() {
            return UUID.randomUUID().toString();
        }
    }
    public static boolean isAtLeastApiLevel(int apiLevel) {
        return Build.VERSION.SDK_INT >= apiLevel;
    }
}

Chapter 121: Spinner

Section 121.1: Basic Spinner Example

A Spinner is a type of dropdown input. First, in your layout:

<!-- id to refer to this spinner from Java -->
<Spinner
    android:id="@+id/spinner"
    android:layout_width="match_parent"
    android:layout_height="wrap_content">
</Spinner>

Second, populate the spinner with values. There are mainly two ways to do this:

1. From XML itself. Create an array.xml in the values directory under res, containing this array:

<string-array name="defaultValue">
    <item>--Select City Area--</item>
    <item>--Select City Area--</item>
    <item>--Select City Area--</item>
</string-array>

Then add this line to the spinner XML:

android:entries="@array/defaultValue"

2. You can also add values via Java. If you are in an activity:

cityArea = (Spinner) findViewById(R.id.cityArea);

Else, if you are in a fragment:

cityArea = (Spinner) view.findViewById(R.id.cityArea);

Now create an ArrayList of Strings:

ArrayList<String> area = new ArrayList<>();
// add values to the area ArrayList
cityArea.setAdapter(new ArrayAdapter<String>(context,
        android.R.layout.simple_list_item_1, area));

The spinner will be rendered with a style according to the device's Android version. Following are some of the default themes. If an app does not explicitly request a theme in its manifest, the Android system determines the default theme based on the app's targetSdkVersion to maintain the app's original expectations:

Android SDK Version          Default Theme
Version < 11                 @android:style/Theme
Version between 11 and 13    @android:style/Theme.Holo
14 and higher                @android:style/Theme.DeviceDefault

The Spinner can be easily customized with the help of XML, e.g.:

android:background="@drawable/spinner_background"
android:layout_margin="16dp"
android:padding="16dp"

Create a custom background in XML and use it.

To easily get the position and other details of the selected item in the spinner:

cityArea.setOnItemSelectedListener(new AdapterView.OnItemSelectedListener() {
    @Override
    public void onItemSelected(AdapterView<?> parent, View view, int position, long id) {
        areaNo = position;
    }

    @Override
    public void onNothingSelected(AdapterView<?> parent) {
    }
});

To change the text color of the selected item in the spinner, this can be done in two ways. In XML:

<item android:state_activated="true" android:color="@color/red"/>

This will change the selected item color in the popup. And from Java, do this (in the setOnItemSelectedListener(...)):

@Override
public void onItemSelected(AdapterView<?> parent, View view, int position, long id) {
    ((TextView) parent.getChildAt(0)).setTextColor(0x00000000);
    // similarly change `background color` etc.
}

Section 121.2: Adding a spinner to your activity

In /res/values/strings.xml:

<string-array name="spinner_options">
    <item>Option 1</item>
    <item>Option 2</item>
    <item>Option 3</item>
</string-array>

In layout XML:

<Spinner
    android:id="@+id/spinnerName"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:entries="@array/spinner_options" />

In the Activity:

Spinner spinnerName = (Spinner) findViewById(R.id.spinnerName);
spinnerName.setOnItemSelectedListener(new OnItemSelectedListener() {
    @Override
    public void onItemSelected(AdapterView<?> parent, View view, int position, long id) {
        String chosenOption = (String) parent.getItemAtPosition(position);
    }

    @Override
    public void onNothingSelected(AdapterView<?> parent) {}
});

Chapter 122: Data Encryption/Decryption

This topic discusses how encryption and decryption work in Android.

Section 122.1: AES encryption of data using password in a secure way

The following example encrypts a given data block using AES. The encryption key is derived in a secure way (random salt, 1000 rounds of SHA-256). The encryption uses AES in CBC mode with a random IV.

Note that the data stored in the class EncryptedData (salt, iv, and encryptedData) can be concatenated to a single byte array. You can then save the data or transmit it to the recipient.

private static final int SALT_BYTES = 8;
private static final int PBK_ITERATIONS = 1000;
private static final String ENCRYPTION_ALGORITHM = "AES/CBC/PKCS5Padding";
private static final String PBE_ALGORITHM = "PBEwithSHA256and128BITAES-CBC-BC";

private EncryptedData encrypt(String password, byte[] data) throws NoSuchPaddingException,
        NoSuchAlgorithmException, InvalidKeySpecException, InvalidKeyException,
        BadPaddingException, IllegalBlockSizeException, InvalidAlgorithmParameterException {
    EncryptedData encData = new EncryptedData();
    SecureRandom rnd = new SecureRandom();
    encData.salt = new byte[SALT_BYTES];
    encData.iv = new byte[16]; // AES block size
    rnd.nextBytes(encData.salt);
    rnd.nextBytes(encData.iv);

    PBEKeySpec keySpec = new PBEKeySpec(password.toCharArray(), encData.salt, PBK_ITERATIONS);
    SecretKeyFactory secretKeyFactory = SecretKeyFactory.getInstance(PBE_ALGORITHM);
    Key key = secretKeyFactory.generateSecret(keySpec);

    Cipher cipher = Cipher.getInstance(ENCRYPTION_ALGORITHM);
    IvParameterSpec ivSpec = new IvParameterSpec(encData.iv);
    cipher.init(Cipher.ENCRYPT_MODE, key, ivSpec);
    encData.encryptedData = cipher.doFinal(data);
    return encData;
}

private byte[] decrypt(String password, byte[] salt, byte[] iv, byte[] encryptedData)
        throws NoSuchAlgorithmException, InvalidKeySpecException, NoSuchPaddingException,
        InvalidKeyException, BadPaddingException, IllegalBlockSizeException,
        InvalidAlgorithmParameterException {
    PBEKeySpec keySpec = new PBEKeySpec(password.toCharArray(), salt, PBK_ITERATIONS);
    SecretKeyFactory secretKeyFactory = SecretKeyFactory.getInstance(PBE_ALGORITHM);
    Key key = secretKeyFactory.generateSecret(keySpec);

    Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
    IvParameterSpec ivSpec = new IvParameterSpec(iv);
    cipher.init(Cipher.DECRYPT_MODE, key, ivSpec);
    return cipher.doFinal(encryptedData);
}

private static class EncryptedData {
    public byte[] salt;
    public byte[] iv;
    public byte[] encryptedData;
}

The following example code shows how to test encryption and decryption:

try {
    String password = "test12345";
    byte[] data =
"plaintext11223344556677889900".getBytes("UTF-8"); EncryptedData encData = encrypt(password, data); byte[] decryptedData = decrypt(password, encData.salt, encData.iv, encData.encryptedData); String decDataAsString = new String(decryptedData, "UTF-8"); Toast.makeText(this, decDataAsString, Toast.LENGTH_LONG).show(); } catch (Exception e) { e.printStackTrace(); } GoalKicker.com Android Notes for Professionals 715 Chapter 123: OkHttp Section 123.1: Basic usage example I like to wrap my OkHttp into a class called HttpClient for example, and in this class I have methods for each of the major HTTP verbs, post, get, put and delete, most commonly. (I usually include an interface, in order to keep for it to implement, in order to be able to easily change to a dierent implementation, if need be): public class HttpClient implements HttpClientInterface{ private static final String TAG = OkHttpClient.class.getSimpleName(); public static final MediaType JSON = MediaType.parse("application/json; charset=utf-8"); OkHttpClient httpClient = new OkHttpClient(); @Override public String post(String url, String json) throws IOException { Log.i(TAG, "Sending a post request with body:\n" + json + "\n to URL: " + url); RequestBody body = RequestBody.create(JSON, json); Request request = new Request.Builder() .url(url) .post(body) .build(); Response response = httpClient.newCall(request).execute(); return response.body().string(); } The syntax is the same for put, get and delete except for 1 word (.put(body)) so it might be obnoxious to post that code as well. Usage is pretty simple, just call the appropriate method on some url with some json payload and the method will return a string as a result that you can later use and parse. Let's assume that the response will be a json, we can create a JSONObject easily from it: String response = httpClient.post(MY_URL, JSON_PAYLOAD); JSONObject json = new JSONObject(response); // continue to parse the response according to it's structure Section 123.2: Setting up OkHttp Grab via Maven: <dependency> <groupId>com.squareup.okhttp3</groupId> <artifactId>okhttp</artifactId> <version>3.6.0</version> </dependency> or Gradle: compile 'com.squareup.okhttp3:okhttp:3.6.0' Section 123.3: Logging interceptor Interceptors are used to intercept OkHttp calls. The reason to intercept could be to monitor, rewrite and retry GoalKicker.com Android Notes for Professionals 716 calls. It can be used for outgoing request or incoming response both. 
class LoggingInterceptor implements Interceptor {
    // A logger is assumed here, e.g. java.util.logging.Logger
    private static final Logger logger = Logger.getLogger(LoggingInterceptor.class.getName());

    @Override
    public Response intercept(Interceptor.Chain chain) throws IOException {
        Request request = chain.request();

        long t1 = System.nanoTime();
        logger.info(String.format("Sending request %s on %s%n%s",
                request.url(), chain.connection(), request.headers()));

        Response response = chain.proceed(request);

        long t2 = System.nanoTime();
        logger.info(String.format("Received response for %s in %.1fms%n%s",
                response.request().url(), (t2 - t1) / 1e6d, response.headers()));

        return response;
    }
}

Section 123.4: Synchronous Get Call

private final OkHttpClient client = new OkHttpClient();

public void run() throws Exception {
    Request request = new Request.Builder()
            .url(yourUrl)
            .build();

    Response response = client.newCall(request).execute();
    if (!response.isSuccessful()) throw new IOException("Unexpected code " + response);

    Headers responseHeaders = response.headers();
    System.out.println(response.body().string());
}

Section 123.5: Asynchronous Get Call

private final OkHttpClient client = new OkHttpClient();

public void run() throws Exception {
    Request request = new Request.Builder()
            .url(yourUrl)
            .build();

    client.newCall(request).enqueue(new Callback() {
        @Override
        public void onFailure(Call call, IOException e) {
            e.printStackTrace();
        }

        @Override
        public void onResponse(Call call, Response response) throws IOException {
            if (!response.isSuccessful()) throw new IOException("Unexpected code " + response);

            Headers responseHeaders = response.headers();
            System.out.println(response.body().string());
        }
    });
}

Section 123.6: Posting form parameters

private final OkHttpClient client = new OkHttpClient();

public void run() throws Exception {
    RequestBody formBody = new FormBody.Builder()
            .add("search", "Jurassic Park")
            .build();
    Request request = new Request.Builder()
            .url("https://en.wikipedia.org/w/index.php")
            .post(formBody)
            .build();

    Response response = client.newCall(request).execute();
    if (!response.isSuccessful()) throw new IOException("Unexpected code " + response);

    System.out.println(response.body().string());
}

Section 123.7: Posting a multipart request

private static final String IMGUR_CLIENT_ID = "...";
private static final MediaType MEDIA_TYPE_PNG = MediaType.parse("image/png");

private final OkHttpClient client = new OkHttpClient();

public void run() throws Exception {
    // Use the imgur image upload API as documented at https://api.imgur.com/endpoints/image
    RequestBody requestBody = new MultipartBody.Builder()
            .setType(MultipartBody.FORM)
            .addFormDataPart("title", "Square Logo")
            .addFormDataPart("image", "logo-square.png",
                    RequestBody.create(MEDIA_TYPE_PNG, new File("website/static/logo-square.png")))
            .build();

    Request request = new Request.Builder()
            .header("Authorization", "Client-ID " + IMGUR_CLIENT_ID)
            .url("https://api.imgur.com/3/image")
            .post(requestBody)
            .build();

    Response response = client.newCall(request).execute();
    if (!response.isSuccessful()) throw new IOException("Unexpected code " + response);

    System.out.println(response.body().string());
}

Section 123.8: Rewriting Responses

private static final Interceptor REWRITE_CACHE_CONTROL_INTERCEPTOR = new Interceptor() {
    @Override
    public Response intercept(Interceptor.Chain chain) throws IOException {
        Response originalResponse = chain.proceed(chain.request());
        return originalResponse.newBuilder()
                .header("Cache-Control", "max-age=60")
                .build();
    }
};
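A rewriting interceptor like the one above only influences caching if the client is actually configured with a response cache and the interceptor is registered as a network interceptor. The following is a minimal wiring sketch, not part of the original example: the "http_cache" directory name and the 10 MB size are arbitrary choices, and a context variable is assumed to be available.

// Hypothetical wiring of the interceptor above into a client with a cache
OkHttpClient client = new OkHttpClient.Builder()
        .cache(new Cache(new File(context.getCacheDir(), "http_cache"), 10L * 1024L * 1024L))
        .addNetworkInterceptor(REWRITE_CACHE_CONTROL_INTERCEPTOR)
        .build();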
Chapter 124: Handling Deep Links

<data> Attribute    Details
scheme              The scheme part of a URI (case-sensitive). Examples: http, https, ftp
host                The host part of a URI (case-sensitive). Examples: google.com, example.org
port                The port part of a URI. Examples: 80, 443
path                The path part of a URI. Must begin with /. Examples: /, /about
pathPrefix          A prefix for the path part of a URI. Examples: /item, /article
pathPattern         A pattern to match for the path part of a URI. Examples: /item/.*, /article/[0-9]*
mimeType            A mime type to match. Examples: image/jpeg, audio/*

Deep links are URLs that take users directly to specific content in your app. You can set up deep links by adding intent filters and extracting data from incoming intents to drive users to the right screen in your app.

Section 124.1: Retrieving query parameters

public class MainActivity extends Activity {

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);

        Intent intent = getIntent();
        Uri data = intent.getData();
        if (data != null) {
            String param1 = data.getQueryParameter("param1");
            String param2 = data.getQueryParameter("param2");
        }
    }
}

If the user clicks on a link to http://www.example.com/map?param1=FOO&param2=BAR, then param1 here will have a value of "FOO" and param2 will have a value of "BAR".

Section 124.2: Simple deep link

AndroidManifest.xml:

<activity android:name="com.example.MainActivity" >
    <intent-filter>
        <action android:name="android.intent.action.VIEW" />
        <category android:name="android.intent.category.DEFAULT" />
        <category android:name="android.intent.category.BROWSABLE" />

        <data
            android:scheme="http"
            android:host="www.example.com" />
    </intent-filter>
</activity>

This will accept any link starting with http://www.example.com as a deep link to start your MainActivity.
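One common way to verify that such a filter actually resolves to your activity is to fire the VIEW intent from the command line with adb. This is offered as an illustrative example; replace com.example with your actual application ID:

adb shell am start -W -a android.intent.action.VIEW \
    -d "http://www.example.com/map?param1=FOO&param2=BAR" com.example

If the filter matches, the activity launches directly; otherwise Android reports that the intent could not be resolved to your package.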
Section 124.3: Multiple paths on a single domain

AndroidManifest.xml:

<activity android:name="com.example.MainActivity" >
    <intent-filter>
        <action android:name="android.intent.action.VIEW" />
        <category android:name="android.intent.category.DEFAULT" />
        <category android:name="android.intent.category.BROWSABLE" />

        <data
            android:scheme="http"
            android:host="www.example.com" />
        <data android:path="/" />
        <data android:path="/about" />
        <data android:path="/map" />
    </intent-filter>
</activity>

This will launch your MainActivity when the user clicks any of these links:

http://www.example.com/
http://www.example.com/about
http://www.example.com/map

Section 124.4: Multiple domains and multiple paths

AndroidManifest.xml:

<activity android:name="com.example.MainActivity" >
    <intent-filter>
        <action android:name="android.intent.action.VIEW" />
        <category android:name="android.intent.category.DEFAULT" />
        <category android:name="android.intent.category.BROWSABLE" />

        <data
            android:scheme="http"
            android:host="www.example.com" />
        <data
            android:scheme="http"
            android:host="www.example2.com" />
        <data android:path="/" />
        <data android:path="/map" />
    </intent-filter>
</activity>

This will launch your MainActivity when the user clicks any of these links:

http://www.example.com/
http://www.example2.com/
http://www.example.com/map
http://www.example2.com/map

Section 124.5: Both http and https for the same domain

AndroidManifest.xml:

<activity android:name="com.example.MainActivity" >
    <intent-filter>
        <action android:name="android.intent.action.VIEW" />
        <category android:name="android.intent.category.DEFAULT" />
        <category android:name="android.intent.category.BROWSABLE" />

        <data android:scheme="http" />
        <data android:scheme="https" />
        <data android:host="www.example.com" />
        <data android:path="/" />
        <data android:path="/map" />
    </intent-filter>
</activity>

This will launch your MainActivity when the user clicks any of these links:

http://www.example.com/
https://www.example.com/
http://www.example.com/map
https://www.example.com/map

Section 124.6: Using pathPrefix

AndroidManifest.xml:

<activity android:name="com.example.MainActivity" >
    <intent-filter>
        <action android:name="android.intent.action.VIEW" />
        <category android:name="android.intent.category.DEFAULT" />
        <category android:name="android.intent.category.BROWSABLE" />

        <data
            android:scheme="http"
            android:host="www.example.com"
            android:pathPrefix="/item" />
    </intent-filter>
</activity>

This will launch your MainActivity when the user clicks any link starting with http://www.example.com/item, such as:

http://www.example.com/item
http://www.example.com/item/1234
http://www.example.com/item/xyz/details

Chapter 125: Crash Reporting Tools

Section 125.1: Fabric - Crashlytics

Fabric is a modular mobile platform that provides useful kits you can mix to build your application. Crashlytics is a crash and issue reporting tool provided by Fabric that allows you to track and monitor your applications in detail.
How to Configure Fabric-Crashlytics

Step 1: Change your build.gradle:

Add the plugin repo and the Gradle plugin:

buildscript {
    repositories {
        maven { url 'https://maven.fabric.io/public' }
    }

    dependencies {
        // The Fabric Gradle plugin uses an open ended version to react
        // quickly to Android tooling updates
        classpath 'io.fabric.tools:gradle:1.+'
    }
}

Apply the plugin:

apply plugin: 'com.android.application'
// Put Fabric plugin after Android plugin
apply plugin: 'io.fabric'

Add the Fabric repo:

repositories {
    maven { url 'https://maven.fabric.io/public' }
}

Add the Crashlytics kit:

dependencies {
    compile('com.crashlytics.sdk.android:crashlytics:2.6.6@aar') {
        transitive = true;
    }
}

Step 2: Add your API key and the INTERNET permission in AndroidManifest.xml:

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android">
    <application
        ... >

        <meta-data
            android:name="io.fabric.ApiKey"
            android:value="25eeca3bb31cd41577e097cabd1ab9eee9da151d" />
    </application>

    <uses-permission android:name="android.permission.INTERNET" />
</manifest>

Step 3: Init the kit at runtime in your code, for example:

public class MainActivity extends ActionBarActivity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Init the KIT
        Fabric.with(this, new Crashlytics());
        setContentView(R.layout.activity_main);
    }
}

Step 4: Build and run the project.

Using the Fabric IDE plugin

Alternatively, kits can be installed using the Fabric IDE plugin for Android Studio or IntelliJ. After installing the plugin, restart Android Studio and log in with your account (shortcut: Ctrl + L). The plugin will then show the projects you have opened; select the one you need and proceed through the wizard. Select the kit you would like to add (for this example, Crashlytics) and hit Install. This time you don't need to add it manually as with the Gradle plugin above; it will build for you. Done!

Section 125.2: Capture crashes using Sherlock

Sherlock captures all your crashes and reports them as a notification. When you tap on the notification, it opens up an activity with all the crash details, along with device and application info.

How to integrate Sherlock with your application?

You just need to add Sherlock as a Gradle dependency in your project:

dependencies {
    compile('com.github.ajitsing:sherlock:1.0.1@aar') {
        transitive = true
    }
}

After syncing your Android Studio project, initialize Sherlock in your Application class:

package com.singhajit.login;

import android.app.Application;
import com.singhajit.sherlock.core.Sherlock;

public class SampleApp extends Application {

    @Override
    public void onCreate() {
        super.onCreate();
        Sherlock.init(this);
    }
}

That's all you need to do. Sherlock also does much more than just reporting a crash; to check out all its features, take a look at the project's documentation.

Section 125.3: Force a Test Crash With Fabric

Add a button you can tap to trigger a crash. Paste this code into your layout where you'd like the button to appear:

<Button
    android:layout_height="wrap_content"
    android:layout_width="wrap_content"
    android:text="Force Crash!"
android:onClick="forceCrash" android:layout_centerVertical="true" GoalKicker.com Android Notes for Professionals 729 android:layout_centerHorizontal="true" /> Throw a RuntimeException public void forceCrash(View view) { throw new RuntimeException("This is a crash"); } Run your app and tap the new button to cause a crash. In a minute or two you should be able to see the crash on your Crashlytics dashboard as well as you will get a mail. Section 125.4: Crash Reporting with ACRA Step 1: Add the dependency of latest ACRA AAR to your application gradle(build.gradle). Step 2: In your application class(the class which extends Application; if not create it) Add a @ReportsCrashes annotation and override the attachBaseContext() method. Step 3: Initialize the ACRA class in your application class @ReportsCrashes( formUri = "Your choice of backend", reportType = REPORT_TYPES(JSON/FORM), httpMethod = HTTP_METHOD(POST/PUT), formUriBasicAuthLogin = "AUTH_USERNAME", formUriBasicAuthPassword = "AUTH_PASSWORD, customReportContent = { ReportField.USER_APP_START_DATE, ReportField.USER_CRASH_DATE, ReportField.APP_VERSION_CODE, ReportField.APP_VERSION_NAME, ReportField.ANDROID_VERSION, ReportField.DEVICE_ID, ReportField.BUILD, ReportField.BRAND, ReportField.DEVICE_FEATURES, ReportField.PACKAGE_NAME, ReportField.REPORT_ID, ReportField.STACK_TRACE, }, mode = NOTIFICATION_TYPE(TOAST,DIALOG,NOTIFICATION) resToastText = R.string.crash_text_toast) public class MyApplication extends Application { @Override protected void attachBaseContext(Context base) { super.attachBaseContext(base); // Initialization of ACRA ACRA.init(this); } } Where AUTH_USERNAME and AUTH_PASSWORD are the credentials of your desired backends. Step 4: Dene the Application class in AndroidManifest.xml <application android:name=".MyApplication"> <service></service> GoalKicker.com Android Notes for Professionals 730 <activity></activity> <receiver></receiver> </application> Step 5: Make sure you have internet permission to receive the report from crashed application <uses-permission android:name="android.permission.INTERNET"/> In case if you want to send the silent report to the backend then just use the below method to achieve it. ACRA.getErrorReporter().handleSilentException(e); GoalKicker.com Android Notes for Professionals 731 Chapter 126: Check Internet Connectivity Parameter Detail Context A reference of Activity context This method is used to check weather WI-Fi is connected or not. Section 126.1: Check if device has internet connectivity Add the required network permissions to the application manifest le: <uses-permission android:name="android.permission.ACCESS_WIFI_STATE" /> <uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" /> <uses-permission android:name="android.permission.INTERNET" /> /** * If network connectivity is available, will return true * * @param context the current context * @return boolean true if a network connection is available */ public static boolean isNetworkAvailable(Context context) { ConnectivityManager connectivity = (ConnectivityManager) context .getSystemService(Context.CONNECTIVITY_SERVICE); if (connectivity == null) { Log.d("NetworkCheck", "isNetworkAvailable: No"); return false; } // get network info for all of the data interfaces (e.g. WiFi, 3G, LTE, etc.) 
    NetworkInfo[] info = connectivity.getAllNetworkInfo();

    // make sure that there is at least one interface to test against
    if (info != null) {
        // iterate through the interfaces
        for (int i = 0; i < info.length; i++) {
            // check this interface for a connected state
            if (info[i].getState() == NetworkInfo.State.CONNECTED) {
                Log.d("NetworkCheck", "isNetworkAvailable: Yes");
                return true;
            }
        }
    }
    return false;
}

Section 126.2: How to check network strength in Android?

ConnectivityManager cm = (ConnectivityManager) getSystemService(Context.CONNECTIVITY_SERVICE);
NetworkInfo Info = cm.getActiveNetworkInfo();
if (Info == null || !Info.isConnectedOrConnecting()) {
    Log.i(TAG, "No connection");
} else {
    int netType = Info.getType();
    int netSubtype = Info.getSubtype();
    if (netType == ConnectivityManager.TYPE_WIFI) {
        Log.i(TAG, "Wifi connection");
        WifiManager wifiManager = (WifiManager)
                getApplication().getSystemService(Context.WIFI_SERVICE);
        List<ScanResult> scanResult = wifiManager.getScanResults();
        for (int i = 0; i < scanResult.size(); i++) {
            // The dB level of the signal
            Log.d("scanResult", "Speed of wifi" + scanResult.get(i).level);
        }
        // Need to get wifi strength
    } else if (netType == ConnectivityManager.TYPE_MOBILE) {
        Log.i(TAG, "GPRS/3G connection");
        // Need to differentiate between 3G/GPRS
    }
}

Section 126.3: How to check network strength

To check the exact strength in decibels, use this:

ConnectivityManager cm = (ConnectivityManager) getSystemService(Context.CONNECTIVITY_SERVICE);
NetworkInfo Info = cm.getActiveNetworkInfo();
if (Info == null || !Info.isConnectedOrConnecting()) {
    Log.i(TAG, "No connection");
} else {
    int netType = Info.getType();
    int netSubtype = Info.getSubtype();
    if (netType == ConnectivityManager.TYPE_WIFI) {
        Log.i(TAG, "Wifi connection");
        WifiManager wifiManager = (WifiManager)
                getApplication().getSystemService(Context.WIFI_SERVICE);
        List<ScanResult> scanResult = wifiManager.getScanResults();
        for (int i = 0; i < scanResult.size(); i++) {
            // The dB level of the signal
            Log.d("scanResult", "Speed of wifi" + scanResult.get(i).level);
        }
        // Need to get wifi strength
    } else if (netType == ConnectivityManager.TYPE_MOBILE) {
        Log.i(TAG, "GPRS/3G connection");
        // Need to differentiate between 3G/GPRS
    }
}

To check the network type, use this class:

public class Connectivity {

    /*
     * These constants aren't yet available in my API level (7), but I need to
     * handle these cases if they come up, on newer versions
     */
    public static final int NETWORK_TYPE_EHRPD = 14;  // Level 11
    public static final int NETWORK_TYPE_EVDO_B = 12; // Level 9
    public static final int NETWORK_TYPE_HSPAP = 15;  // Level 13
    public static final int NETWORK_TYPE_IDEN = 11;   // Level 8
    public static final int NETWORK_TYPE_LTE = 13;    // Level 11

    /**
     * Check if there is any connectivity
     *
     * @param context
     * @return
     */
    public static boolean isConnected(Context context) {
        ConnectivityManager cm = (ConnectivityManager) context
                .getSystemService(Context.CONNECTIVITY_SERVICE);
        NetworkInfo info = cm.getActiveNetworkInfo();
        return (info != null && info.isConnected());
    }

    /**
     * Check if there is fast connectivity
     *
     * @param context
     * @return
     */
    public static String isConnectedFast(Context context) {
        ConnectivityManager cm = (ConnectivityManager) context
                .getSystemService(Context.CONNECTIVITY_SERVICE);
        NetworkInfo info = cm.getActiveNetworkInfo();
        if ((info != null && info.isConnected())) {
            return Connectivity.isConnectionFast(info.getType(), info.getSubtype());
        }
else return "No NetWork Access"; } /** * Check if the connection is fast * * @param type * @param subType * @return */ public static String isConnectionFast(int type, int subType) { if (type == ConnectivityManager.TYPE_WIFI) { System.out.println("CONNECTED VIA WIFI"); return "CONNECTED VIA WIFI"; } else if (type == ConnectivityManager.TYPE_MOBILE) { switch (subType) { case TelephonyManager.NETWORK_TYPE_1xRTT: return "NETWORK TYPE 1xRTT"; // ~ 50-100 kbps case TelephonyManager.NETWORK_TYPE_CDMA: return "NETWORK TYPE CDMA (3G) Speed: 2 Mbps"; // ~ 14-64 kbps case TelephonyManager.NETWORK_TYPE_EDGE: return "NETWORK TYPE EDGE (2.75G) Speed: 100-120 Kbps"; // ~ // 50-100 // kbps case TelephonyManager.NETWORK_TYPE_EVDO_0: return "NETWORK TYPE EVDO_0"; // ~ 400-1000 kbps case TelephonyManager.NETWORK_TYPE_EVDO_A: return "NETWORK TYPE EVDO_A"; // ~ 600-1400 kbps case TelephonyManager.NETWORK_TYPE_GPRS: return "NETWORK TYPE GPRS (2.5G) Speed: 40-50 Kbps"; // ~ 100 // kbps case TelephonyManager.NETWORK_TYPE_HSDPA: return "NETWORK TYPE HSDPA (4G) Speed: 2-14 Mbps"; // ~ 2-14 GoalKicker.com Android Notes for Professionals 734 // Mbps case TelephonyManager.NETWORK_TYPE_HSPA: return "NETWORK TYPE HSPA (4G) Speed: 0.7-1.7 Mbps"; // ~ // 700-1700 // kbps case TelephonyManager.NETWORK_TYPE_HSUPA: return "NETWORK TYPE HSUPA (3G) Speed: 1-23 Mbps"; // ~ 1-23 // Mbps case TelephonyManager.NETWORK_TYPE_UMTS: return "NETWORK TYPE UMTS (3G) Speed: 0.4-7 Mbps"; // ~ 400-7000 // kbps // NOT AVAILABLE YET IN API LEVEL 7 case Connectivity.NETWORK_TYPE_EHRPD: return "NETWORK TYPE EHRPD"; // ~ 1-2 Mbps case Connectivity.NETWORK_TYPE_EVDO_B: return "NETWORK_TYPE_EVDO_B"; // ~ 5 Mbps case Connectivity.NETWORK_TYPE_HSPAP: return "NETWORK TYPE HSPA+ (4G) Speed: 10-20 Mbps"; // ~ 10-20 // Mbps case Connectivity.NETWORK_TYPE_IDEN: return "NETWORK TYPE IDEN"; // ~25 kbps case Connectivity.NETWORK_TYPE_LTE: return "NETWORK TYPE LTE (4G) Speed: 10+ Mbps"; // ~ 10+ Mbps // Unknown case TelephonyManager.NETWORK_TYPE_UNKNOWN: return "NETWORK TYPE UNKNOWN"; default: return ""; } } else { return ""; } } } GoalKicker.com Android Notes for Professionals 735 Chapter 127: Creating your own libraries for Android applications Section 127.1: Create a library available on Jitpack.io Perform the following steps to create the library: 1. Create a GitHub account. 2. Create a Git repository containing your library project. 3. Modify your library project's build.gradle le by adding the following code: apply plugin: 'com.github.dcendents.android-maven' ... // Build a jar with source files. task sourcesJar(type: Jar) { from android.sourceSets.main.java.srcDirs classifier = 'sources' } task javadoc(type: Javadoc) { failOnError false source = android.sourceSets.main.java.sourceFiles classpath += project.files(android.getBootClasspath().join(File.pathSeparator)) classpath += configurations.compile } // Build a jar with javadoc. task javadocJar(type: Jar, dependsOn: javadoc) { classifier = 'javadoc' from javadoc.destinationDir } artifacts { archives sourcesJar archives javadocJar } Make sure that you commit/push the above changes to GitHub. 4. Create a release from the current code on Github. 5. Run gradlew install on your code. 6. Your library is now available by the following dependency: compile 'com.github.[YourUser]:[github repository name]:[release tag]' Section 127.2: Creating library project To create a libary , you should use File -> New -> New Module -> Android Library. This will create a basic library project. 
When that's done, you must have a project that is set up in the following manner:

[project root directory]
    [library root directory]
    [gradle]
    build.gradle // project level
    gradle.properties
    gradlew
    gradlew.bat
    local.properties
    settings.gradle // this is important!

Your settings.gradle file must contain the following:

include ':[library root directory]'

Your [library root directory] must contain the following:

[libs]
[src]
    [main]
        [java]
            [library package]
    [test]
        [java]
            [library package]
build.gradle // "app"-level
proguard-rules.pro

Your "app"-level build.gradle file must contain the following:

apply plugin: 'com.android.library'

android {
    compileSdkVersion 23
    buildToolsVersion "23.0.2"

    defaultConfig {
        minSdkVersion 14
        targetSdkVersion 23
    }
}

With that, your project should be working fine!

Section 127.3: Using library in project as a module

To use the library, you must include it as a dependency with the following line:

compile project(':[library root directory]')

Chapter 128: Device Display Metrics

Section 128.1: Get the screen's pixel dimensions

To retrieve the screen's width and height in pixels, we can make use of the WindowManager's display metrics:

// Get display metrics
DisplayMetrics metrics = new DisplayMetrics();
context.getWindowManager().getDefaultDisplay().getMetrics(metrics);

These DisplayMetrics hold a series of information about the device's screen, like its density or size:

// Get width and height in pixel
Integer heightPixels = metrics.heightPixels;
Integer widthPixels = metrics.widthPixels;

Section 128.2: Get screen density

To get the screen's density, we can also make use of the WindowManager's DisplayMetrics. This is a quick example:

// Get density in dpi
DisplayMetrics metrics = new DisplayMetrics();
context.getWindowManager().getDefaultDisplay().getMetrics(metrics);
int densityInDpi = metrics.densityDpi;

Section 128.3: Formula px to dp, dp to px conversion

DP to Pixel:

private int dpToPx(int dp) {
    return (int) (dp * Resources.getSystem().getDisplayMetrics().density);
}

Pixel to DP:

private int pxToDp(int px) {
    return (int) (px / Resources.getSystem().getDisplayMetrics().density);
}

Chapter 129: Building Backwards Compatible Apps

Section 129.1: How to handle deprecated API

It is unlikely for a developer not to come across a deprecated API during the development process. A deprecated program element is one that programmers are discouraged from using, typically because it is dangerous, or because a better alternative exists. Compilers and analyzers (like Lint) warn when a deprecated program element is used or overridden in non-deprecated code.

A deprecated API is usually identified in Android Studio using strikethrough text. In the example below, the method getColor(int id) is deprecated:

getResources().getColor(R.color.colorAccent);

If possible, developers are encouraged to use alternative APIs and elements. It is possible to check the backwards compatibility of a library by visiting the Android documentation for the library and checking the "Added in API level x" section.

In the case that the API you need to use is not compatible with the Android version that your users are using, you should check the API level of the device before using that library.
For example:

// Checks the API level of the running device
if (Build.VERSION.SDK_INT < 23) {
    // use for backwards compatibility with API levels below 23
    int color = getResources().getColor(R.color.colorPrimary);
} else {
    int color = getResources().getColor(R.color.colorPrimary, getActivity().getTheme());
}

Using this method ensures that your app will remain compatible with new Android versions as well as existing versions.

Easier alternative: Use the Support Library

If the Support Libraries are used, there are often static helper methods to accomplish the same task with less client code. Instead of the if/else block above, just use:

final int color = android.support.v4.content.ContextCompat
        .getColor(context, R.color.colorPrimary);

Most deprecated methods that were replaced by newer methods with a different signature, and many new features that cannot be used on older versions, have compatibility helper methods like this. To find others, browse through the support library for classes like ContextCompat, ViewCompat, etc.

Chapter 130: Loader

Class                          Description
LoaderManager                  An abstract class associated with an Activity or Fragment for managing one or more Loader instances.
LoaderManager.LoaderCallbacks  A callback interface for a client to interact with the LoaderManager.
Loader                         An abstract class that performs asynchronous loading of data.
AsyncTaskLoader                Abstract loader that provides an AsyncTask to do the work.
CursorLoader                   A subclass of AsyncTaskLoader that queries the ContentResolver and returns a Cursor.

A Loader is a good choice for preventing memory leaks when you want to load data in the background after onCreate() is called. For example, when we execute an AsyncTask in onCreate() and then rotate the screen, the activity is recreated and another AsyncTask is executed, so we probably end up with two AsyncTasks running in parallel. A Loader, by contrast, continues the background work we started before.

Section 130.1: Basic AsyncTaskLoader

AsyncTaskLoader is an abstract Loader that provides an AsyncTask to do the work. Here is a basic implementation:

final class BasicLoader extends AsyncTaskLoader<String> {

    public BasicLoader(Context context) {
        super(context);
    }

    @Override
    public String loadInBackground() {
        // Some work, e.g. load something from internet
        return "OK";
    }

    @Override
    public void deliverResult(String data) {
        if (isStarted()) {
            // Deliver result if loader is currently started
            super.deliverResult(data);
        }
    }

    @Override
    protected void onStartLoading() {
        // Start loading
        forceLoad();
    }

    @Override
    protected void onStopLoading() {
        cancelLoad();
    }

    @Override
    protected void onReset() {
        super.onReset();

        // Ensure the loader is stopped
        onStopLoading();
    }
}

Typically the Loader is initialized within the activity's onCreate() method, or within the fragment's onActivityCreated().
Also, usually the activity or fragment implements the LoaderManager.LoaderCallbacks interface:

public class MainActivity extends Activity implements LoaderManager.LoaderCallbacks<String> {

    // Unique id for loader
    private static final int LDR_BASIC_ID = 1;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        // Initialize loader; some data can be passed as second param instead of Bundle.EMPTY
        getLoaderManager().initLoader(LDR_BASIC_ID, Bundle.EMPTY, this);
    }

    @Override
    public Loader<String> onCreateLoader(int id, Bundle args) {
        return new BasicLoader(this);
    }

    @Override
    public void onLoadFinished(Loader<String> loader, String data) {
        Toast.makeText(this, data, Toast.LENGTH_LONG).show();
    }

    @Override
    public void onLoaderReset(Loader<String> loader) {
    }
}

In this example, when the loader has completed, a toast with the result is shown.

Section 130.2: AsyncTaskLoader with cache

It's a good practice to cache the loaded result to avoid multiple loads of the same data. To invalidate the cache, onContentChanged() should be called. If the loader has already been started, forceLoad() will be called; otherwise (if the loader is in a stopped state) the loader will be able to detect the content change with the takeContentChanged() check.

Remark: onContentChanged() must be called from the process's main thread.

The Javadocs say about takeContentChanged():

Take the current flag indicating whether the loader's content had changed while it was stopped. If it had, true is returned and the flag is cleared.

public abstract class BaseLoader<T> extends AsyncTaskLoader<T> {

    // Cached result saved here
    private final AtomicReference<T> cache = new AtomicReference<>();

    public BaseLoader(@NonNull final Context context) {
        super(context);
    }

    @Override
    public final void deliverResult(final T data) {
        if (!isReset()) {
            // Save loaded result
            cache.set(data);
            if (isStarted()) {
                super.deliverResult(data);
            }
        }
    }

    @Override
    protected final void onStartLoading() {
        // Register observers
        registerObserver();

        final T cached = cache.get();
        // Start new loading if content changed in background
        // or if we never loaded any data
        if (takeContentChanged() || cached == null) {
            forceLoad();
        } else {
            deliverResult(cached);
        }
    }

    @Override
    public final void onStopLoading() {
        cancelLoad();
    }

    @Override
    protected final void onReset() {
        super.onReset();
        onStopLoading();

        // Clear cache and remove observers
        cache.set(null);
        unregisterObserver();
    }

    /* virtual */ protected void registerObserver() {
        // Register observers here, call onContentChanged() to invalidate cache
    }

    /* virtual */ protected void unregisterObserver() {
        // Remove observers
    }
}

Section 130.3: Reloading

To invalidate your old data and restart the existing loader, you can use the restartLoader() method:

private void reload() {
    getLoaderManager().restartLoader(LOADER_ID, Bundle.EMPTY, this);
}

Section 130.4: Pass parameters using a Bundle

You can pass parameters in a Bundle:

Bundle myBundle = new Bundle();
myBundle.putString(MY_KEY, myValue);

Get the value in onCreateLoader:

@Override
public Loader<String> onCreateLoader(int id, final Bundle args) {
    final String myParam = args.getString(MY_KEY);
    ...
}

Chapter 131: ProGuard - Obfuscating and Shrinking your code

Section 131.1: Rules for some of the widely used Libraries

Currently it contains rules for the following libraries:

1. ButterKnife
2. RxJava
3. Android Support Library
4. Android Design Support Library
5. Retrofit
6. Gson and Jackson
7. Otto
8. Crashlytics
9. Picasso
10. Volley
11. OkHttp3
12. Parcelable

#Butterknife
-keep class butterknife.** { *; }
-keepnames class * { @butterknife.Bind *;}
-dontwarn butterknife.internal.**
-keep class **$$ViewBinder { *; }

-keepclasseswithmembernames class * {
    @butterknife.* <fields>;
}

-keepclasseswithmembernames class * {
    @butterknife.* <methods>;
}

# rxjava
-keep class rx.schedulers.Schedulers {
    public static <methods>;
}
-keep class rx.schedulers.ImmediateScheduler {
    public <methods>;
}
-keep class rx.schedulers.TestScheduler {
    public <methods>;
}
-keep class rx.schedulers.Schedulers {
    public static ** test();
}
-keepclassmembers class rx.internal.util.unsafe.*ArrayQueue*Field* {
    long producerIndex;
    long consumerIndex;
}
-keepclassmembers class rx.internal.util.unsafe.BaseLinkedQueueProducerNodeRef {
    long producerNode;
    long consumerNode;
}

# Support library
-dontwarn android.support.**
-dontwarn android.support.v4.**
-keep class android.support.v4.** { *; }
-keep interface android.support.v4.** { *; }
-dontwarn android.support.v7.**
-keep class android.support.v7.** { *; }
-keep interface android.support.v7.** { *; }

# support design
-dontwarn android.support.design.**
-keep class android.support.design.** { *; }
-keep interface android.support.design.** { *; }
-keep public class android.support.design.R$* { *; }

# retrofit
-dontwarn okio.**
-keepattributes Signature
-keepattributes *Annotation*
-keep class com.squareup.okhttp.** { *; }
-keep interface com.squareup.okhttp.** { *; }
-dontwarn com.squareup.okhttp.**
-dontwarn rx.**
-dontwarn retrofit.**
-keep class retrofit.** { *; }
-keepclasseswithmembers class * {
    @retrofit.http.* <methods>;
}
-keep class sun.misc.Unsafe { *; }

# your package path where your gson models are stored
-keep class com.abc.model.** { *; }

# Keep these for GSON and Jackson
-keepattributes Signature
-keepattributes *Annotation*
-keepattributes EnclosingMethod
-keep class sun.misc.Unsafe { *; }
-keep class com.google.gson.** { *; }

# keep otto
-keepattributes *Annotation*
-keepclassmembers class ** {
    @com.squareup.otto.Subscribe public *;
    @com.squareup.otto.Produce public *;
}

# Crashlytics 2.+
-keep class com.crashlytics.** { *; }
-keep class com.crashlytics.android.**
-keepattributes SourceFile, LineNumberTable, *Annotation*
# If you are using custom exceptions, add this line so that custom exception types are skipped during obfuscation:
-keep public class * extends java.lang.Exception
# For Fabric to properly de-obfuscate your crash reports, you need to remove this line from your ProGuard config:
# -printmapping mapping.txt

# Picasso
-dontwarn com.squareup.okhttp.**

# Volley
-keep class com.android.volley.toolbox.ImageLoader { *; }

# OkHttp3
-keep class okhttp3.** { *; }
-keep interface okhttp3.** { *; }
-dontwarn okhttp3.**

# Needed for Parcelable/SafeParcelable Creators to not get stripped
-keepnames class * implements android.os.Parcelable {
    public static final ** CREATOR;
}

Section 131.2: Remove trace logging (and other) statements at build time

If you want to remove calls to certain methods, assuming they return void and have no side effects (as in, calling them doesn't change any system values, reference arguments, statics, etc.), then you can have ProGuard remove them from the output after the build is complete.
For example, I find this useful for removing debug/verbose logging statements that are useful while debugging, but whose string arguments are unnecessary to generate in production.

# Remove the debug and verbose level Logging statements.
# That means the code to generate the arguments to these methods will also not be called.
# ONLY WORKS IF -dontoptimize IS _NOT_ USED in any ProGuard configs
-assumenosideeffects class android.util.Log {
    public static *** d(...);
    public static *** v(...);
}

Note: If -dontoptimize is used in any ProGuard config, so that ProGuard is not minifying/removing unused code, then this will not strip out the statements. (But who would not want to remove unused code, right?)

Note 2: this will remove the call to Log, but will not protect your code. The strings will actually remain in the generated APK. Read more in this post.

Section 131.3: Protecting your code from hackers

Obfuscation is often considered a magic solution for code protection, making your code harder to understand if it ever gets de-compiled by hackers. But if you're thinking that removing the Log.x(..) calls actually removes the information the hackers need, you'll have a nasty surprise.

Removing all your log calls with:

-assumenosideeffects class android.util.Log {
    public static *** d(...);
    ...etc
}

will indeed remove the Log call itself, but usually not the Strings you put into them.

If, for example, inside your log call you type a common log message such as Log.d(MyTag, "Score=" + score);, the compiler converts the + to a new StringBuilder() outside the Log call. ProGuard doesn't remove this new object. Your de-compiled code will still have a hanging StringBuilder for "Score=", appended with the obfuscated version of the score variable (let's say it was converted to b). Now the hacker knows what b is, and can make sense of your code.

A good practice to actually remove these residuals from your code is either to not put them there in the first place (use a String formatter instead, with ProGuard rules to remove them), or to wrap your Log calls with:

if (BuildConfig.DEBUG) {
    Log.d(TAG, ".." + var);
}

Tip: Test how well protected your obfuscated code is by de-compiling it yourself!

1. dex2jar - converts the apk to jar
2. jd - decompiles the jar and opens it in a gui editor
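Building on the wrapping advice above, a common pattern is to centralize the BuildConfig.DEBUG check in a small wrapper class so it doesn't have to be repeated at every call site. This is a sketch, not part of the original example; note that whether the branch is removed entirely at build time depends on BuildConfig.DEBUG being a compile-time constant in your setup:

public final class DebugLog {

    private DebugLog() {}

    public static void d(String tag, String message) {
        // Only logs in debug builds; in release builds the branch is never taken
        // (and is often removed entirely by the optimizer)
        if (BuildConfig.DEBUG) {
            android.util.Log.d(tag, message);
        }
    }
}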
This can be done by setting the 'minifyEnabled' command to true on your desired build #2 The second step is to specify which proguard les are we using for the given build GoalKicker.com Android Notes for Professionals 748 This can be done by setting the 'proguardFiles' line with the proper lenames buildTypes { debug { minifyEnabled false } testRelease { minifyEnabled true proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rulestests.pro' } productionRelease { minifyEnabled true proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rulestests.pro', 'proguard-rules-release.pro' } } #3 The developer can then edit his proguard le with the rules he desires. That can be done by editting the le (for example 'proguard-rules-tests.pro') and adding the desired constraints. The following le serves as an example proguard le // default & basic optimization configurations -optimizationpasses 5 -dontpreverify -repackageclasses '' -allowaccessmodification -optimizations !code/simplification/arithmetic -keepattributes *Annotation* -verbose -dump obfuscation/class_files.txt -printseeds obfuscation/seeds.txt -printusage obfuscation/unused.txt // unused classes that are stripped out in the process -printmapping obfuscation/mapping.txt // mapping file that shows the obfuscated names of the classes after proguad is applied // the developer can specify keywords for the obfuscation (I myself use fruits for obfuscation names once in a while :-) ) -obfuscationdictionary obfuscation/keywords.txt -classobfuscationdictionary obfuscation/keywords.txt -packageobfuscationdictionary obfuscation/keywords.txt Finally, whenever the developer runs and/or generates his new .APK le, the custom proguard congurations will be applied thus fullling the requirements. GoalKicker.com Android Notes for Professionals 749 Chapter 132: Typedef Annotations: @IntDef, @StringDef Section 132.1: IntDef Annotations This annotation ensures that only the valid integer constants that you expect are used. The following example illustrates the steps to create an annotation: import android.support.annotation.IntDef; public abstract class Car { //Define the list of accepted constants @IntDef({MICROCAR, CONVERTIBLE, SUPERCAR, MINIVAN, SUV}) //Tell the compiler not to store annotation data in the .class file @Retention(RetentionPolicy.SOURCE) //Declare the CarType annotation public @interface CarType {} //Declare the constants public static final int MICROCAR = 0; public static final int CONVERTIBLE = 1; public static final int SUPERCAR = 2; public static final int MINIVAN = 3; public static final int SUV = 4; @CarType private int mType; @CarType public int getCarType(){ return mType; }; public void setCarType(@CarType int type){ mType = type; } } They also enable code completion to automatically oer the allowed constants. When you build this code, a warning is generated if the type parameter does not reference one of the dened constants. Section 132.2: Combining constants with ags Using the IntDef#flag() attribute set to true, multiple constants can be combined. Using the same example in this topic: public abstract class Car { //Define the list of accepted constants @IntDef(flag=true, value={MICROCAR, CONVERTIBLE, SUPERCAR, MINIVAN, SUV}) //Tell the compiler not to store annotation data in the .class file @Retention(RetentionPolicy.SOURCE) GoalKicker.com Android Notes for Professionals 750 ..... } Users can combine the allowed constants with a ag (such as |, &, ^ ). 
Chapter 133: Capturing Screenshots

Section 133.1: Taking a screenshot of a particular view

If you want to take a screenshot of a particular View v, then you can use the following code:

Bitmap viewBitmap = Bitmap.createBitmap(v.getWidth(), v.getHeight(), Bitmap.Config.RGB_565);
Canvas viewCanvas = new Canvas(viewBitmap);
Drawable backgroundDrawable = v.getBackground();

if (backgroundDrawable != null) {
    // Draw the background onto the canvas.
    backgroundDrawable.draw(viewCanvas);
} else {
    viewCanvas.drawColor(Color.GREEN);
}
// Draw the view onto the canvas.
v.draw(viewCanvas);

// Write the bitmap generated above into a file.
String fileStamp = new SimpleDateFormat("yyyyMMdd_HHmmss").format(new Date());
OutputStream outputStream = null;
try {
    File imgFile = new File(Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_PICTURES), fileStamp + ".png");
    outputStream = new FileOutputStream(imgFile);
    viewBitmap.compress(Bitmap.CompressFormat.PNG, 40, outputStream);
    outputStream.close();
} catch (Exception e) {
    e.printStackTrace();
}

Section 133.2: Capturing Screenshot via Android Studio

1. Open the Android Monitor tab
2. Click on the Screen Capture button

Section 133.3: Capturing Screenshot via ADB and saving directly in your PC

If you use Linux (or Windows with Cygwin), you can run:

adb shell screencap -p | sed 's/\r$//' > screenshot.png

Section 133.4: Capturing Screenshot via Android Device Monitor

1. Open the Android Device Monitor (e.g. C:\<ANDROID_SDK_LOCATION>\tools\monitor.bat)
2. Select your device
3. Click on the Screen Capture button

Section 133.5: Capturing Screenshot via ADB

The example below saves a screenshot on the device's internal storage.

adb shell screencap /sdcard/screen.png
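To get that file onto your development machine afterwards, you can follow up with adb pull (standard adb commands; the file name is just an example):

adb shell screencap /sdcard/screen.png
adb pull /sdcard/screen.png screen.png
# optional cleanup on the device
adb shell rm /sdcard/screen.png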
Chapter 134: MVP Architecture

This topic presents the Model-View-Presenter (MVP) architecture on Android with various examples.

Section 134.1: Login example in the Model View Presenter (MVP) pattern

Let's see MVP in action using a simple Login Screen. There are two Buttons: one for the login action and another for a registration screen; and two EditTexts: one for the email and the other for the password.

LoginFragment (The View)

public class LoginFragment extends Fragment implements LoginContract.PresenterToView, View.OnClickListener {

    private View view;
    private EditText email, password;
    private Button login, register;

    private LoginContract.ToPresenter presenter;

    @Nullable
    @Override
    public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) {
        return inflater.inflate(R.layout.fragment_login, container, false);
    }

    @Override
    public void onViewCreated(View view, @Nullable Bundle savedInstanceState) {
        email = (EditText) view.findViewById(R.id.email_et);
        password = (EditText) view.findViewById(R.id.password_et);

        login = (Button) view.findViewById(R.id.login_btn);
        login.setOnClickListener(this);
        register = (Button) view.findViewById(R.id.register_btn);
        register.setOnClickListener(this);

        presenter = new LoginPresenter(this);
        presenter.isLoggedIn();
    }

    @Override
    public void onLoginResponse(boolean isLoginSuccess) {
        if (isLoginSuccess) {
            startActivity(new Intent(getActivity(), MapActivity.class));
            getActivity().finish();
        }
    }

    @Override
    public void onError(String message) {
        Toast.makeText(getActivity(), message, Toast.LENGTH_SHORT).show();
    }

    @Override
    public void isLoggedIn(boolean isLoggedIn) {
        if (isLoggedIn) {
            startActivity(new Intent(getActivity(), MapActivity.class));
            getActivity().finish();
        }
    }

    @Override
    public void onClick(View view) {
        switch (view.getId()) {
            case R.id.login_btn:
                LoginItem loginItem = new LoginItem();
                loginItem.setPassword(password.getText().toString().trim());
                loginItem.setEmail(email.getText().toString().trim());
                presenter.login(loginItem);
                break;
            case R.id.register_btn:
                startActivity(new Intent(getActivity(), RegisterActivity.class));
                getActivity().finish();
                break;
        }
    }
}

LoginPresenter (The Presenter)

public class LoginPresenter implements LoginContract.ToPresenter {

    private LoginContract.PresenterToModel model;
    private LoginContract.PresenterToView view;

    public LoginPresenter(LoginContract.PresenterToView view) {
        this.view = view;
        model = new LoginModel(this);
    }

    @Override
    public void login(LoginItem userCredentials) {
        model.login(userCredentials);
    }

    @Override
    public void isLoggedIn() {
        model.isLoggedIn();
    }

    @Override
    public void onLoginResponse(boolean isLoginSuccess) {
        view.onLoginResponse(isLoginSuccess);
    }

    @Override
    public void onError(String message) {
        view.onError(message);
    }

    @Override
    public void isloggedIn(boolean isLoggedin) {
        view.isLoggedIn(isLoggedin);
    }
}
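The LoginContract interfaces themselves are not shown in the original example. A minimal sketch consistent with the calls above (method signatures inferred from their usage in the View, Presenter and Model) could look like this:

public interface LoginContract {

    // Implemented by the View (LoginFragment)
    interface PresenterToView {
        void onLoginResponse(boolean isLoginSuccess);
        void onError(String message);
        void isLoggedIn(boolean isLoggedIn);
    }

    // Implemented by the Presenter (LoginPresenter)
    interface ToPresenter {
        void login(LoginItem userCredentials);
        void isLoggedIn();
        void onLoginResponse(boolean isLoginSuccess);
        void onError(String message);
        void isloggedIn(boolean isLoggedIn);
    }

    // Implemented by the Model (LoginModel)
    interface PresenterToModel {
        void login(LoginItem userCredentials);
        void isLoggedIn();
    }
}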
LoginModel (The Model)

public class LoginModel implements LoginContract.PresenterToModel, ResponseErrorListener.ErrorListener {

    private static final String TAG = LoginModel.class.getSimpleName();
    private LoginContract.ToPresenter presenter;

    public LoginModel(LoginContract.ToPresenter presenter) {
        this.presenter = presenter;
    }

    @Override
    public void login(LoginItem userCredentials) {
        if (validateData(userCredentials)) {
            try {
                performLoginOperation(userCredentials);
            } catch (JSONException e) {
                e.printStackTrace();
            }
        } else {
            presenter.onError(BaseContext.getContext().getString(R.string.error_login_field_validation));
        }
    }

    @Override
    public void isLoggedIn() {
        DatabaseHelper database = new DatabaseHelper(BaseContext.getContext());
        presenter.isloggedIn(database.isLoggedIn());
    }

    private boolean validateData(LoginItem userCredentials) {
        return Patterns.EMAIL_ADDRESS.matcher(userCredentials.getEmail()).matches()
                && !userCredentials.getPassword().trim().equals("");
    }

    private void performLoginOperation(final LoginItem userCredentials) throws JSONException {
        JSONObject postData = new JSONObject();
        postData.put(Constants.EMAIL, userCredentials.getEmail());
        postData.put(Constants.PASSWORD, userCredentials.getPassword());

        JsonObjectRequest request = new JsonObjectRequest(Request.Method.POST, Url.AUTH, postData,
                new Response.Listener<JSONObject>() {
                    @Override
                    public void onResponse(JSONObject response) {
                        try {
                            String token = response.getString(Constants.ACCESS_TOKEN);
                            DatabaseHelper databaseHelper = new DatabaseHelper(BaseContext.getContext());
                            databaseHelper.login(token);
                            Log.d(TAG, "onResponse: " + token);
                        } catch (JSONException e) {
                            e.printStackTrace();
                        }
                        presenter.onLoginResponse(true);
                    }
                }, new ErrorResponse(this));
        RequestQueue queue = Volley.newRequestQueue(BaseContext.getContext());
        queue.add(request);
    }

    @Override
    public void onError(String message) {
        presenter.onError(message);
    }
}

Class Diagram

Let's see the action in the form of a class diagram.

Notes:

This example uses Volley for network communication, but this library is not required for MVP.
UrlUtils is a class which contains all the links for my API endpoints.
ResponseErrorListener.ErrorListener is an interface that listens for errors in ErrorResponse, which implements Volley's Response.ErrorListener; these classes are not included here as they are not directly part of this example.

Section 134.2: Simple Login Example in MVP

Required package structure

XML activity_login

<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:gravity="center_vertical"
    android:orientation="vertical"
    android:paddingBottom="@dimen/activity_vertical_margin"
    android:paddingLeft="@dimen/activity_horizontal_margin"
    android:paddingRight="@dimen/activity_horizontal_margin"
    android:paddingTop="@dimen/activity_vertical_margin">

    <EditText
        android:id="@+id/et_login_username"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:hint="USERNAME" />

    <EditText
        android:id="@+id/et_login_password"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:hint="PASSWORD" />

    <LinearLayout
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:orientation="horizontal">

        <Button
            android:id="@+id/btn_login_login"
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:layout_marginRight="4dp"
            android:layout_weight="1"
            android:text="Login" />

        <Button
            android:id="@+id/btn_login_clear"
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:layout_marginLeft="4dp"
            android:layout_weight="1"
            android:text="Clear" />
    </LinearLayout>

    <TextView
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_marginTop="3dp"
        android:text="correct user: mvp, mvp" />

    <ProgressBar
        android:id="@+id/progress_login"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_marginTop="40dp" />
</LinearLayout>

Activity Class LoginActivity.class
public class LoginActivity extends AppCompatActivity implements ILoginView, View.OnClickListener {

    private EditText editUser;
    private EditText editPass;
    private Button btnLogin;
    private Button btnClear;
    private ILoginPresenter loginPresenter;
    private ProgressBar progressBar;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_login);

        //find views
        editUser = (EditText) this.findViewById(R.id.et_login_username);
        editPass = (EditText) this.findViewById(R.id.et_login_password);
        btnLogin = (Button) this.findViewById(R.id.btn_login_login);
        btnClear = (Button) this.findViewById(R.id.btn_login_clear);
        progressBar = (ProgressBar) this.findViewById(R.id.progress_login);

        //set listeners
        btnLogin.setOnClickListener(this);
        btnClear.setOnClickListener(this);

        //init
        loginPresenter = new LoginPresenterCompl(this);
        loginPresenter.setProgressBarVisiblity(View.INVISIBLE);
    }

    @Override
    public void onClick(View v) {
        switch (v.getId()) {
            case R.id.btn_login_clear:
                loginPresenter.clear();
                break;
            case R.id.btn_login_login:
                loginPresenter.setProgressBarVisiblity(View.VISIBLE);
                btnLogin.setEnabled(false);
                btnClear.setEnabled(false);
                loginPresenter.doLogin(editUser.getText().toString(), editPass.getText().toString());
                break;
        }
    }

    @Override
    public void onClearText() {
        editUser.setText("");
        editPass.setText("");
    }

    @Override
    public void onLoginResult(Boolean result, int code) {
        loginPresenter.setProgressBarVisiblity(View.INVISIBLE);
        btnLogin.setEnabled(true);
        btnClear.setEnabled(true);
        if (result) {
            Toast.makeText(this, "Login Success", Toast.LENGTH_SHORT).show();
        } else {
            Toast.makeText(this, "Login Fail, code = " + code, Toast.LENGTH_SHORT).show();
        }
    }

    @Override
    protected void onDestroy() {
        super.onDestroy();
    }

    @Override
    public void onSetProgressBarVisibility(int visibility) {
        progressBar.setVisibility(visibility);
    }
}

Creating an ILoginView Interface

Create an ILoginView interface for receiving updates from the Presenter, under the view folder, as follows:

public interface ILoginView {
    public void onClearText();
    public void onLoginResult(Boolean result, int code);
    public void onSetProgressBarVisibility(int visibility);
}

Creating an ILoginPresenter Interface

Create an ILoginPresenter interface in order to communicate with LoginActivity (the View), and create the LoginPresenterCompl class for handling the login functionality and reporting back to the Activity. The LoginPresenterCompl class implements the ILoginPresenter interface:

ILoginPresenter.class

public interface ILoginPresenter {
    void clear();
    void doLogin(String name, String passwd);
    void setProgressBarVisiblity(int visiblity);
}

LoginPresenterCompl.class

public class LoginPresenterCompl implements ILoginPresenter {
    ILoginView iLoginView;
    IUser user;
    Handler handler;

    public LoginPresenterCompl(ILoginView iLoginView) {
        this.iLoginView = iLoginView;
        initUser();
        handler = new Handler(Looper.getMainLooper());
    }

    @Override
    public void clear() {
        iLoginView.onClearText();
    }

    @Override
    public void doLogin(String name, String passwd) {
        Boolean isLoginSuccess = true;
        final int code = user.checkUserValidity(name, passwd);
        if (code != 0) isLoginSuccess = false;
        final Boolean result = isLoginSuccess;
        handler.postDelayed(new Runnable() {
            @Override
            public void run() {
                iLoginView.onLoginResult(result, code);
            }
        }, 5000);
    }

    @Override
    public void setProgressBarVisiblity(int visiblity) {
        iLoginView.onSetProgressBarVisibility(visiblity);
    }

    private void initUser() {
        user = new UserModel("mvp", "mvp");
    }
}

Creating a UserModel

Create a UserModel, which is like a POJO class for LoginActivity.
Create an IUser interface for the POJO validations:

UserModel.class

public class UserModel implements IUser {
    String name;
    String passwd;

    public UserModel(String name, String passwd) {
        this.name = name;
        this.passwd = passwd;
    }

    @Override
    public String getName() {
        return name;
    }

    @Override
    public String getPasswd() {
        return passwd;
    }

    @Override
    public int checkUserValidity(String name, String passwd) {
        if (name == null || passwd == null || !name.equals(getName()) || !passwd.equals(getPasswd())) {
            return -1;
        }
        return 0;
    }
}

IUser.class

public interface IUser {
    String getName();
    String getPasswd();
    int checkUserValidity(String name, String passwd);
}

MVP

Model-View-Presenter (MVP) is a derivation of the Model-View-Controller (MVC) architectural pattern. It is used mostly for building user interfaces and offers the following benefits:

- Views are more separated from Models. The Presenter is the mediator between Model and View.
- It is easier to create unit tests.
- Generally, there is a one-to-one mapping between View and Presenter, with the possibility to use multiple Presenters for complex Views.

Chapter 135: Orientation Changes

Section 135.1: Saving and Restoring Activity State

As your activity begins to stop, the system calls onSaveInstanceState() so your activity can save state information with a collection of key-value pairs. The default implementation of this method automatically saves information about the state of the activity's view hierarchy, such as the text in an EditText widget or the scroll position of a ListView.

To save additional state information for your activity, you must implement onSaveInstanceState() and add key-value pairs to the Bundle object. For example:

public class MainActivity extends Activity {
    static final String SOME_VALUE = "int_value";
    static final String SOME_OTHER_VALUE = "string_value";

    @Override
    protected void onSaveInstanceState(Bundle savedInstanceState) {
        // Save custom values into the bundle
        savedInstanceState.putInt(SOME_VALUE, someIntValue);
        savedInstanceState.putString(SOME_OTHER_VALUE, someStringValue);
        // Always call the superclass so it can save the view hierarchy state
        super.onSaveInstanceState(savedInstanceState);
    }
}

The system will call that method before an Activity is destroyed. Later, the system will call onRestoreInstanceState, where we can restore state from the bundle:

@Override
protected void onRestoreInstanceState(Bundle savedInstanceState) {
    // Always call the superclass so it can restore the view hierarchy
    super.onRestoreInstanceState(savedInstanceState);
    // Restore state members from the saved instance
    someIntValue = savedInstanceState.getInt(SOME_VALUE);
    someStringValue = savedInstanceState.getString(SOME_OTHER_VALUE);
}

Instance state can also be restored in the standard Activity#onCreate method, but it is convenient to do it in onRestoreInstanceState, which ensures all of the initialization has been done and allows subclasses to decide whether to use the default implementation. Read this Stack Overflow post for details.

Note that onSaveInstanceState and onRestoreInstanceState are not guaranteed to be called together. Android invokes onSaveInstanceState() when there's a chance the activity might be destroyed. However, there are cases where onSaveInstanceState is called but the activity is not destroyed, and as a result onRestoreInstanceState is not invoked.
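If you prefer restoring in onCreate() instead, the usual pattern is a null check on the Bundle, since it is null on a fresh start (a short sketch using the same keys as above):

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    if (savedInstanceState != null) {
        // onCreate receives the same Bundle that was passed to onSaveInstanceState
        someIntValue = savedInstanceState.getInt(SOME_VALUE);
        someStringValue = savedInstanceState.getString(SOME_OTHER_VALUE);
    }
}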
Section 135.2: Retaining Fragments

In many cases, we can avoid problems when an Activity is re-created by simply using fragments. If your views and state are within a fragment, we can easily have the fragment be retained when the activity is re-created:

public class RetainedFragment extends Fragment {
    // data object we want to retain
    private MyDataObject data;

    // this method is only called once for this fragment
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // retain this fragment when the activity is re-initialized
        setRetainInstance(true);
    }

    public void setData(MyDataObject data) {
        this.data = data;
    }

    public MyDataObject getData() {
        return data;
    }
}

This approach keeps the fragment from being destroyed during the activity lifecycle. Instead, it is retained inside the Fragment Manager. See the official Android docs for more information.

Now you can check to see if the fragment already exists by tag before creating one, and the fragment will retain its state across configuration changes. See the Handling Runtime Changes guide for more details.

Section 135.3: Manually Managing Configuration Changes

If your application doesn't need to update resources during a specific configuration change and you have a performance limitation that requires you to avoid the activity restart, then you can declare that your activity handles the configuration change itself, which prevents the system from restarting your activity. However, this technique should be considered a last resort when you must avoid restarts due to a configuration change, and it is not recommended for most applications. To take this approach, we must add the android:configChanges node to the activity within the AndroidManifest.xml:

<activity
    android:name=".MyActivity"
    android:configChanges="orientation|screenSize|keyboardHidden"
    android:label="@string/app_name">

Now, when one of these configurations changes, the activity does not restart but instead receives a call to onConfigurationChanged():

// Within the activity which receives these changes
// Checks the current device orientation, and toasts accordingly
@Override
public void onConfigurationChanged(Configuration newConfig) {
    super.onConfigurationChanged(newConfig);

    // Checks the orientation of the screen
    if (newConfig.orientation == Configuration.ORIENTATION_LANDSCAPE) {
        Toast.makeText(this, "landscape", Toast.LENGTH_SHORT).show();
    } else if (newConfig.orientation == Configuration.ORIENTATION_PORTRAIT) {
        Toast.makeText(this, "portrait", Toast.LENGTH_SHORT).show();
    }
}

See the Handling the Change docs. For more about which configuration changes you can handle in your activity, see the android:configChanges documentation and the Configuration class.

Section 135.4: Handling AsyncTask

Problem: If a screen rotation occurs after the AsyncTask starts, the owning activity is destroyed and recreated. When the AsyncTask finishes, it wants to update a UI that may no longer be valid.

Solution: Using Loaders, one can easily overcome the activity destruction/recreation.
Example:

MainActivity:

public class MainActivity extends AppCompatActivity implements LoaderManager.LoaderCallbacks<Bitmap> {

    //Unique id for the loader
    private static final int MY_LOADER = 0;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        LoaderManager loaderManager = getSupportLoaderManager();
        if (loaderManager.getLoader(MY_LOADER) == null) {
            loaderManager.initLoader(MY_LOADER, null, this).forceLoad();
        }
    }

    @Override
    public Loader<Bitmap> onCreateLoader(int id, Bundle args) {
        //Create a new instance of your Loader<Bitmap>
        MyLoader loader = new MyLoader(MainActivity.this);
        return loader;
    }

    @Override
    public void onLoadFinished(Loader<Bitmap> loader, Bitmap data) {
        // do something in the parent activity/service
        // i.e. display the downloaded image
        Log.d("MyAsyncTask", "Received result: ");
    }

    @Override
    public void onLoaderReset(Loader<Bitmap> loader) {
    }
}

AsyncTaskLoader:

public class MyLoader extends AsyncTaskLoader<Bitmap> {
    private WeakReference<Activity> motherActivity;

    public MyLoader(Activity activity) {
        super(activity);
        // We don't use this here, but if you need a reference to the activity,
        // keep it in a WeakReference
        motherActivity = new WeakReference<>(activity);
    }

    @Override
    public Bitmap loadInBackground() {
        // Do the work here, e.g. download an image from the internet to be displayed in the UI
        Bitmap result = null;
        // ... perform the download and assign it to result ...
        return result;
    }
}

Note: It is important to use either the v4 compatibility library consistently or not at all; do not use part of one and part of the other, as it will lead to compilation errors. To check, you can look at the imports for android.support.v4.content and android.content (you shouldn't have both).

Section 135.5: Lock Screen's rotation programmatically

During development it is very common to find it useful to lock/unlock the device screen during specific parts of the code. For instance, while showing a Dialog with information, the developer might want to lock the screen's rotation to prevent the dialog from being dismissed and the current activity from being rebuilt, and to unlock it again when the dialog is dismissed.
Even though we can achieve rotation locking from the manifest by doing:

<activity
    android:name=".TheActivity"
    android:screenOrientation="portrait"
    android:label="@string/app_name" >
</activity>

one can do it programmatically as well, as follows:

public void lockDeviceRotation(boolean value) {
    if (value) {
        int currentOrientation = getResources().getConfiguration().orientation;
        if (currentOrientation == Configuration.ORIENTATION_LANDSCAPE) {
            setRequestedOrientation(ActivityInfo.SCREEN_ORIENTATION_SENSOR_LANDSCAPE);
        } else {
            setRequestedOrientation(ActivityInfo.SCREEN_ORIENTATION_SENSOR_PORTRAIT);
        }
    } else {
        getWindow().clearFlags(WindowManager.LayoutParams.FLAG_NOT_TOUCHABLE);
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.JELLY_BEAN_MR2) {
            setRequestedOrientation(ActivityInfo.SCREEN_ORIENTATION_FULL_USER);
        } else {
            setRequestedOrientation(ActivityInfo.SCREEN_ORIENTATION_FULL_SENSOR);
        }
    }
}

Then call the following to respectively lock and unlock the device rotation:

lockDeviceRotation(true)

and

lockDeviceRotation(false)

Section 135.6: Saving and Restoring Fragment State

Fragments also have an onSaveInstanceState() method which is called when their state needs to be saved (note that in a Fragment this method is public):

public class MySimpleFragment extends Fragment {
    private int someStateValue;
    private final String SOME_VALUE_KEY = "someValueToSave";

    // Fires when a configuration change occurs and the fragment needs to save state
    @Override
    public void onSaveInstanceState(Bundle outState) {
        outState.putInt(SOME_VALUE_KEY, someStateValue);
        super.onSaveInstanceState(outState);
    }
}

Then we can pull data out of this saved state in onCreateView:

public class MySimpleFragment extends Fragment {
    // ...

    // Inflate the view for the fragment based on the layout XML
    @Override
    public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) {
        View view = inflater.inflate(R.layout.my_simple_fragment, container, false);
        if (savedInstanceState != null) {
            someStateValue = savedInstanceState.getInt(SOME_VALUE_KEY);
            // Do something with the value if needed
        }
        return view;
    }
}

For the fragment state to be saved properly, we need to be sure that we aren't unnecessarily recreating the fragment on configuration changes. This means being careful not to reinitialize existing fragments when they already exist. Any fragments being initialized in an Activity need to be looked up by tag after a configuration change:

public class ParentActivity extends AppCompatActivity {
    private MySimpleFragment fragmentSimple;
    private final String SIMPLE_FRAGMENT_TAG = "myfragmenttag";

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        if (savedInstanceState != null) { // saved instance state, fragment may exist
            // look up the instance that already exists by tag
            fragmentSimple = (MySimpleFragment)
                    getSupportFragmentManager().findFragmentByTag(SIMPLE_FRAGMENT_TAG);
        } else if (fragmentSimple == null) {
            // only create the fragment if it hasn't been instantiated already
            fragmentSimple = new MySimpleFragment();
        }
    }
}

This requires us to be careful to include a tag for lookup whenever putting a fragment into the activity within a transaction:

public class ParentActivity extends AppCompatActivity {
    private MySimpleFragment fragmentSimple;
    private final String SIMPLE_FRAGMENT_TAG = "myfragmenttag";

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        // ... fragment lookup or instantiation from above ...
        // Always add a tag to a fragment being inserted into a container
        if (!fragmentSimple.isInLayout()) {
            getSupportFragmentManager()
                    .beginTransaction()
                    .replace(R.id.container, fragmentSimple, SIMPLE_FRAGMENT_TAG)
                    .commit();
        }
    }
}

With this simple pattern, we can properly re-use fragments and restore their state across configuration changes.

Chapter 136: Xposed

Section 136.1: Creating a Xposed Module

Xposed is a framework that allows you to hook method calls of other apps. When you do a modification by decompiling an APK, you can insert/change commands directly wherever you want. However, you will need to recompile/sign the APK afterwards, and you can only distribute the whole package. With Xposed, you can inject your own code before or after methods, or replace whole methods completely. Unfortunately, you can only install Xposed on rooted devices. You should use Xposed whenever you want to manipulate the behavior of other apps or the core Android system and don't want to go through the hassle of decompiling, recompiling and signing APKs.

First, you create a standard app without an Activity in Android Studio. Then you have to include the following code in your build.gradle:

repositories {
    jcenter()
}

After that you add the following dependencies:

provided 'de.robv.android.xposed:api:82'
provided 'de.robv.android.xposed:api:82:sources'

Now you have to place these tags inside the application tag found in the AndroidManifest.xml so Xposed recognizes your module:

<meta-data
    android:name="xposedmodule"
    android:value="true" />
<meta-data
    android:name="xposeddescription"
    android:value="YOUR_MODULE_DESCRIPTION" />
<meta-data
    android:name="xposedminversion"
    android:value="82" />

NOTE: Always replace 82 with the latest Xposed version.
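The Xposed framework also needs to know which class to load as the module's entry point. This is declared in a plain text file named xposed_init in your module's assets folder, containing the fully qualified class name on its own line; the package name below is just an example (the class name matches the hook class shown in the next section):

assets/xposed_init:

com.example.mymodule.MultiPatcher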
Section 136.2: Hooking a method

Create a new class implementing IXposedHookLoadPackage and implement the handleLoadPackage method:

public class MultiPatcher implements IXposedHookLoadPackage {
    @Override
    public void handleLoadPackage(XC_LoadPackage.LoadPackageParam loadPackageParam) throws Throwable {
    }
}

Inside the method, you check loadPackageParam.packageName for the package name of the app you want to hook:

@Override
public void handleLoadPackage(XC_LoadPackage.LoadPackageParam loadPackageParam) throws Throwable {
    if (!loadPackageParam.packageName.equals("other.package.name")) {
        return;
    }
}

Now you can hook your method and manipulate it either before or after its code is run:

@Override
public void handleLoadPackage(XC_LoadPackage.LoadPackageParam loadPackageParam) throws Throwable {
    if (!loadPackageParam.packageName.equals("other.package.name")) {
        return;
    }

    XposedHelpers.findAndHookMethod(
            "other.package.name",
            loadPackageParam.classLoader,
            "otherMethodName",
            YourFirstParameter.class,
            YourSecondParameter.class,
            new XC_MethodHook() {
                @Override
                protected void beforeHookedMethod(MethodHookParam param) throws Throwable {
                    Object[] args = param.args;

                    args[0] = true;
                    args[1] = "example string";
                    args[2] = 1;

                    Object thisObject = param.thisObject;
                    // Do something with the instance of the class
                }

                @Override
                protected void afterHookedMethod(MethodHookParam param) throws Throwable {
                    Object result = param.getResult();
                    param.setResult(result + "example string");
                }
            });
}
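Section 136.1 mentioned that whole methods can also be replaced completely. For that, Xposed provides XC_MethodReplacement instead of XC_MethodHook. A short sketch (package, class and method names are placeholders, as above):

XposedHelpers.findAndHookMethod(
        "other.package.name",
        loadPackageParam.classLoader,
        "otherMethodName",
        new XC_MethodReplacement() {
            @Override
            protected Object replaceHookedMethod(MethodHookParam param) throws Throwable {
                // The original method body is never executed;
                // whatever is returned here becomes the method's result
                return "replaced result";
            }
        });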
Chapter 137: PackageManager

Section 137.1: Retrieve application version

public String getAppVersion() throws PackageManager.NameNotFoundException {
    PackageManager manager = getApplicationContext().getPackageManager();
    PackageInfo info = manager.getPackageInfo(getApplicationContext().getPackageName(), 0);
    return info.versionName;
}

Section 137.2: Version name and version code

To get the versionName and versionCode of the current build of your application, you should query Android's package manager.

try {
    // Reference to Android's package manager
    PackageManager packageManager = this.getPackageManager();

    // Getting package info of this application
    PackageInfo info = packageManager.getPackageInfo(this.getPackageName(), 0);

    // Version code
    info.versionCode

    // Version name
    info.versionName
} catch (NameNotFoundException e) {
    // Handle the exception
}

Section 137.3: Install time and update time

To get the time at which your app was installed or updated, you should query Android's package manager.

try {
    // Reference to Android's package manager
    PackageManager packageManager = this.getPackageManager();

    // Getting package info of this application
    PackageInfo info = packageManager.getPackageInfo(this.getPackageName(), 0);

    // Install time. Units are as per currentTimeMillis().
    info.firstInstallTime

    // Last update time. Units are as per currentTimeMillis().
    info.lastUpdateTime
} catch (NameNotFoundException e) {
    // Handle the exception
}

Section 137.4: Utility methods using PackageManager

Here are some useful methods that use the PackageManager.

The method below helps to get the app name from a package name:

private String getAppNameFromPackage(String packageName, Context context) {
    Intent mainIntent = new Intent(Intent.ACTION_MAIN, null);
    mainIntent.addCategory(Intent.CATEGORY_LAUNCHER);
    List<ResolveInfo> pkgAppsList = context.getPackageManager()
            .queryIntentActivities(mainIntent, 0);
    for (ResolveInfo app : pkgAppsList) {
        if (app.activityInfo.packageName.equals(packageName)) {
            return app.activityInfo.loadLabel(context.getPackageManager()).toString();
        }
    }
    return null;
}

The method below helps to get the app icon from a package name:

private Drawable getAppIcon(String packageName, Context context) {
    Drawable appIcon = null;
    try {
        appIcon = context.getPackageManager().getApplicationIcon(packageName);
    } catch (PackageManager.NameNotFoundException e) {
    }
    return appIcon;
}

The method below helps to get the list of installed applications:

public static List<ApplicationInfo> getLaunchIntent(PackageManager packageManager) {
    List<ApplicationInfo> list = packageManager.getInstalledApplications(PackageManager.GET_META_DATA);
    return list;
}

Note: the above method will return launcher applications too.

The method below helps to hide the app icon from the launcher:

public static void hideLockerApp(Context context, boolean hide) {
    ComponentName componentName = new ComponentName(context.getApplicationContext(), SplashActivity.class);

    int setting = hide ? PackageManager.COMPONENT_ENABLED_STATE_DISABLED
            : PackageManager.COMPONENT_ENABLED_STATE_ENABLED;
    int current = context.getPackageManager().getComponentEnabledSetting(componentName);

    if (current != setting) {
        context.getPackageManager().setComponentEnabledSetting(componentName, setting,
                PackageManager.DONT_KILL_APP);
    }
}

Note: after the device is switched off and on again, this icon will come back in the launcher.
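As a small usage sketch tying the helpers above together (nothing here beyond standard PackageManager calls), you could list the labels of all installed applications like this:

PackageManager packageManager = context.getPackageManager();
List<ApplicationInfo> apps = getLaunchIntent(packageManager);
for (ApplicationInfo appInfo : apps) {
    // loadLabel() resolves the user-visible application name
    CharSequence label = appInfo.loadLabel(packageManager);
    Log.d("InstalledApps", label + " (" + appInfo.packageName + ")");
}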
Chapter 138: Gesture Detection

Section 138.1: Swipe Detection

public class OnSwipeListener implements View.OnTouchListener {

    private final GestureDetector gestureDetector;

    public OnSwipeListener(Context context) {
        gestureDetector = new GestureDetector(context, new GestureListener());
    }

    @Override
    public boolean onTouch(View v, MotionEvent event) {
        return gestureDetector.onTouchEvent(event);
    }

    private final class GestureListener extends GestureDetector.SimpleOnGestureListener {

        private static final int SWIPE_VELOCITY_THRESHOLD = 100;
        private static final int SWIPE_THRESHOLD = 100;

        @Override
        public boolean onDown(MotionEvent e) {
            return true;
        }

        @Override
        public boolean onFling(MotionEvent e1, MotionEvent e2, float velocityX, float velocityY) {
            float diffY = e2.getY() - e1.getY();
            float diffX = e2.getX() - e1.getX();
            if (Math.abs(diffX) > Math.abs(diffY)) {
                if (Math.abs(diffX) > SWIPE_THRESHOLD && Math.abs(velocityX) > SWIPE_VELOCITY_THRESHOLD) {
                    if (diffX > 0) {
                        onSwipeRight();
                    } else {
                        onSwipeLeft();
                    }
                }
            } else if (Math.abs(diffY) > SWIPE_THRESHOLD && Math.abs(velocityY) > SWIPE_VELOCITY_THRESHOLD) {
                if (diffY > 0) {
                    onSwipeBottom();
                } else {
                    onSwipeTop();
                }
            }
            return true;
        }
    }

    public void onSwipeRight() {
    }

    public void onSwipeLeft() {
    }

    public void onSwipeTop() {
    }

    public void onSwipeBottom() {
    }
}

Applied to a view...

view.setOnTouchListener(new OnSwipeListener(context) {
    public void onSwipeTop() {
        Log.d("OnSwipeListener", "onSwipeTop");
    }

    public void onSwipeRight() {
        Log.d("OnSwipeListener", "onSwipeRight");
    }

    public void onSwipeLeft() {
        Log.d("OnSwipeListener", "onSwipeLeft");
    }

    public void onSwipeBottom() {
        Log.d("OnSwipeListener", "onSwipeBottom");
    }
});

Section 138.2: Basic Gesture Detection

public class GestureActivity extends Activity implements
        GestureDetector.OnDoubleTapListener,
        GestureDetector.OnGestureListener {

    private GestureDetector mGestureDetector;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        mGestureDetector = new GestureDetector(this, this);
        mGestureDetector.setOnDoubleTapListener(this);
    }

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        mGestureDetector.onTouchEvent(event);
        return super.onTouchEvent(event);
    }

    @Override
    public boolean onDown(MotionEvent event) {
        Log.d("GestureDetector", "onDown");
        return true;
    }

    @Override
    public boolean onFling(MotionEvent event1, MotionEvent event2, float velocityX, float velocityY) {
        Log.d("GestureDetector", "onFling");
        return true;
    }

    @Override
    public void onLongPress(MotionEvent event) {
        Log.d("GestureDetector", "onLongPress");
    }

    @Override
    public boolean onScroll(MotionEvent e1, MotionEvent e2, float distanceX, float distanceY) {
        Log.d("GestureDetector", "onScroll");
        return true;
    }

    @Override
    public void onShowPress(MotionEvent event) {
        Log.d("GestureDetector", "onShowPress");
    }

    @Override
    public boolean onSingleTapUp(MotionEvent event) {
        Log.d("GestureDetector", "onSingleTapUp");
        return true;
    }

    @Override
    public boolean onDoubleTap(MotionEvent event) {
        Log.d("GestureDetector", "onDoubleTap");
        return true;
    }

    @Override
    public boolean onDoubleTapEvent(MotionEvent event) {
        Log.d("GestureDetector", "onDoubleTapEvent");
        return true;
    }

    @Override
    public boolean onSingleTapConfirmed(MotionEvent event) {
        Log.d("GestureDetector", "onSingleTapConfirmed");
        return true;
    }
}
Chapter 139: Doze Mode

Section 139.1: Whitelisting an Android application programmatically

Whitelisting won't disable Doze mode for your app, but a whitelisted app is allowed to use the network and hold partial wake locks during Doze. Whitelisting an Android application programmatically can be done as follows:

PowerManager pm = (PowerManager) getSystemService(Context.POWER_SERVICE);
boolean isIgnoringBatteryOptimizations = pm.isIgnoringBatteryOptimizations(getPackageName());
if (!isIgnoringBatteryOptimizations) {
    Intent intent = new Intent();
    intent.setAction(Settings.ACTION_REQUEST_IGNORE_BATTERY_OPTIMIZATIONS);
    intent.setData(Uri.parse("package:" + getPackageName()));
    startActivityForResult(intent, MY_IGNORE_OPTIMIZATION_REQUEST);
}

The result of starting the activity above can be verified by the following code:

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    if (requestCode == MY_IGNORE_OPTIMIZATION_REQUEST) {
        PowerManager pm = (PowerManager) getSystemService(Context.POWER_SERVICE);
        boolean isIgnoringBatteryOptimizations = pm.isIgnoringBatteryOptimizations(getPackageName());
        if (isIgnoringBatteryOptimizations) {
            // Ignoring battery optimization
        } else {
            // Not ignoring battery optimization
        }
    }
}

Section 139.2: Exclude app from using doze mode

1. Open the phone's settings
2. Open battery
3. Open the menu and select "battery optimization"
4. From the dropdown menu select "all apps"
5. Select the app you want to whitelist
6. Select "don't optimize"

Now this app will show under not optimized apps. An app can check whether it's whitelisted by calling isIgnoringBatteryOptimizations().

Chapter 140: Colors

Section 140.1: Color Manipulation

To manipulate colors we will modify the ARGB (Alpha, Red, Green and Blue) values of a color. First, extract the RGB values from your color:

int yourColor = Color.parseColor("#ae1f67");

int red = Color.red(yourColor);
int green = Color.green(yourColor);
int blue = Color.blue(yourColor);

Now you can reduce or increase the red, green, and blue values and combine them into a color again:

int newColor = Color.rgb(red, green, blue);

Or if you want to add some alpha to it, you can add it while creating the color:

int newColor = Color.argb(alpha, red, green, blue);

Alpha and RGB values should be in the range [0, 255].
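As a quick illustration of this kind of manipulation, here is a small helper (a sketch; scaling the channels by a factor is just one possible darkening policy) that darkens a color while preserving its alpha:

// Returns a darker variant of the given color; factor in [0, 1], e.g. 0.8f
public static int darken(int color, float factor) {
    int alpha = Color.alpha(color);
    int red = Math.round(Color.red(color) * factor);
    int green = Math.round(Color.green(color) * factor);
    int blue = Math.round(Color.blue(color) * factor);
    return Color.argb(alpha, red, green, blue);
}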
Chapter 141: Keyboard

Section 141.1: Register a callback for keyboard open and close

The idea is to measure the layout before and after each change; if there is a significant change, you can be somewhat certain that it's the soft keyboard.

// A variable to hold the last content layout height
private int mLastContentHeight = 0;

private ViewTreeObserver.OnGlobalLayoutListener keyboardLayoutListener = new ViewTreeObserver.OnGlobalLayoutListener() {
    @Override
    public void onGlobalLayout() {
        int currentContentHeight = findViewById(Window.ID_ANDROID_CONTENT).getHeight();

        if (mLastContentHeight > currentContentHeight + 100) {
            Timber.d("onGlobalLayout: Keyboard is open");
            mLastContentHeight = currentContentHeight;
        } else if (currentContentHeight > mLastContentHeight + 100) {
            Timber.d("onGlobalLayout: Keyboard is closed");
            mLastContentHeight = currentContentHeight;
        }
    }
};

Then, in our onCreate, set the initial value for mLastContentHeight:

mLastContentHeight = findViewById(Window.ID_ANDROID_CONTENT).getHeight();

and add the listener:

rootView.getViewTreeObserver().addOnGlobalLayoutListener(keyboardLayoutListener);

Don't forget to remove the listener on destroy:

rootView.getViewTreeObserver().removeOnGlobalLayoutListener(keyboardLayoutListener);

Section 141.2: Hide keyboard when user taps anywhere else on the screen

Add this code to your Activity. This also works for Fragments; there is no need to add it to each Fragment.

@Override
public boolean dispatchTouchEvent(MotionEvent ev) {
    View view = getCurrentFocus();
    if (view != null && (ev.getAction() == MotionEvent.ACTION_UP || ev.getAction() == MotionEvent.ACTION_MOVE)
            && view instanceof EditText
            && !view.getClass().getName().startsWith("android.webkit.")) {
        int scrcoords[] = new int[2];
        view.getLocationOnScreen(scrcoords);
        float x = ev.getRawX() + view.getLeft() - scrcoords[0];
        float y = ev.getRawY() + view.getTop() - scrcoords[1];
        if (x < view.getLeft() || x > view.getRight() || y < view.getTop() || y > view.getBottom()) {
            ((InputMethodManager) this.getSystemService(Context.INPUT_METHOD_SERVICE))
                    .hideSoftInputFromWindow(this.getWindow().getDecorView().getApplicationWindowToken(), 0);
        }
    }
    return super.dispatchTouchEvent(ev);
}

Chapter 142: RenderScript

RenderScript is a scripting language that allows you to write high-performance graphics rendering and raw computational code. It provides a means of writing performance-critical code that the system later compiles to native code for the processor it can run on. This could be the CPU, a multi-core CPU, or even the GPU. Which it ultimately runs on depends on many factors that aren't readily available to the developer, but also depends on what architecture the internal platform compiler supports.

Section 142.1: Getting Started

RenderScript is a framework to allow high-performance parallel computation on Android. Scripts you write will be executed across all available processors (e.g. CPU, GPU, etc.) in parallel, allowing you to focus on the task you want to achieve instead of how it is scheduled and executed. Scripts are written in a C99-based language (C99 being an old version of the C programming language standard). For each script a Java class is created which allows you to easily interact with RenderScript in your Java code.

Setting up your project

There are two different ways to access RenderScript in your app: with the Android framework libraries or with the Support Library. Even if you don't want to target devices before API level 11 you should always use the Support Library implementation, because it ensures compatibility across many different devices. To use the Support Library implementation you need to use at least build tools version 18.1.0!

Now let's set up the build.gradle file of your application:

android {
    compileSdkVersion 24
    buildToolsVersion '24.0.1'

    defaultConfig {
        minSdkVersion 8
        targetSdkVersion 24

        renderscriptTargetApi 18
        renderscriptSupportModeEnabled true
    }
}

renderscriptTargetApi: This should be set to the earliest API level which provides all the RenderScript functionality you require.

renderscriptSupportModeEnabled: This enables the use of the Support Library RenderScript implementation.

How RenderScript works

A typical RenderScript consists of two things: Kernels and Functions. A function is just what it sounds like - it accepts an input, does something with that input and returns an output. A Kernel is where the real power of RenderScript comes from.

A Kernel is a function which is executed against every element inside an Allocation. An Allocation can be used to pass data like a Bitmap or a byte array to a RenderScript, and Allocations are also used to get a result back from a Kernel. Kernels can either take one Allocation as input and another as output, or they can modify the data inside just one Allocation.
Now lets setup the build.gradle le of your application: android { compileSdkVersion 24 buildToolsVersion '24.0.1' defaultConfig { minSdkVersion 8 targetSdkVersion 24 renderscriptTargetApi 18 renderscriptSupportModeEnabled true } } renderscriptTargetApi: This should be set to the version earliest API level which provides all RenderScript functionality you require. renderscriptSupportModeEnabled: This enables the use of the Support Library RenderScript implementation. How RenderScript works A typical RenderScript consists of two things: Kernels and Functions. A function is just what it sounds like - it accepts an input, does something with that input and returns an output. A Kernel is where the real power of RenderScript comes from. A Kernel is a function which is executed against every element inside an Allocation. An Allocation can be used to pass data like a Bitmap or a byte array to a RenderScript and they are also used to get a result from a Kernel. Kernels can either take one Allocation as input and another as output or they can modify the data inside just one Allocation. GoalKicker.com Android Notes for Professionals 783 You can write your one Kernels, but there are also many predened Kernels which you can use to perform common operations like a Gaussian Image Blur. As already mentioned for every RenderScript le a class is generated to interact with it. These classes always start with the prex ScriptC_ followed by the name of the RenderScript le. For example if your RenderScript le is called example then the generated Java class will be called ScriptC_example. All predened Scripts just start with the prex Script - for example the Gaussian Image Blur Script is called ScriptIntrinsicBlur. Writing your rst RenderScript The following example is based on an example on GitHub. It performs basic image manipulation by modifying the saturation of an image. You can nd the source code here and check it out if you want to play around with it yourself. Here's a quick gif of what the result is supposed to look like: RenderScript Boilerplate RenderScript les reside in the folder src/main/rs in your project. Each le has the le extension .rs and has to contain two #pragma statements at the top: #pragma version(1) #pragma rs java_package_name(your.package.name) #pragma version(1): This can be used to set the version of RenderScript you are using. Currently there is only version 1. #pragma rs java_package_name(your.package.name): This can be used to set the package name of the Java class generated to interact with this particular RenderScript. There is another #pragma you should usually set in each of your RenderScript les and it is used to set the oating GoalKicker.com Android Notes for Professionals 784 point precision. You can set the oating point precision to three dierent levels: #pragma rs_fp_full: This is the strictest setting with the highest precision and it is also the default value if don't specify anything. You should use this if you require high oating point precision. #pragma rs_fp_relaxed: This is ensures not quite as high oating point precision, but on some architectures it enables a bunch of optimizations which can cause your scripts to run faster. #pragma rs_fp_imprecise: This ensures even less precision and should be used if oating point precision does not really matter to your script. Most scripts can just use #pragma rs_fp_relaxed unless you really need high oating point precision. 
Global Variables

Now, just like in C code, you can define global variables or constants:

const static float3 gMonoMult = {0.299f, 0.587f, 0.114f};

float saturationLevel = 0.0f;

The variable gMonoMult is of type float3. This means it is a vector consisting of 3 float numbers. The other float variable, called saturationLevel, is not constant; therefore you can set it at runtime to a value you like. You can use variables like this in your Kernels or functions, so they are another way to give input to or receive output from your RenderScripts. For each non-constant variable a getter and a setter method will be generated on the associated Java class.

Kernels

Now let's get started implementing the Kernel. For the purposes of this example I am not going to explain the math used in the Kernel to modify the saturation of the image, but instead will focus on how to implement a Kernel and how to use it. At the end of this chapter I will quickly explain what the code in this Kernel is actually doing.

Kernels in general

Let's take a look at the source code first:

uchar4 __attribute__((kernel)) saturation(uchar4 in) {
    float4 f4 = rsUnpackColor8888(in);
    float3 dotVector = dot(f4.rgb, gMonoMult);
    float3 newColor = mix(dotVector, f4.rgb, saturationLevel);
    return rsPackColorTo8888(newColor);
}

As you can see, it looks like a normal C function with one exception: the __attribute__((kernel)) between the return type and the method name. This is what tells RenderScript that this method is a Kernel. Another thing you might notice is that this method accepts a uchar4 parameter and returns another uchar4 value. uchar4 is - like the float3 variable we discussed in the chapter before - a vector. It contains 4 uchar values, which are just byte values in the range from 0 to 255.

You can access these individual values in many different ways; for example in.r would return the byte which corresponds to the red channel of a pixel. We use a uchar4 since each pixel is made up of 4 values - r for red, g for green, b for blue and a for alpha - and you can access them with this shorthand. RenderScript also allows you to take any number of values from a vector and create another vector with them. For example in.rgb would return a uchar3 value which just contains the red, green and blue parts of the pixel without the alpha value.

At runtime RenderScript will call this Kernel method for each pixel of an image, which is why the return value and parameter are just one uchar4 value. RenderScript will run many of these calls in parallel on all available processors, which is why RenderScript is so powerful. This also means that you don't have to worry about threading or thread safety; you can just implement whatever you want to do to each pixel and RenderScript takes care of the rest.

When calling a Kernel in Java you supply two Allocation variables, one which contains the input data and another one which will receive the output. Your Kernel method will be called for each value in the input Allocation and will write the result to the output Allocation.

RenderScript Runtime API methods

In the Kernel above a few methods are used which are provided out of the box. RenderScript provides many such methods, and they are vital for almost anything you are going to do with RenderScript. Among them are methods to do math operations like sin() and helper methods like mix(), which mixes two values according to a third value.
But there are also methods for more complex operations when dealing with vectors, quaternions and matrices.

The official RenderScript Runtime API Reference is the best resource out there if you want to know more about a particular method or are looking for a specific method which performs a common operation like calculating the dot product of a matrix. You can find this documentation here.

Kernel Implementation

Now let's take a look at the specifics of what this Kernel is doing. Here's the first line in the Kernel:

float4 f4 = rsUnpackColor8888(in);

The first line calls the built-in method rsUnpackColor8888(), which transforms the uchar4 value to a float4 value. Each color channel is also transformed to the range 0.0f - 1.0f, where 0.0f corresponds to a byte value of 0 and 1.0f to 255. The main purpose of this is to make all the math in this Kernel a lot simpler.

float3 dotVector = dot(f4.rgb, gMonoMult);

This next line uses the built-in method dot() to calculate the dot product of two vectors. gMonoMult is the constant value we defined a few paragraphs above. Since both vectors need to be of the same length to calculate the dot product, and since we just want to affect the color channels and not the alpha channel of a pixel, we use the shorthand .rgb to get a new float3 vector which just contains the red, green and blue color channels.

Those of us who still remember from school how the dot product works will quickly notice that the dot product should return just one value and not a vector. Yet in the code above we are assigning the result to a float3 vector. This is again a feature of RenderScript. When you assign a one-dimensional number to a vector, all elements in the vector will be set to this value. For example, the following snippet will assign 2.0f to each of the three values in the float3 vector:

float3 example = 2.0f;

So the result of the dot product above is assigned to each element in the float3 vector.

Now comes the part in which we actually use the global variable saturationLevel to modify the saturation of the image:

float3 newColor = mix(dotVector, f4.rgb, saturationLevel);

This uses the built-in method mix() to mix together the original color with the dot product vector we created above. How they are mixed together is determined by the global saturationLevel variable. So a saturationLevel of 0.0f will cause the resulting color to have no part of the original color values and to only consist of values in the dotVector, which results in a black-and-white or grayed-out image. A value of 1.0f will cause the resulting color to be completely made up of the original color values, and values above 1.0f will multiply the original colors to make them more bright and intense.
To create an instance of these classes you first need an instance of the RenderScript class:

final RenderScript renderScript = RenderScript.create(context);

The static method create() can be used to create a RenderScript instance from a Context. You can then instantiate the Java class which was generated for your script. If you called the RenderScript file saturation.rs, then the class will be called ScriptC_saturation:

final ScriptC_saturation script = new ScriptC_saturation(renderScript);

On this class you can now set the saturation level and call the Kernel. The setter which was generated for the saturationLevel variable will have the prefix set_ followed by the name of the variable:

script.set_saturationLevel(1.0f);

There is also a getter prefixed with get_ which allows you to get the saturation level currently set:

float saturationLevel = script.get_saturationLevel();

Kernels you define in your RenderScript are prefixed with forEach_ followed by the name of the Kernel method. The Kernel we have written expects an input Allocation and an output Allocation as its parameters:

script.forEach_saturation(inputAllocation, outputAllocation);

The input Allocation needs to contain the input image, and after the forEach_saturation method has finished, the output Allocation will contain the modified image data.

Once you have an Allocation instance you can copy data from and to those Allocations by using the methods copyFrom() and copyTo(). For example, you can copy a new image into your input Allocation by calling:

inputAllocation.copyFrom(inputBitmap);

The same way you can retrieve the result image by calling copyTo() on the output Allocation:

outputAllocation.copyTo(outputBitmap);

Creating Allocation instances

There are many ways to create an Allocation. Once you have an Allocation instance you can copy new data from and to those Allocations with copyTo() and copyFrom() as explained above, but to create them initially you have to know exactly what kind of data you are working with. Let's start with the input Allocation: we can use the static method createFromBitmap() to quickly create our input Allocation from a Bitmap:

final Allocation inputAllocation = Allocation.createFromBitmap(renderScript, image);

In this example the input image never changes, so we never need to modify the input Allocation again. We can reuse it each time the saturationLevel changes to create a new output Bitmap.

Creating the output Allocation is a little more complex. First we need to create what's called a Type. A Type is used to tell an Allocation what kind of data it's dealing with. Usually one uses the Type.Builder class to quickly create an appropriate Type. Let's take a look at the code first:

final Type outputType = new Type.Builder(renderScript, Element.RGBA_8888(renderScript))
        .setX(inputBitmap.getWidth())
        .setY(inputBitmap.getHeight())
        .create();

We are working with a normal 32-bit (or in other words 4-byte) per pixel Bitmap with 4 color channels. That's why we are choosing Element.RGBA_8888 to create the Type. Then we use the methods setX() and setY() to set the width and height of the output image to the same size as the input image. The method create() then creates the Type with the parameters we specified. Once we have the correct Type we can create the output Allocation with the static method createTyped():

final Allocation outputAllocation = Allocation.createTyped(renderScript, outputType);

Now we are almost done.
We also need an output Bitmap into which we can copy the data from the output Allocation. To do this we use the static method createBitmap() to create a new empty Bitmap with the same size and configuration as the input Bitmap.

final Bitmap outputBitmap = Bitmap.createBitmap(
        inputBitmap.getWidth(),
        inputBitmap.getHeight(),
        inputBitmap.getConfig()
);

And with that we have all the puzzle pieces to execute our RenderScript.

Full example

Now let's put all this together in one example:

// Create the RenderScript instance
final RenderScript renderScript = RenderScript.create(context);

// Create the input Allocation
final Allocation inputAllocation = Allocation.createFromBitmap(renderScript, inputBitmap);

// Create the output Type
final Type outputType = new Type.Builder(renderScript, Element.RGBA_8888(renderScript))
        .setX(inputBitmap.getWidth())
        .setY(inputBitmap.getHeight())
        .create();

// And use the Type to create an output Allocation
final Allocation outputAllocation = Allocation.createTyped(renderScript, outputType);

// Create an empty output Bitmap from the input Bitmap
final Bitmap outputBitmap = Bitmap.createBitmap(
        inputBitmap.getWidth(),
        inputBitmap.getHeight(),
        inputBitmap.getConfig()
);

// Create an instance of our script
final ScriptC_saturation script = new ScriptC_saturation(renderScript);

// Set the saturation level
script.set_saturationLevel(2.0f);

// Execute the Kernel
script.forEach_saturation(inputAllocation, outputAllocation);

// Copy the result data to the output Bitmap
outputAllocation.copyTo(outputBitmap);

// Display the result Bitmap somewhere
someImageView.setImageBitmap(outputBitmap);

Conclusion

With this introduction you should be all set to write your own RenderScript Kernels for simple image manipulation. However, there are a few things you have to keep in mind:

RenderScript only works in Application projects: Currently RenderScript files cannot be part of a library project.

Watch out for memory: RenderScript is very fast, but it can also be memory intensive. There should never be more than one instance of RenderScript at any time. You should also reuse as much as possible. Normally you just need to create your Allocation instances once and can reuse them in the future. The same goes for output Bitmaps or your script instances. Reuse as much as possible.

Do your work in the background: Again, RenderScript is very fast, but not instant in any way. Any Kernel, especially complex ones, should be executed off the UI thread in an AsyncTask or something similar. However, for the most part you don't have to worry about memory leaks. All RenderScript related classes only use the application Context and therefore don't cause memory leaks. But you still have to worry about the usual stuff like leaking View, Activity or any Context instance which you use yourself!

Use built-in stuff: There are many predefined scripts which perform tasks like image blurring, blending, converting, resizing. And there are many more built-in methods which help you implement your kernels. Chances are that if you want to do something, there is either a script or method which already does what you are trying to do. Don't reinvent the wheel.

If you want to quickly get started and play around with actual code I recommend you take a look at the example GitHub project which implements the exact example talked about in this tutorial. You can find the project here. Have fun with RenderScript!
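To make the "reuse as much as possible" and "work in the background" advice concrete, here is a minimal sketch (the worker thread and method name are illustrative, and it assumes the objects from the full example above are kept as fields) that re-runs the saturation Kernel with a new level while reusing the existing RenderScript, script, and Allocation instances:

// Re-run the Kernel with a new saturation level; only the output changes,
// all expensive objects from the example above are reused.
void applySaturation(final float newLevel) {
    new Thread(new Runnable() {
        @Override
        public void run() {
            script.set_saturationLevel(newLevel);
            script.forEach_saturation(inputAllocation, outputAllocation);
            outputAllocation.copyTo(outputBitmap);
            // Post outputBitmap back to the UI thread before displaying it.
        }
    }).start();
}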
Section 142.2: Blur a View

BlurBitmapTask.java

public class BlurBitmapTask extends AsyncTask<Bitmap, Void, Bitmap> {
    private final WeakReference<ImageView> imageViewReference;
    private final RenderScript renderScript;

    private boolean shouldRecycleSource = false;

    public BlurBitmapTask(@NonNull Context context, @NonNull ImageView imageView) {
        // Use a WeakReference to ensure
        // the ImageView can be garbage collected
        imageViewReference = new WeakReference<>(imageView);
        renderScript = RenderScript.create(context);
    }

    // Decode image in background.
    @Override
    protected Bitmap doInBackground(Bitmap... params) {
        Bitmap bitmap = params[0];
        return blurBitmap(bitmap);
    }

    // Once complete, see if ImageView is still around and set bitmap.
    @Override
    protected void onPostExecute(Bitmap bitmap) {
        if (bitmap == null || isCancelled()) {
            return;
        }

        final ImageView imageView = imageViewReference.get();
        if (imageView == null) {
            return;
        }

        imageView.setImageBitmap(bitmap);
    }

    public Bitmap blurBitmap(Bitmap bitmap) {
        // https://plus.google.com/+MarioViviani/posts/fhuzYkji9zz

        // Let's create an empty bitmap with the same size as the bitmap we want to blur
        Bitmap outBitmap = Bitmap.createBitmap(bitmap.getWidth(), bitmap.getHeight(),
                Bitmap.Config.ARGB_8888);

        // Create an Intrinsic Blur Script using the RenderScript instance
        ScriptIntrinsicBlur blurScript =
                ScriptIntrinsicBlur.create(renderScript, Element.U8_4(renderScript));

        // Create the in/out Allocations with the RenderScript and the in/out bitmaps
        Allocation allIn = Allocation.createFromBitmap(renderScript, bitmap);
        Allocation allOut = Allocation.createFromBitmap(renderScript, outBitmap);

        // Set the radius of the blur
        blurScript.setRadius(25.f);

        // Perform the blur
        blurScript.setInput(allIn);
        blurScript.forEach(allOut);

        // Copy the final bitmap created by the out Allocation to the outBitmap
        allOut.copyTo(outBitmap);

        // recycle the original bitmap
        // nope, we are using the original bitmap as well :/
        if (shouldRecycleSource) {
            bitmap.recycle();
        }

        // After finishing everything, we destroy the RenderScript.
        renderScript.destroy();

        return outBitmap;
    }

    public boolean isShouldRecycleSource() {
        return shouldRecycleSource;
    }

    public void setShouldRecycleSource(boolean shouldRecycleSource) {
        this.shouldRecycleSource = shouldRecycleSource;
    }
}

Usage:

imageViewOverlayOnViewToBeBlurred
        .setImageDrawable(ContextCompat.getDrawable(this, android.R.color.transparent));
viewToBeBlurred.setDrawingCacheQuality(View.DRAWING_CACHE_QUALITY_LOW);
viewToBeBlurred.setDrawingCacheEnabled(true);
BlurBitmapTask blurBitmapTask = new BlurBitmapTask(this, imageViewOverlayOnViewToBeBlurred);
blurBitmapTask.execute(Bitmap.createBitmap(viewToBeBlurred.getDrawingCache()));
viewToBeBlurred.setDrawingCacheEnabled(false);
Section 142.3: Blur an image

This example demonstrates how to use the RenderScript API to blur an image (using a Bitmap). It uses ScriptIntrinsicBlur provided by the Android RenderScript API (API >= 17).

public class BlurProcessor {

    private RenderScript rs;
    private Allocation inAllocation;
    private Allocation outAllocation;
    private int width;
    private int height;

    private ScriptIntrinsicBlur blurScript;

    public BlurProcessor(RenderScript rs) {
        this.rs = rs;
    }

    public void initialize(int width, int height) {
        // Remember the dimensions for the check in process()
        this.width = width;
        this.height = height;

        blurScript = ScriptIntrinsicBlur.create(rs, Element.U8_4(rs));
        blurScript.setRadius(7f); // Set blur radius. 25 is max

        if (outAllocation != null) {
            outAllocation.destroy();
            outAllocation = null;
        }

        // Bitmap must have ARGB_8888 config for this type
        Type bitmapType = new Type.Builder(rs, Element.RGBA_8888(rs))
                .setX(width)
                .setY(height)
                .setMipmaps(false) // We are using MipmapControl.MIPMAP_NONE
                .create();

        // Create output allocation
        outAllocation = Allocation.createTyped(rs, bitmapType);

        // Create input allocation with same type as output allocation
        inAllocation = Allocation.createTyped(rs, bitmapType);
    }

    public void release() {
        if (blurScript != null) {
            blurScript.destroy();
            blurScript = null;
        }

        if (inAllocation != null) {
            inAllocation.destroy();
            inAllocation = null;
        }

        if (outAllocation != null) {
            outAllocation.destroy();
            outAllocation = null;
        }
    }

    public Bitmap process(Bitmap bitmap, boolean createNewBitmap) {
        if (bitmap.getWidth() != width || bitmap.getHeight() != height) {
            // Throw error if required
            return null;
        }

        // Copy data from bitmap to input allocation
        inAllocation.copyFrom(bitmap);

        // Set input for blur script
        blurScript.setInput(inAllocation);

        // Process and set data to the output allocation
        blurScript.forEach(outAllocation);

        if (createNewBitmap) {
            Bitmap returnVal = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
            outAllocation.copyTo(returnVal);
            return returnVal;
        }

        outAllocation.copyTo(bitmap);
        return bitmap;
    }
}

Each script has a kernel which processes the data, and it is generally invoked via a forEach method.

public class BlurActivity extends AppCompatActivity {

    private BlurProcessor blurProcessor;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // setup layout and other stuff
        blurProcessor = new BlurProcessor(RenderScript.create(getApplicationContext()));
    }

    private void loadImage(String path) {
        // Load image to bitmap
        Bitmap bitmap = loadBitmapFromPath(path);

        // Initialize processor for this bitmap
        blurProcessor.release();
        blurProcessor.initialize(bitmap.getWidth(), bitmap.getHeight());

        // Blur image. Pass createNewBitmap as false if you don't want to create a new bitmap.
        Bitmap blurImage = blurProcessor.process(bitmap, true);
    }
}

This concludes the example. It is advised to do the processing in a background thread.
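For instance, a minimal sketch (the wrapper method is illustrative and assumes the blurProcessor field from BlurActivity above) that keeps the blur work off the UI thread:

// Run the blur on a worker thread and post the result back to the UI.
private void blurInBackground(final Bitmap bitmap, final ImageView target) {
    new Thread(new Runnable() {
        @Override
        public void run() {
            final Bitmap blurred = blurProcessor.process(bitmap, true);
            target.post(new Runnable() {
                @Override
                public void run() {
                    target.setImageBitmap(blurred);
                }
            });
        }
    }).start();
}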
Chapter 143: Fresco

Fresco is a powerful system for displaying images in Android applications. In Android 4.x and lower, Fresco puts images in a special region of Android memory (called ashmem). This lets your application run faster - and suffer the dreaded OutOfMemoryError much less often. Fresco also supports streaming of JPEGs.

Section 143.1: Getting Started with Fresco

First, add Fresco to your build.gradle as shown in the Remarks section. If you need additional features, like animated GIF or WebP support, you have to add the corresponding Fresco artifacts as well.

Fresco needs to be initialized. You should only do this once, so placing the initialization in your Application is a good idea. An example for this would be:

public class MyApplication extends Application {
    @Override
    public void onCreate() {
        super.onCreate();
        Fresco.initialize(this);
    }
}

If you want to load remote images from a server, your app needs the internet permission. Simply add it to your AndroidManifest.xml:

<uses-permission android:name="android.permission.INTERNET" />

Then, add a SimpleDraweeView to your XML layout. Fresco does not support wrap_content for image dimensions, since you might have multiple images with different dimensions (placeholder image, error image, actual image, ...). So you can either add a SimpleDraweeView with fixed dimensions (or match_parent):

<com.facebook.drawee.view.SimpleDraweeView
    android:id="@+id/my_image_view"
    android:layout_width="120dp"
    android:layout_height="120dp"
    fresco:placeholderImage="@drawable/placeholder" />

Or supply an aspect ratio for your image:

<com.facebook.drawee.view.SimpleDraweeView
    android:id="@+id/my_image_view"
    android:layout_width="120dp"
    android:layout_height="wrap_content"
    fresco:viewAspectRatio="1.33"
    fresco:placeholderImage="@drawable/placeholder" />

Finally, you can set your image URI in Java:

SimpleDraweeView draweeView = (SimpleDraweeView) findViewById(R.id.my_image_view);
draweeView.setImageURI("http://yourdomain.com/yourimage.jpg");

That's it! You should see your placeholder drawable until the network image has been fetched.

Section 143.2: Using OkHttp 3 with Fresco

First, in addition to the normal Fresco Gradle dependency, you have to add the OkHttp 3 dependency to your build.gradle:

compile "com.facebook.fresco:imagepipeline-okhttp3:1.2.0" // Or a newer version.

When you initialize Fresco (usually in your custom Application implementation), you can now specify your OkHttp client:

OkHttpClient okHttpClient = new OkHttpClient(); // Build on your own OkHttpClient.

Context context = ... // Your Application context.

ImagePipelineConfig config = OkHttpImagePipelineConfigFactory
        .newBuilder(context, okHttpClient)
        .build();

Fresco.initialize(context, config);

Section 143.3: JPEG Streaming with Fresco using DraweeController

This example assumes that you have already added Fresco to your app (see this example):

SimpleDraweeView img = new SimpleDraweeView(context);

ImageRequest request = ImageRequestBuilder
        .newBuilderWithSource(Uri.parse("http://example.com/image.png"))
        .setProgressiveRenderingEnabled(true) // This is where the magic happens.
        .build();

DraweeController controller = Fresco.newDraweeControllerBuilder()
        .setImageRequest(request)
        .setOldController(img.getController()) // Get the current controller from our SimpleDraweeView.
        .build();

img.setController(controller); // Set the new controller to the SimpleDraweeView to enable progressive JPEGs.
Chapter 144: Swipe to Refresh

Section 144.1: How to add Swipe-to-Refresh to your app

Make sure the following dependency is added to your app's build.gradle file under dependencies:

compile 'com.android.support:support-core-ui:24.2.0'

Then add the SwipeRefreshLayout in your layout:

<android.support.v4.widget.SwipeRefreshLayout
    android:id="@+id/swipe_refresh_layout"
    android:layout_width="match_parent"
    android:layout_height="wrap_content">

    <!-- place your view here -->

</android.support.v4.widget.SwipeRefreshLayout>

Finally implement the SwipeRefreshLayout.OnRefreshListener listener:

mSwipeRefreshLayout = (SwipeRefreshLayout) findViewById(R.id.swipe_refresh_layout);
mSwipeRefreshLayout.setOnRefreshListener(new OnRefreshListener() {
    @Override
    public void onRefresh() {
        // your code
    }
});
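Note that the progress indicator keeps spinning until you dismiss it yourself. A minimal sketch (the method name is illustrative) of hiding it once your refresh work has finished:

// Call this once the refresh work triggered in onRefresh() has completed.
private void onRefreshComplete() {
    mSwipeRefreshLayout.setRefreshing(false);
}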
Section 144.2: Swipe To Refresh with RecyclerView

To add a Swipe To Refresh layout with a RecyclerView, add the following to your Activity/Fragment layout file:

<android.support.v4.widget.SwipeRefreshLayout
    android:id="@+id/refresh_layout"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    app:layout_behavior="@string/appbar_scrolling_view_behavior">

    <android.support.v7.widget.RecyclerView
        android:id="@+id/recycler_view"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:orientation="vertical"
        android:scrollbars="vertical" />

</android.support.v4.widget.SwipeRefreshLayout>

In your Activity/Fragment add the following to initialize the SwipeRefreshLayout:

SwipeRefreshLayout mSwipeRefreshLayout = (SwipeRefreshLayout) findViewById(R.id.refresh_layout);
mSwipeRefreshLayout.setColorSchemeResources(R.color.green_bg,
        android.R.color.holo_green_light,
        android.R.color.holo_orange_light,
        android.R.color.holo_red_light);
mSwipeRefreshLayout.setOnRefreshListener(new SwipeRefreshLayout.OnRefreshListener() {
    @Override
    public void onRefresh() {
        // Execute code when refresh layout swiped
    }
});

Chapter 145: Creating Splash screen

Section 145.1: Splash screen with animation

This example shows a simple but effective splash screen with animation that can be created by using Android Studio.

Step 1: Create an animation

Create a new directory named anim in the res directory. Right-click it and create a new Animation Resource file named fade_in.xml. Then, put the following code into the fade_in.xml file:

<?xml version="1.0" encoding="utf-8"?>
<set xmlns:android="http://schemas.android.com/apk/res/android"
    android:fillAfter="true" >
    <alpha
        android:duration="1000"
        android:fromAlpha="0.0"
        android:interpolator="@android:anim/accelerate_interpolator"
        android:toAlpha="1.0" />
</set>

Step 2: Create an activity

Create an empty activity using Android Studio named Splash. Then, put the following code into it:

public class Splash extends AppCompatActivity {

    Animation anim;
    ImageView imageView;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_splash);

        imageView = (ImageView) findViewById(R.id.imageView2); // Declare an imageView to show the animation.
        anim = AnimationUtils.loadAnimation(getApplicationContext(), R.anim.fade_in); // Create the animation.
        anim.setAnimationListener(new Animation.AnimationListener() {
            @Override
            public void onAnimationStart(Animation animation) {
            }

            @Override
            public void onAnimationEnd(Animation animation) {
                // HomeActivity.class is the activity to go to after showing the splash screen.
                startActivity(new Intent(Splash.this, HomeActivity.class));
            }

            @Override
            public void onAnimationRepeat(Animation animation) {
            }
        });

        imageView.startAnimation(anim);
    }
}

Next, put the following code into the layout file:

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:id="@+id/activity_splash"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:paddingBottom="@dimen/activity_vertical_margin"
    android:paddingLeft="@dimen/activity_horizontal_margin"
    android:paddingRight="@dimen/activity_horizontal_margin"
    android:paddingTop="@dimen/activity_vertical_margin"
    tools:context="your_packagename"
    android:orientation="vertical"
    android:background="@android:color/white">

    <ImageView
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:id="@+id/imageView2"
        android:layout_weight="1"
        android:src="@drawable/Your_logo_or_image" />
</LinearLayout>

Step 3: Replace the default launcher

Turn your Splash activity into a launcher by adding the following code to the AndroidManifest file:

<activity
    android:name=".Splash"
    android:theme="@style/AppTheme.NoActionBar">
    <intent-filter>
        <action android:name="android.intent.action.MAIN" />
        <category android:name="android.intent.category.LAUNCHER" />
    </intent-filter>
</activity>

Then, remove the default launcher activity by removing the following code from its entry in the AndroidManifest file:

<intent-filter>
    <action android:name="android.intent.action.MAIN" />
    <category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
Section 145.2: A basic splash screen

A splash screen is just like any other activity, but it can handle all of your startup needs in the background. Example:

Manifest:

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.package"
    android:versionCode="1"
    android:versionName="1.0" >

    <application
        android:allowBackup="false"
        android:icon="@drawable/ic_launcher"
        android:label="@string/app_name"
        android:theme="@style/AppTheme" >
        <activity
            android:name=".Splash"
            android:label="@string/app_name" >
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>
    </application>
</manifest>

Now our splash screen will be called as the first activity. Here is an example splash screen that also handles some critical app elements:

public class Splash extends Activity {

    public final int SPLASH_DISPLAY_LENGTH = 3000;

    private void checkPermission() {
        // Can add more permissions as per requirement
        if (ContextCompat.checkSelfPermission(this, Manifest.permission.WAKE_LOCK)
                != PackageManager.PERMISSION_GRANTED
                || ContextCompat.checkSelfPermission(this, Manifest.permission.INTERNET)
                != PackageManager.PERMISSION_GRANTED
                || ContextCompat.checkSelfPermission(this, Manifest.permission.ACCESS_NETWORK_STATE)
                != PackageManager.PERMISSION_GRANTED) {
            ActivityCompat.requestPermissions(this,
                    new String[]{Manifest.permission.WAKE_LOCK,
                            Manifest.permission.INTERNET,
                            Manifest.permission.ACCESS_NETWORK_STATE},
                    123);
        }
    }

    @Override
    protected void onCreate(Bundle sis) {
        super.onCreate(sis);

        // Set the content view. The XML file can contain nothing but an image,
        // such as a logo or the app icon.
        setContentView(R.layout.splash);

        // We want to display the splash screen for a few seconds before it automatically
        // disappears and loads the game. So we create a thread:
        new Handler().postDelayed(new Runnable() {
            @Override
            public void run() {
                // Request permissions. NOTE: Copying this and the manifest as-is will cause
                // the app to crash, as the permissions requested here aren't defined in the
                // manifest above.
                if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) {
                    checkPermission();
                }

                String lang = [load or determine the system language and set to default if it isn't available.]
                Locale locale = new Locale(lang);
                Locale.setDefault(locale);
                Configuration config = new Configuration();
                config.locale = locale;
                Splash.this.getResources().updateConfiguration(config,
                        Splash.this.getResources().getDisplayMetrics());

                // After three seconds, it will execute all of this code.
                // As such, we then want to redirect to the master activity.
                Intent mainIntent = new Intent(Splash.this, MainActivity.class);
                Splash.this.startActivity(mainIntent);

                // Then we finish this class. Dispose of it as it is no longer needed.
                Splash.this.finish();
            }
        }, SPLASH_DISPLAY_LENGTH);
    }

    public void onPause() {
        super.onPause();
        finish();
    }
}

Chapter 146: IntentService

Section 146.1: Creating an IntentService

To create an IntentService, create a class which extends IntentService, and within it, a method which overrides onHandleIntent:

package com.example.myapp;

public class MyIntentService extends IntentService {

    public MyIntentService() {
        super("MyIntentService");
    }

    @Override
    protected void onHandleIntent(Intent workIntent) {
        // Do something in the background, based on the contents of workIntent.
    }
}
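Like any other Service, the IntentService must also be declared in your AndroidManifest.xml so that startService() can find it - for example, with a minimal entry such as <service android:name=".MyIntentService" /> inside the <application> element.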
Section 146.2: Basic IntentService Example

The abstract class IntentService is a base class for services which run in the background without any user interface. Therefore, in order to update the UI, we have to make use of a receiver, which may be either a BroadcastReceiver or a ResultReceiver:

A BroadcastReceiver should be used if your service needs to communicate with multiple components that want to listen for communication.

A ResultReceiver should be used if your service needs to communicate with only the parent application (i.e. your application).

Within the IntentService, we have one key method, onHandleIntent(), in which we will do all actions, for example, preparing notifications, creating alarms, etc.

If you want to use your own IntentService, you have to extend it as follows:

public class YourIntentService extends IntentService {

    public YourIntentService() {
        super("YourIntentService");
    }

    @Override
    protected void onHandleIntent(Intent intent) {
        // TODO: Write your own code here.
    }
}

Calling/starting the activity can be done as follows:

Intent i = new Intent(this, YourIntentService.class);
startService(i);  // For the service.
startActivity(i); // For the activity; ignore this for now.

Similar to any activity, you can pass extra information such as bundle data to it as follows:

Intent passDataIntent = new Intent(this, YourIntentService.class);
passDataIntent.putExtra("foo", "bar");
startService(passDataIntent);

Now assume that we passed some data to the YourIntentService class. Based on this data, an action can be performed as follows:

public class YourIntentService extends IntentService {

    private String activityValue = "bar";

    public YourIntentService() {
        super("YourIntentService");
    }

    @Override
    protected void onHandleIntent(Intent intent) {
        String retrievedValue = intent.getStringExtra("foo");
        if (activityValue.equals(retrievedValue)) {
            // Send the notification to foo.
        } else {
            // Retrieving data failed.
        }
    }
}

The code above also shows how to handle constraints in the onHandleIntent() method.

Section 146.3: Sample Intent Service

Here is an example of an IntentService that pretends to load images in the background. All you need to do to implement an IntentService is to provide a constructor that calls the super(String) constructor, and to implement the onHandleIntent(Intent) method.

public class ImageLoaderIntentService extends IntentService {

    public static final String IMAGE_URL = "url";

    /**
     * Define a constructor and call the super(String) constructor, in order to name the worker
     * thread - this is important if you want to debug and know the name of the thread upon
     * which this Service is operating its jobs.
     */
    public ImageLoaderIntentService() {
        super("Example");
    }

    @Override
    protected void onHandleIntent(Intent intent) {
        // This is where you do all your logic - this code is executed on a background thread

        String imageUrl = intent.getStringExtra(IMAGE_URL);

        if (!TextUtils.isEmpty(imageUrl)) {
            Drawable image = HttpUtils.loadImage(imageUrl); // HttpUtils is made-up for the example
        }

        // Send your drawable back to the UI now, so that you can use it - there are many ways
        // to achieve this, but they are out of reach for this example
    }
}

In order to start an IntentService, you need to send an Intent to it. You can do so from an Activity, for example. Of course, you're not limited to that. Here is an example of how you would summon your new Service from an Activity class.

// You can use 'this' as the first parameter if your class is a Context
// (i.e. an Activity, another Service, etc.); otherwise, supply the Context differently.
Intent serviceIntent = new Intent(this, ImageLoaderIntentService.class);
serviceIntent.putExtra(IMAGE_URL, "http://www.example-site.org/some/path/to/an/image");
startService(serviceIntent);

The IntentService processes the data from its Intents sequentially, so you can send multiple Intents without worrying about whether they will collide with each other. Only one Intent at a time is processed; the rest go into a queue. When all the jobs are complete, the IntentService will shut itself down automatically.
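For example, queuing several image loads is just a matter of calling startService() repeatedly; the second URL below is illustrative:

// Each call enqueues one job; the IntentService works through them one at a time.
String[] urls = {
        "http://www.example-site.org/some/path/to/an/image",
        "http://www.example-site.org/some/path/to/another/image"
};
for (String url : urls) {
    Intent serviceIntent = new Intent(this, ImageLoaderIntentService.class);
    serviceIntent.putExtra(ImageLoaderIntentService.IMAGE_URL, url);
    startService(serviceIntent);
}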
Chapter 147: Implicit Intents

Parameter | Details
action | String: The Intent action, such as ACTION_VIEW.
uri | Uri: The Intent data URI.
packageContext | Context: A Context of the application package implementing this class.
cls | Class: The component class that is to be used for the intent.

Section 147.1: Implicit and Explicit Intents

An explicit intent is used for starting an activity or service within the same application package. In this case the name of the intended class is explicitly mentioned:

Intent intent = new Intent(this, MyComponent.class);
startActivity(intent);

However, an implicit intent is sent across the system for any application installed on the user's device that can handle that intent. This is used to share information between different applications.

Intent intent = new Intent("com.stackoverflow.example.VIEW");

// We need to check to see if there is an application installed that can handle this intent
if (getPackageManager().resolveActivity(intent, 0) != null) {
    startActivity(intent);
} else {
    // Handle error
}

More details on the differences can be found in the Android Developer docs here: Intent Resolution

Section 147.2: Implicit Intents

Implicit intents do not name a specific component, but instead declare a general action to perform, which allows a component from another app to handle it. For example, if you want to show the user a location on a map, you can use an implicit intent to request that another capable app show a specified location on a map.

Example:

// Create the text message with a string
Intent sendIntent = new Intent();
sendIntent.setAction(Intent.ACTION_SEND);
sendIntent.putExtra(Intent.EXTRA_TEXT, textMessage);
sendIntent.setType("text/plain");

// Verify that the intent will resolve to an activity
if (sendIntent.resolveActivity(getPackageManager()) != null) {
    startActivity(sendIntent);
}

Chapter 148: Publish to Play Store

Section 148.1: Minimal app submission guide

Requirements:

A developer account
An APK already built and signed with a non-debug key
A free app that doesn't have in-app billing, Firebase Cloud Messaging or Game Services

1. Head to https://play.google.com/apps/publish/
1a) Create your developer account if you do not have one
2. Click the Create new Application button
3. Click submit APK
4. Fill in all required fields in the form, including some assets that will be displayed on the Play Store
5. When satisfied, hit the Publish app button

See more about signing in Configure Signing Settings.

Chapter 149: Universal Image Loader

Section 149.1: Basic usage

1. Load an image, decode it into a bitmap, and display the bitmap in an ImageView (or any other view which implements the ImageAware interface):

ImageLoader.getInstance().displayImage(imageUri, imageView);

2. Load an image, decode it into a bitmap, and return the bitmap to a callback:

ImageLoader.getInstance().loadImage(imageUri, new SimpleImageLoadingListener() {
    @Override
    public void onLoadingComplete(String imageUri, View view, Bitmap loadedImage) {
        // Do whatever you want with the bitmap.
    }
});

3. Load an image, decode it into a bitmap and return the bitmap synchronously:

Bitmap bmp = ImageLoader.getInstance().loadImageSync(imageUri);

Section 149.2: Initialize Universal Image Loader

1. Add the following dependency to the build.gradle file:

compile 'com.nostra13.universalimageloader:universal-image-loader:1.9.5'

2. Add the following permissions to the AndroidManifest.xml file:

<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />

3. Initialize the Universal Image Loader. This must be done before the first usage:

ImageLoaderConfiguration config = new ImageLoaderConfiguration.Builder(this)
        // ...
        .build();
ImageLoader.getInstance().init(config);

The full configuration options can be found here.
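Putting initialization and basic usage together, a minimal end-to-end sketch might look like this (the Application subclass name and image URL are illustrative):

public class MyApplication extends Application {
    @Override
    public void onCreate() {
        super.onCreate();
        // Initialize once, before the first call to ImageLoader.
        ImageLoaderConfiguration config = new ImageLoaderConfiguration.Builder(this).build();
        ImageLoader.getInstance().init(config);
    }
}

// Later, anywhere in the app:
ImageLoader.getInstance().displayImage("http://example.com/image.jpg", imageView);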
Chapter 150: Image Compression

Section 150.1: How to compress an image without size change

Get the compressed Bitmap from the singleton class:

ImageView imageView = (ImageView) findViewById(R.id.imageView);
Bitmap bitmap = ImageUtils.getInstant().getCompressedBitmap("Your_Image_Path_Here");
imageView.setImageBitmap(bitmap);

ImageUtils.java:

public class ImageUtils {

    public static ImageUtils mInstant;

    public static ImageUtils getInstant() {
        if (mInstant == null) {
            mInstant = new ImageUtils();
        }
        return mInstant;
    }

    public Bitmap getCompressedBitmap(String imagePath) {
        float maxHeight = 1920.0f;
        float maxWidth = 1080.0f;
        Bitmap scaledBitmap = null;

        BitmapFactory.Options options = new BitmapFactory.Options();
        options.inJustDecodeBounds = true;
        Bitmap bmp = BitmapFactory.decodeFile(imagePath, options);

        int actualHeight = options.outHeight;
        int actualWidth = options.outWidth;

        float imgRatio = (float) actualWidth / (float) actualHeight;
        float maxRatio = maxWidth / maxHeight;

        if (actualHeight > maxHeight || actualWidth > maxWidth) {
            if (imgRatio < maxRatio) {
                imgRatio = maxHeight / actualHeight;
                actualWidth = (int) (imgRatio * actualWidth);
                actualHeight = (int) maxHeight;
            } else if (imgRatio > maxRatio) {
                imgRatio = maxWidth / actualWidth;
                actualHeight = (int) (imgRatio * actualHeight);
                actualWidth = (int) maxWidth;
            } else {
                actualHeight = (int) maxHeight;
                actualWidth = (int) maxWidth;
            }
        }

        options.inSampleSize = calculateInSampleSize(options, actualWidth, actualHeight);
        options.inJustDecodeBounds = false;
        options.inDither = false;
        options.inPurgeable = true;
        options.inInputShareable = true;
        options.inTempStorage = new byte[16 * 1024];

        try {
            bmp = BitmapFactory.decodeFile(imagePath, options);
        } catch (OutOfMemoryError exception) {
            exception.printStackTrace();
        }

        try {
            scaledBitmap = Bitmap.createBitmap(actualWidth, actualHeight, Bitmap.Config.ARGB_8888);
        } catch (OutOfMemoryError exception) {
            exception.printStackTrace();
        }

        float ratioX = actualWidth / (float) options.outWidth;
        float ratioY = actualHeight / (float) options.outHeight;
        float middleX = actualWidth / 2.0f;
        float middleY = actualHeight / 2.0f;

        Matrix scaleMatrix = new Matrix();
        scaleMatrix.setScale(ratioX, ratioY, middleX, middleY);

        Canvas canvas = new Canvas(scaledBitmap);
        canvas.setMatrix(scaleMatrix);
        canvas.drawBitmap(bmp, middleX - bmp.getWidth() / 2, middleY - bmp.getHeight() / 2,
                new Paint(Paint.FILTER_BITMAP_FLAG));

        ExifInterface exif = null;
        try {
            exif = new ExifInterface(imagePath);
            int orientation = exif.getAttributeInt(ExifInterface.TAG_ORIENTATION, 0);
            Matrix matrix = new Matrix();
            if (orientation == 6) {
                matrix.postRotate(90);
            } else if (orientation == 3) {
                matrix.postRotate(180);
            } else if (orientation == 8) {
                matrix.postRotate(270);
            }
            scaledBitmap = Bitmap.createBitmap(scaledBitmap, 0, 0, scaledBitmap.getWidth(),
                    scaledBitmap.getHeight(), matrix, true);
        } catch (IOException e) {
            e.printStackTrace();
        }

        ByteArrayOutputStream out = new ByteArrayOutputStream();
        scaledBitmap.compress(Bitmap.CompressFormat.JPEG, 85, out);
        byte[] byteArray = out.toByteArray();

        Bitmap updatedBitmap = BitmapFactory.decodeByteArray(byteArray, 0, byteArray.length);
        return updatedBitmap;
    }

    private int calculateInSampleSize(BitmapFactory.Options options, int reqWidth, int reqHeight) {
        final int height = options.outHeight;
        final int width = options.outWidth;
        int inSampleSize = 1;

        if (height > reqHeight || width > reqWidth) {
            final int heightRatio = Math.round((float) height / (float) reqHeight);
            final int widthRatio = Math.round((float) width / (float) reqWidth);
            inSampleSize = heightRatio < widthRatio ? heightRatio : widthRatio;
        }

        final float totalPixels = width * height;
        final float totalReqPixelsCap = reqWidth * reqHeight * 2;

        while (totalPixels / (inSampleSize * inSampleSize) > totalReqPixelsCap) {
            inSampleSize++;
        }

        return inSampleSize;
    }
}

The dimensions are the same after compressing the Bitmap. How did I check?

Bitmap beforeBitmap = BitmapFactory.decodeFile("Your_Image_Path_Here");
Log.i("Before Compress Dimension", beforeBitmap.getWidth() + "-" + beforeBitmap.getHeight());

Bitmap afterBitmap = ImageUtils.getInstant().getCompressedBitmap("Your_Image_Path_Here");
Log.i("After Compress Dimension", afterBitmap.getWidth() + "-" + afterBitmap.getHeight());

Output:

Before Compress: Dimension: 1080-1452
After Compress: Dimension: 1080-1452

Chapter 151: 9-Patch Images

Section 151.1: Basic rounded corners

The key to correct stretching is in the top and left borders. The top border controls horizontal stretching and the left border controls vertical stretching. This example creates rounded corners suitable for a Toast. The parts of the image that are below the top border and to the right of the left border will expand to fill all unused space. This example will stretch to all combinations of sizes, as shown below:

Section 151.2: Optional padding lines

Nine-patch images allow optional definition of the padding lines in the image. The padding lines are the lines on the right and at the bottom. If a View sets the 9-patch image as its background, the padding lines are used to define the space for the View's content (e.g. the text input in an EditText). If the padding lines are not defined, the left and top lines are used instead. The content area of the stretched image then looks like this:

Section 151.3: Basic spinner

The Spinner can be reskinned according to your own style requirements using a Nine Patch. As an example, see this Nine Patch:

As you can see, it has 3 extremely small areas of stretching marked. The top border has only the area left of the icon marked. That indicates that I want the left side (complete transparency) of the drawable to fill the Spinner view until the icon is reached. The left border has the transparent segments at the top and bottom of the icon marked. That indicates that both the top and the bottom will expand to the size of the Spinner view. This will leave the icon itself centered vertically.

Using the image without Nine Patch metadata:

Using the image with Nine Patch metadata:

Chapter 152: Email Validation

Section 152.1: Email address validation

Add the following method to check whether an email address is valid or not:

private boolean isValidEmailId(String email) {
    return Pattern.compile("^(([\\w-]+\\.)+[\\w-]+|([a-zA-Z]{1}|[\\w-]{2,}))@"
            + "((([0-1]?[0-9]{1,2}|25[0-5]|2[0-4][0-9])\\.([0-1]?"
            + "[0-9]{1,2}|25[0-5]|2[0-4][0-9])\\."
            + "([0-1]?[0-9]{1,2}|25[0-5]|2[0-4][0-9])\\.([0-1]?"
+ "[0-9]{1,2}|25[0-5]|2[0-4][0-9])){1}|" + "([a-zA-Z]+[\\w-]+\\.)+[a-zA-Z]{2,4})$").matcher(email).matches(); } The above method can easily be veried by converting the text of an EditText widget into a String: if(isValidEmailId(edtEmailId.getText().toString().trim())){ Toast.makeText(getApplicationContext(), "Valid Email Address.", Toast.LENGTH_SHORT).show(); }else{ Toast.makeText(getApplicationContext(), "InValid Email Address.", Toast.LENGTH_SHORT).show(); } Section 152.2: Email Address validation with using Patterns if (Patterns.EMAIL_ADDRESS.matcher(email).matches()){ Log.i("EmailCheck","It is valid"); } GoalKicker.com Android Notes for Professionals 814 Chapter 153: Bottom Sheets A bottom sheet is a sheet that slides up from the bottom edge of the screen. Section 153.1: Quick Setup Make sure the following dependency is added to your app's build.gradle le under dependencies: compile 'com.android.support:design:25.3.1' Then you can use the Bottom sheet using these options: BottomSheetBehavior to be used with CoordinatorLayout BottomSheetDialog which is a dialog with a bottom sheet behavior BottomSheetDialogFragment which is an extension of DialogFragment, that creates a BottomSheetDialog instead of a standard dialog. Section 153.2: BottomSheetBehavior like Google maps Version 2.1.x This example depends on Support Library 23.4.0.+. BottomSheetBehavior is characterized by : 1. Two toolbars with animations that respond to the bottom sheet movements. 2. A FAB that hides when it is near to the "modal toolbar" (the one that appears when you are sliding up). 3. A backdrop image behind bottom sheet with some kind of parallax eect. 4. A Title (TextView) in Toolbar that appears when bottom sheet reach it. 5. The notication satus bar can turn its background to transparent or full color. 6. A custom bottom sheet behavior with an "anchor" state. Now let's check them one by one: ToolBars When you open that view in Google Maps, you can see a toolbar in where you can search, it's the only one that I'm not doing exactly like Google Maps, because I wanted to do it more generic. Anyway that ToolBar is inside an AppBarLayout and it got hidden when you start dragging the BottomSheet and it appears again when the BottomSheet reach the COLLAPSED state. To achieve it you need to: create a Behavior and extend it from AppBarLayout.ScrollingViewBehavior override layoutDependsOn and onDependentViewChanged methods. Doing it you will listen for bottomSheet movements. create some methods to hide and unhide the AppBarLayout/ToolBar with animations. 
This is how I did it for rst toolbar or ActionBar: @Override GoalKicker.com Android Notes for Professionals 815 public boolean layoutDependsOn(CoordinatorLayout parent, View child, View dependency) { return dependency instanceof NestedScrollView; } @Override public boolean onDependentViewChanged(CoordinatorLayout parent, View child, View dependency) { if (mChild == null) { initValues(child, dependency); return false; } float dVerticalScroll = dependency.getY() - mPreviousY; mPreviousY = dependency.getY(); //going up if (dVerticalScroll <= 0 && !hidden) { dismissAppBar(child); return true; } return false; } private void initValues(final View child, View dependency) { mChild = child; mInitialY = child.getY(); BottomSheetBehaviorGoogleMapsLike bottomSheetBehavior = BottomSheetBehaviorGoogleMapsLike.from(dependency); bottomSheetBehavior.addBottomSheetCallback(new BottomSheetBehaviorGoogleMapsLike.BottomSheetCallback() { @Override public void onStateChanged(@NonNull View bottomSheet, @BottomSheetBehaviorGoogleMapsLike.State int newState) { if (newState == BottomSheetBehaviorGoogleMapsLike.STATE_COLLAPSED || newState == BottomSheetBehaviorGoogleMapsLike.STATE_HIDDEN) showAppBar(child); } @Override public void onSlide(@NonNull View bottomSheet, float slideOffset) { } }); } private void dismissAppBar(View child){ hidden = true; AppBarLayout appBarLayout = (AppBarLayout)child; mToolbarAnimation = appBarLayout.animate().setDuration(mContext.getResources().getInteger(android.R.integer.config_shor tAnimTime)); mToolbarAnimation.y(-(mChild.getHeight()+25)).start(); } private void showAppBar(View child) { hidden = false; AppBarLayout appBarLayout = (AppBarLayout)child; mToolbarAnimation = GoalKicker.com Android Notes for Professionals 816 appBarLayout.animate().setDuration(mContext.getResources().getInteger(android.R.integer.config_medi umAnimTime)); mToolbarAnimation.y(mInitialY).start(); } Here is the complete le if you need it The second Toolbar or "Modal" toolbar: You have to override the same methods, but in this one you have to take care of more behaviors: show/hide the ToolBar with animations change status bar color/background show/hide the BottomSheet title in the ToolBar close the bottomSheet or send it to collapsed state The code for this one is a little extensive, so I will let the link The FAB This is a Custom Behavior too, but extends from FloatingActionButton.Behavior. In onDependentViewChanged you have to look when it reach the "oSet" or point in where you want to hide it. 
In my case I want to hide it when it's near to the second toolbar, so I dig into the FAB's parent (a CoordinatorLayout) looking for the AppBarLayout that contains the ToolBar; then I use the ToolBar position as the offset:

@Override
public boolean onDependentViewChanged(CoordinatorLayout parent, FloatingActionButton child, View dependency) {
    if (offset == 0)
        setOffsetValue(parent);

    if (dependency.getY() <= 0)
        return false;

    if (child.getY() <= (offset + child.getHeight()) && child.getVisibility() == View.VISIBLE)
        child.hide();
    else if (child.getY() > offset && child.getVisibility() != View.VISIBLE)
        child.show();

    return false;
}

Complete Custom FAB Behavior link

The image behind the BottomSheet with parallax effect:

Like the others, it's a custom behavior; the only "complicated" thing in this one is the little algorithm that keeps the image anchored to the BottomSheet and avoids the image collapsing like the default parallax effect:

@Override
public boolean onDependentViewChanged(CoordinatorLayout parent, View child, View dependency) {
    if (mYmultiplier == 0) {
        initValues(child, dependency);
if (mState == STATE_ANCHOR_POINT) { ViewCompat.offsetTopAndBottom(child, mAnchorPoint); } else if (mState == STATE_EXPANDED) { ViewCompat.offsetTopAndBottom(child, mMinOffset); } else if (mHideable && mState == STATE_HIDDEN) { ViewCompat.offsetTopAndBottom(child, mParentHeight); } else if (mState == STATE_COLLAPSED) { ViewCompat.offsetTopAndBottom(child, mMaxOffset); } if (mViewDragHelper == null) { mViewDragHelper = ViewDragHelper.create(parent, mDragCallback); } mViewRef = new WeakReference<>(child); mNestedScrollingChildRef = new WeakReference<>(findScrollingChild(child)); return true; } public void onStopNestedScroll(CoordinatorLayout coordinatorLayout, V child, View target) { if (child.getTop() == mMinOffset) { setStateInternal(STATE_EXPANDED); return; } if (target != mNestedScrollingChildRef.get() || !mNestedScrolled) { return; } int top; int targetState; if (mLastNestedScrollDy > 0) { //top = mMinOffset; //targetState = STATE_EXPANDED; int currentTop = child.getTop(); if (currentTop > mAnchorPoint) { top = mAnchorPoint; targetState = STATE_ANCHOR_POINT; } else { top = mMinOffset; targetState = STATE_EXPANDED; } } else if (mHideable && shouldHide(child, getYVelocity())) { top = mParentHeight; targetState = STATE_HIDDEN; } else if (mLastNestedScrollDy == 0) { int currentTop = child.getTop(); GoalKicker.com Android Notes for Professionals 819 if (Math.abs(currentTop - mMinOffset) < Math.abs(currentTop - mMaxOffset)) { top = mMinOffset; targetState = STATE_EXPANDED; } else { top = mMaxOffset; targetState = STATE_COLLAPSED; } } else { //top = mMaxOffset; //targetState = STATE_COLLAPSED; int currentTop = child.getTop(); if (currentTop > mAnchorPoint) { top = mMaxOffset; targetState = STATE_COLLAPSED; } else { top = mAnchorPoint; targetState = STATE_ANCHOR_POINT; } } if (mViewDragHelper.smoothSlideViewTo(child, child.getLeft(), top)) { setStateInternal(STATE_SETTLING); ViewCompat.postOnAnimation(child, new SettleRunnable(child, targetState)); } else { setStateInternal(targetState); } mNestedScrolled = false; } public final void setState(@State int state) { if (state == mState) { return; } if (mViewRef == null) { // The view is not laid out yet; modify mState and let onLayoutChild handle it later /** * New behavior (added: state == STATE_ANCHOR_POINT ||) */ if (state == STATE_COLLAPSED || state == STATE_EXPANDED || state == STATE_ANCHOR_POINT || (mHideable && state == STATE_HIDDEN)) { mState = state; } return; } V child = mViewRef.get(); if (child == null) { return; } int top; if (state == STATE_COLLAPSED) { top = mMaxOffset; } else if (state == STATE_ANCHOR_POINT) { top = mAnchorPoint; } else if (state == STATE_EXPANDED) { top = mMinOffset; } else if (mHideable && state == STATE_HIDDEN) { top = mParentHeight; } else { throw new IllegalArgumentException("Illegal state argument: " + state); } setStateInternal(STATE_SETTLING); GoalKicker.com Android Notes for Professionals 820 if (mViewDragHelper.smoothSlideViewTo(child, child.getLeft(), top)) { ViewCompat.postOnAnimation(child, new SettleRunnable(child, state)); } } public static <V extends View> BottomSheetBehaviorGoogleMapsLike<V> from(V view) { ViewGroup.LayoutParams params = view.getLayoutParams(); if (!(params instanceof CoordinatorLayout.LayoutParams)) { throw new IllegalArgumentException("The view is not a child of CoordinatorLayout"); } CoordinatorLayout.Behavior behavior = ((CoordinatorLayout.LayoutParams) params) .getBehavior(); if (!(behavior instanceof BottomSheetBehaviorGoogleMapsLike)) { throw new IllegalArgumentException( "The 
                "The view is not associated with BottomSheetBehaviorGoogleMapsLike");
    }
    return (BottomSheetBehaviorGoogleMapsLike<V>) behavior;
}

Link to the whole project where you can see all the custom Behaviors.

Section 153.3: Modal bottom sheets with BottomSheetDialog

The BottomSheetDialog is a dialog styled as a bottom sheet. Just use:

// Create a new BottomSheetDialog
BottomSheetDialog dialog = new BottomSheetDialog(context);
// Inflate the layout R.layout.my_dialog_layout
dialog.setContentView(R.layout.my_dialog_layout);
// Show the dialog
dialog.show();

In this case you don't need to attach a BottomSheet behavior.

Section 153.4: Modal bottom sheets with BottomSheetDialogFragment

You can realize a modal bottom sheet using a BottomSheetDialogFragment.

The BottomSheetDialogFragment is a modal bottom sheet. It is a version of DialogFragment that shows a bottom sheet using a BottomSheetDialog instead of a floating dialog.

Just define the fragment:

public class MyBottomSheetDialogFragment extends BottomSheetDialogFragment {
    @Override
    public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) {
        return inflater.inflate(R.layout.my_fragment_bottom_sheet, container, false);
    }
}

Then use this code to show the fragment:

MyBottomSheetDialogFragment mySheetDialog = new MyBottomSheetDialogFragment();
FragmentManager fm = getSupportFragmentManager();
mySheetDialog.show(fm, "modalSheetDialog");

This Fragment will create a BottomSheetDialog.

Section 153.5: Persistent Bottom Sheets

You can achieve a persistent bottom sheet by attaching a BottomSheetBehavior to a child View of a CoordinatorLayout:

<android.support.design.widget.CoordinatorLayout >

    <!-- ..... -->

    <LinearLayout
        android:id="@+id/bottom_sheet"
        android:elevation="4dp"
        android:minHeight="120dp"
        app:behavior_peekHeight="120dp"
        ...
        app:layout_behavior="android.support.design.widget.BottomSheetBehavior">

        <!-- ..... -->

    </LinearLayout>
</android.support.design.widget.CoordinatorLayout>

Then in your code you can create a reference using:

// The View with the BottomSheetBehavior
View bottomSheet = coordinatorLayout.findViewById(R.id.bottom_sheet);
BottomSheetBehavior mBottomSheetBehavior = BottomSheetBehavior.from(bottomSheet);

You can set the state of your BottomSheetBehavior using the setState() method:

mBottomSheetBehavior.setState(BottomSheetBehavior.STATE_EXPANDED);

You can use one of these states:

STATE_COLLAPSED: this collapsed state is the default and shows just a portion of the layout along the bottom.
The height can be controlled with the app:behavior_peekHeight attribute (defaults to 0).

STATE_EXPANDED: the fully expanded state of the bottom sheet, where either the whole bottom sheet is visible (if its height is less than the containing CoordinatorLayout) or the entire CoordinatorLayout is filled.

STATE_HIDDEN: disabled by default (and enabled with the app:behavior_hideable attribute); enabling this allows users to swipe down on the bottom sheet to completely hide it.

If you'd like to receive callbacks of state changes, you can add a BottomSheetCallback:

mBottomSheetBehavior.setBottomSheetCallback(new BottomSheetCallback() {
    @Override
    public void onStateChanged(@NonNull View bottomSheet, int newState) {
        // React to state change
    }

    @Override
    public void onSlide(@NonNull View bottomSheet, float slideOffset) {
        // React to dragging events
    }
});

Section 153.6: Open BottomSheet DialogFragment in Expanded mode by default

A BottomSheet DialogFragment opens in STATE_COLLAPSED by default. It can be forced to open in STATE_EXPANDED and take up the full device screen with the help of the following code template:

@NonNull
@Override
public Dialog onCreateDialog(Bundle savedInstanceState) {
    BottomSheetDialog dialog = (BottomSheetDialog) super.onCreateDialog(savedInstanceState);
    dialog.setOnShowListener(new DialogInterface.OnShowListener() {
        @Override
        public void onShow(DialogInterface dialog) {
            BottomSheetDialog d = (BottomSheetDialog) dialog;

            FrameLayout bottomSheet = (FrameLayout) d.findViewById(android.support.design.R.id.design_bottom_sheet);
            BottomSheetBehavior.from(bottomSheet).setState(BottomSheetBehavior.STATE_EXPANDED);
        }
    });

    // Do something with your dialog like setContentView() or whatever
    return dialog;
}

Although the dialog animation is slightly noticeable, it does the task of opening the DialogFragment in full screen very well.

Chapter 154: EditText

Section 154.1: Working with EditTexts

The EditText is the standard text entry widget in Android apps. If the user needs to enter text into an app, this is the primary way for them to do that.

There are many important properties that can be set to customize the behavior of an EditText. Several of these are listed below. Check out the official text fields guide for even more input field details.

Usage

An EditText is added to a layout with all default behaviors with the following XML:

<EditText
    android:id="@+id/et_simple"
    android:layout_height="wrap_content"
    android:layout_width="match_parent">
</EditText>

Note that an EditText is simply a thin extension of the TextView and inherits all of the same properties.

Retrieving the Value

Getting the value of the text entered into an EditText is as follows:

EditText simpleEditText = (EditText) findViewById(R.id.et_simple);
String strValue = simpleEditText.getText().toString();

Further Entry Customization

We might want to limit the entry to a single line of text (avoid newlines):

<EditText
    android:singleLine="true"
    android:lines="1" />

You can limit the characters that can be entered into a field using the digits attribute:

<EditText
    android:inputType="number"
    android:digits="01" />

This would restrict the digits entered to just "0" and "1". We might want to limit the total number of characters with:

<EditText
    android:maxLength="5" />

Using these properties we can define the expected input behavior for text fields.
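The same kind of limit can also be applied from code; a minimal sketch (reusing et_simple from the usage snippet above) using the framework's InputFilter:

EditText simpleEditText = (EditText) findViewById(R.id.et_simple);
// Restrict the field to at most 5 characters, mirroring android:maxLength="5".
simpleEditText.setFilters(new InputFilter[] { new InputFilter.LengthFilter(5) });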
Adjusting Colors

You can adjust the highlight background color of selected text within an EditText with the android:textColorHighlight property:

<EditText
    android:textColorHighlight="#7cff88"
/>

Displaying Placeholder Hints

You may want to set the hint for the EditText control to prompt a user for specific input with:

<EditText
    ...
    android:hint="@string/my_hint">
</EditText>

Changing the bottom line color

Assuming you are using the AppCompat library, you can override the styles colorControlNormal, colorControlActivated, and colorControlHighlight:

<style name="Theme.App.Base" parent="Theme.AppCompat.Light.DarkActionBar">
    <item name="colorControlNormal">#d32f2f</item>
    <item name="colorControlActivated">#ff5722</item>
    <item name="colorControlHighlight">#f44336</item>
</style>

If you do not see these styles applied within a DialogFragment, there is a known bug when using the LayoutInflater passed into the onCreateView() method. The issue has already been fixed in the AppCompat v23 library. See this guide about how to upgrade. Another temporary workaround is to use the Activity's layout inflater instead of the one passed into the onCreateView() method:

public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) {
    View view = getActivity().getLayoutInflater().inflate(R.layout.dialog_fragment, container);
    return view; // return the inflated view
}

Listening for EditText Input

Check out the basic event listeners cliffnotes for a look at how to listen for changes to an EditText and perform an action when those changes occur.

Displaying Floating Label Feedback

Traditionally, the EditText hides the hint message (explained above) after the user starts typing. In addition, any validation error messages had to be managed manually by the developer. With the TextInputLayout you can set up a floating label to display hints and error messages. You can find more details here.
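Since the event-listener reference above is external, here is a minimal sketch of the usual pattern: attaching a TextWatcher to react to text changes (the names reuse the et_simple example from above):

EditText simpleEditText = (EditText) findViewById(R.id.et_simple);
simpleEditText.addTextChangedListener(new TextWatcher() {
    @Override
    public void beforeTextChanged(CharSequence s, int start, int count, int after) {
        // Called before the text is changed
    }

    @Override
    public void onTextChanged(CharSequence s, int start, int before, int count) {
        // Called while the text is changing
    }

    @Override
    public void afterTextChanged(Editable s) {
        // Called once the text has changed; validate or react to the input here
    }
});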
Section 154.2: Customizing the InputType

Text fields can have different input types, such as number, date, password, or email address. The type determines what kind of characters are allowed inside the field, and may prompt the virtual keyboard to optimize its layout for frequently used characters.

By default, any text contents within an EditText control is displayed as plain text. By setting the inputType attribute, we can facilitate input of different types of information, like phone numbers and passwords:

<EditText
    ...
    android:inputType="phone">
</EditText>

Most common input types include:

textUri: Text that will be used as a URI
textEmailAddress: Text that will be used as an e-mail address
textPersonName: Text that is the name of a person
textPassword: Text that is a password that should be obscured
number: A numeric only field
phone: For entering a phone number
date: For entering a date
time: For entering a time
textMultiLine: Allow multiple lines of text in the field

The android:inputType also allows you to specify certain keyboard behaviors, such as whether to capitalize all new words or use features like auto-complete and spelling suggestions. Here are some of the common input type values that define keyboard behaviors:

textCapSentences: Normal text keyboard that capitalizes the first letter for each new sentence
textCapWords: Normal text keyboard that capitalizes every word. Good for titles or person names
textAutoCorrect: Normal text keyboard that corrects commonly misspelled words

You can set multiple inputType attributes if needed (separated by '|'). Example:

<EditText
    android:id="@+id/postal_address"
    android:layout_width="fill_parent"
    android:layout_height="wrap_content"
    android:hint="@string/postal_address_hint"
    android:inputType="textPostalAddress|textCapWords|textNoSuggestions" />

You can see a list of all available input types here.

Section 154.3: Icon or button inside Custom Edit Text and its action and click listeners

This example shows how to build an EditText with an icon at the right side.

Note: This example uses setCompoundDrawablesWithIntrinsicBounds, so if you want to change the icon position, you can achieve that by changing the arguments passed to setCompoundDrawablesWithIntrinsicBounds in setIcon().

public class MKEditText extends AppCompatEditText {

    public interface IconClickListener {
        public void onClick();
    }

    private IconClickListener mIconClickListener;
    private static final String TAG = MKEditText.class.getSimpleName();
    private final int EXTRA_TOUCH_AREA = 50;
    private Drawable mDrawable;
    private boolean touchDown;

    public MKEditText(Context context, AttributeSet attrs, int defStyle) {
        super(context, attrs, defStyle);
    }

    public MKEditText(Context context) {
        super(context);
    }

    public MKEditText(Context context, AttributeSet attrs) {
        super(context, attrs);
    }

    public void showRightIcon() {
        mDrawable = ContextCompat.getDrawable(getContext(), R.drawable.ic_android_black_24dp);
        setIcon();
    }

    public void setIconClickListener(IconClickListener iconClickListener) {
        mIconClickListener = iconClickListener;
    }

    private void setIcon() {
        Drawable[] drawables = getCompoundDrawables();
        setCompoundDrawablesWithIntrinsicBounds(drawables[0], drawables[1], mDrawable, drawables[3]);
        setInputType(InputType.TYPE_CLASS_TEXT | InputType.TYPE_TEXT_VARIATION_PASSWORD);
        setSelection(getText().length());
    }

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        final int right = getRight();
        final int drawableSize = getCompoundPaddingRight();
        final int x = (int) event.getX();
        switch (event.getAction()) {
            case MotionEvent.ACTION_DOWN:
                if (x + EXTRA_TOUCH_AREA >= right - drawableSize && x <= right + EXTRA_TOUCH_AREA) {
                    touchDown = true;
                    return true;
                }
                break;
            case MotionEvent.ACTION_UP:
                if (x + EXTRA_TOUCH_AREA >= right - drawableSize && x <= right + EXTRA_TOUCH_AREA && touchDown) {
                    touchDown = false;
                    if (mIconClickListener != null) {
                        mIconClickListener.onClick();
                    }
                    return true;
                }
                touchDown = false;
                break;
        }
        return super.onTouchEvent(event);
    }
}

If you want to change the touch area, you can change the EXTRA_TOUCH_AREA value (the default given here is 50).

To enable the button and the click listener, you can call it from your Activity or Fragment like this:

MKEditText mkEditText = (MKEditText) findViewById(R.id.password);
mkEditText.showRightIcon();

mkEditText.setIconClickListener(new MKEditText.IconClickListener() {
    @Override
    public void onClick() {
        // You can do action here for the icon.
    }
});

Section 154.4: Hiding SoftKeyboard

Hiding the soft keyboard is a basic requirement when working with EditText. By default the soft keyboard can only be closed by pressing the back button, so most developers use the InputMethodManager to force Android to hide the virtual keyboard, calling hideSoftInputFromWindow and passing in the token of the window containing your focused view.
The code to do this:

public void hideSoftKeyboard() {
    InputMethodManager inputMethodManager = (InputMethodManager) getSystemService(Activity.INPUT_METHOD_SERVICE);
    inputMethodManager.hideSoftInputFromWindow(getCurrentFocus().getWindowToken(), 0);
}

The code is direct, but another major problem arises: the hide function needs to be called when some event occurs. What to do when you need the soft keyboard hidden upon pressing anywhere other than your EditText? The following code gives a neat function that needs to be called in your onCreate() method just once.

public void setupUI(View view) {
    //Set up touch listener for non-text box views to hide keyboard.
    if (!(view instanceof EditText)) {
        view.setOnTouchListener(new View.OnTouchListener() {
            public boolean onTouch(View v, MotionEvent event) {
                hideSoftKeyboard();
                return false;
            }
        });
    }

    //If a layout container, iterate over children and seed recursion.
    if (view instanceof ViewGroup) {
        for (int i = 0; i < ((ViewGroup) view).getChildCount(); i++) {
            View innerView = ((ViewGroup) view).getChildAt(i);
            setupUI(innerView);
        }
    }
}

Section 154.5: `inputType` attribute

The inputType attribute in the EditText widget (tested on Android 4.4.3 and 2.3.3):

<EditText
    android:id="@+id/et_test"
    android:inputType="?????"/>

textLongMessage = Keyboard: alphabet/default. Enter button: Send/Next. Emoticons: yes. Case: lowercase. Suggestions: yes. Additional chars: , and . and everything

textFilter = Keyboard: alphabet/default. Enter button: Send/Next. Emoticons: yes. Case: lowercase. Suggestions: no. Additional chars: , and . and everything

textCapWords = Keyboard: alphabet/default. Enter button: Send/Next. Emoticons: yes. Case: Camel Case. Suggestions: yes. Additional chars: , and . and everything

textCapSentences = Keyboard: alphabet/default. Enter button: Send/Next. Emoticons: yes. Case: Sentence case. Suggestions: yes. Additional chars: , and . and everything

time = Keyboard: numeric. Enter button: Send/Next. Emoticons: no. Case: -. Suggestions: no. Additional chars: :

textMultiLine = Keyboard: alphabet/default. Enter button: new line. Emoticons: yes. Case: lowercase. Suggestions: yes. Additional chars: , and . and everything

number = Keyboard: numeric. Enter button: Send/Next. Emoticons: no. Case: -. Suggestions: no. Additional chars: nothing

textEmailAddress = Keyboard: alphabet/default. Enter button: Send/Next. Emoticons: no. Case: lowercase. Suggestions: no. Additional chars: @ and . and everything

(No type) = Keyboard: alphabet/default. Enter button: new line. Emoticons: yes. Case: lowercase. Suggestions: yes. Additional chars: , and . and everything

textPassword = Keyboard: alphabet/default. Enter button: Send/Next. Emoticons: no. Case: lowercase. Suggestions: no. Additional chars: , and . and everything

text = Keyboard: alphabet/default. Enter button: Send/Next. Emoticons: yes. Case: lowercase. Suggestions: yes. Additional chars: , and . and everything

textShortMessage = Keyboard: alphabet/default. Enter button: emoticon. Emoticons: yes. Case: lowercase. Suggestions: yes. Additional chars: , and . and everything

textUri = Keyboard: alphabet/default. Enter button: Send/Next. Emoticons: no. Case: lowercase. Suggestions: no. Additional chars: / and . and everything

textCapCharacters = Keyboard: alphabet/default. Enter button: Send/Next. Emoticons: yes. Case: UPPERCASE. Suggestions: yes. Additional chars: , and . and everything

phone = Keyboard: numeric. Enter button: Send/Next. Emoticons: no. Case: -. Suggestions: no. Additional chars: * # . - / ( ) W P N , +
textPersonName = Keyboard: alphabet/default. Enter button: Send/Next. Emoticons: yes. Case: lowercase. Suggestions: yes. Additional chars: , and . and everything

Note: The auto-capitalization setting will change the default behavior.

Note 2: On the numeric keyboard, all numbers are English (1234567890).

Note 3: The correction/suggestion setting will change the default behavior.

Chapter 155: Speech to Text Conversion

Section 155.1: Speech to Text With Default Google Prompt Dialog

Trigger speech-to-text translation:

private void startListening() {
    //Intent to listen to user vocal input and return result in same activity
    Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);

    //Use a language model based on free-form speech recognition.
    intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
            RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
    intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE, Locale.getDefault());

    //Message to display in dialog box
    intent.putExtra(RecognizerIntent.EXTRA_PROMPT,
            getString(R.string.speech_to_text_info));
    try {
        startActivityForResult(intent, REQ_CODE_SPEECH_INPUT);
    } catch (ActivityNotFoundException a) {
        Toast.makeText(getApplicationContext(),
                getString(R.string.speech_not_supported),
                Toast.LENGTH_SHORT).show();
    }
}

Get the translated results in onActivityResult:

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    switch (requestCode) {
        case REQ_CODE_SPEECH_INPUT: {
            if (resultCode == RESULT_OK && null != data) {
                ArrayList<String> result = data
                        .getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
                txtSpeechInput.setText(result.get(0));
            }
            break;
        }
    }
}

Output: (screenshot omitted)

Section 155.2: Speech to Text without Dialog

The following code can be used to trigger speech-to-text translation without showing a dialog:

public void startListeningWithoutDialog() {
    // Intent to listen to user vocal input and return the result to the same activity.
    Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);

    // Use a language model based on free-form speech recognition.
    intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
            RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
    intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE, Locale.getDefault());
    intent.putExtra(RecognizerIntent.EXTRA_MAX_RESULTS, 5);
    intent.putExtra(RecognizerIntent.EXTRA_CALLING_PACKAGE,
            appContext.getPackageName());

    // Add custom listeners.
    CustomRecognitionListener listener = new CustomRecognitionListener();
    SpeechRecognizer sr = SpeechRecognizer.createSpeechRecognizer(appContext);
    sr.setRecognitionListener(listener);
    sr.startListening(intent);
}

The custom listener class CustomRecognitionListener used in the code above is implemented as follows:

class CustomRecognitionListener implements RecognitionListener {
    private static final String TAG = "RecognitionListener";

    public void onReadyForSpeech(Bundle params) {
        Log.d(TAG, "onReadyForSpeech");
    }

    public void onBeginningOfSpeech() {
        Log.d(TAG, "onBeginningOfSpeech");
    }

    public void onRmsChanged(float rmsdB) {
        Log.d(TAG, "onRmsChanged");
    }

    public void onBufferReceived(byte[] buffer) {
        Log.d(TAG, "onBufferReceived");
    }

    public void onEndOfSpeech() {
        Log.d(TAG, "onEndofSpeech");
    }

    public void onError(int error) {
        Log.e(TAG, "error " + error);
        // App-specific error callback supplied elsewhere
        conversionCallaback.onErrorOccured(TranslatorUtil.getErrorText(error));
    }

    public void onResults(Bundle results) {
        // Recognition results are delivered in the results bundle
        ArrayList<String> result = results.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
        if (result != null && !result.isEmpty()) {
            txtSpeechInput.setText(result.get(0));
        }
    }

    public void onPartialResults(Bundle partialResults) {
        Log.d(TAG, "onPartialResults");
    }

    public void onEvent(int eventType, Bundle params) {
        Log.d(TAG, "onEvent " + eventType);
    }
}

Chapter 156: Installing apps with ADB

Section 156.1: Uninstall an app

Write the following command in your terminal to uninstall an app with a provided package name:

adb uninstall <packagename>

Section 156.2: Install all APK files in a directory

Windows:

for %f in (C:\your_app_path\*.apk) do adb install "%f"

Linux:

for f in *.apk ; do adb install "$f" ; done

Section 156.3: Install an app

Write the following command in your terminal:

adb install [-rtsdg] <file>

Note that you have to pass a file that is on your computer and not on your device.

If you append -r at the end, then any existing conflicting APKs will be overwritten. Otherwise, the command will quit with an error. -g will immediately grant all runtime permissions. -d allows a version code downgrade (only applicable to debuggable packages). Use -s to install the application on the external SD card. -t will allow installing test applications.

Chapter 157: Count Down Timer

Parameters:

long millisInFuture: The total duration the timer will run for, a.k.a. how far in the future you want the timer to end. In milliseconds.

long countDownInterval: The interval at which you would like to receive timer updates. In milliseconds.

long millisUntilFinished: A parameter provided in onTick() that tells how long the CountDownTimer has remaining. In milliseconds.

Section 157.1: Creating a simple countdown timer

CountDownTimer is useful for repeatedly performing an action in a steady interval for a set duration. In this example, we will update a text view every second for 30 seconds telling how much time is remaining. Then when the timer finishes, we will set the TextView to say "Done."

TextView textView = (TextView) findViewById(R.id.text_view);

CountDownTimer countDownTimer = new CountDownTimer(30000, 1000) {
    public void onTick(long millisUntilFinished) {
        textView.setText(String.format(Locale.getDefault(), "%d sec.", millisUntilFinished / 1000L));
    }

    public void onFinish() {
        textView.setText("Done.");
    }
}.start();
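If the timer should not outlive the screen, keep the reference around and cancel it when appropriate. A minimal sketch, assuming countDownTimer from above is stored as a field of the Activity (in the snippet above it is a local variable):

@Override
protected void onDestroy() {
    super.onDestroy();
    if (countDownTimer != null) {
        // Stop pending ticks so the timer does not leak the Activity
        countDownTimer.cancel();
    }
}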
Section 157.2: A More Complex Example

In this example, we will pause/resume the CountDownTimer based off of the Activity lifecycle.

private static final long TIMER_DURATION = 60000L;
private static final long TIMER_INTERVAL = 1000L;

private CountDownTimer mCountDownTimer;
private TextView textView;

private long mTimeRemaining;

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);

    textView = (TextView) findViewById(R.id.text_view); // Defined in the xml layout.

    mCountDownTimer = new CountDownTimer(TIMER_DURATION, TIMER_INTERVAL) {
        @Override
        public void onTick(long millisUntilFinished) {
            textView.setText(String.format(Locale.getDefault(), "%d sec.", millisUntilFinished / 1000L));
            mTimeRemaining = millisUntilFinished; // Saving the remaining time in the Activity for pause/resume of the CountDownTimer.
        }

        @Override
        public void onFinish() {
            textView.setText("Done.");
        }
    }.start();
}

@Override
protected void onResume() {
    super.onResume();

    if (mCountDownTimer == null) { // Timer was paused, re-create with the saved time.
        mCountDownTimer = new CountDownTimer(mTimeRemaining, TIMER_INTERVAL) {
            @Override
            public void onTick(long millisUntilFinished) {
                textView.setText(String.format(Locale.getDefault(), "%d sec.", millisUntilFinished / 1000L));
                mTimeRemaining = millisUntilFinished;
            }

            @Override
            public void onFinish() {
                textView.setText("Done.");
            }
        }.start();
    }
}

@Override
protected void onPause() {
    super.onPause();
    mCountDownTimer.cancel();
    mCountDownTimer = null;
}

Chapter 158: Barcode and QR code reading

Section 158.1: Using QRCodeReaderView (based on Zxing)

QRCodeReaderView implements an Android view which shows the camera and notifies you when there's a QR code inside the preview. It uses the zxing open-source, multi-format 1D/2D barcode image processing library.

Adding the library to your project

Add the QRCodeReaderView dependency to your build.gradle:

dependencies{
    compile 'com.dlazaro66.qrcodereaderview:qrcodereaderview:2.0.0'
}

First use

Add a QRCodeReaderView to your layout:

<com.dlazaro66.qrcodereaderview.QRCodeReaderView
    android:id="@+id/qrdecoderview"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
/>

Create an Activity which implements onQRCodeReadListener, and use it as a listener of the QrCodeReaderView.

Make sure you have camera permissions in order to use the library (https://developer.android.com/training/permissions/requesting.html).
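On Marshmallow and above, the CAMERA permission also has to be requested at runtime. A minimal sketch using the support library (the request code 0 is an arbitrary choice):

if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
        != PackageManager.PERMISSION_GRANTED) {
    // Ask the user for the camera permission; the result is delivered
    // to onRequestPermissionsResult()
    ActivityCompat.requestPermissions(this,
            new String[] { Manifest.permission.CAMERA }, 0);
}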
Then in your Activity, you can use it as follows:

public class DecoderActivity extends Activity implements OnQRCodeReadListener {

    private TextView resultTextView;
    private QRCodeReaderView qrCodeReaderView;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_decoder);

        qrCodeReaderView = (QRCodeReaderView) findViewById(R.id.qrdecoderview);
        qrCodeReaderView.setOnQRCodeReadListener(this);

        // Use this function to enable/disable decoding
        qrCodeReaderView.setQRDecodingEnabled(true);

        // Use this function to change the autofocus interval (default is 5 secs)
        qrCodeReaderView.setAutofocusInterval(2000L);

        // Use this function to enable/disable Torch
        qrCodeReaderView.setTorchEnabled(true);

        // Use this function to set front camera preview
        qrCodeReaderView.setFrontCamera();

        // Use this function to set back camera preview
        qrCodeReaderView.setBackCamera();
    }

    // Called when a QR is decoded
    // "text" : the text encoded in QR
    // "points" : points where QR control points are placed in View
    @Override
    public void onQRCodeRead(String text, PointF[] points) {
        resultTextView.setText(text);
    }

    @Override
    protected void onResume() {
        super.onResume();
        qrCodeReaderView.startCamera();
    }

    @Override
    protected void onPause() {
        super.onPause();
        qrCodeReaderView.stopCamera();
    }
}

Chapter 159: Android PayPal Gateway Integration

Section 159.1: Setup PayPal in your android code

1) First go through the PayPal Developer website and create an application.

2) Now open your manifest file and add the permissions below:

<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />

3) And some required Activities and Services:

<service android:name="com.paypal.android.sdk.payments.PayPalService" android:exported="false" />
<activity android:name="com.paypal.android.sdk.payments.PaymentActivity" />
<activity android:name="com.paypal.android.sdk.payments.LoginActivity" />
<activity android:name="com.paypal.android.sdk.payments.PaymentMethodActivity" />
<activity android:name="com.paypal.android.sdk.payments.PaymentConfirmActivity" />
<activity android:name="com.paypal.android.sdk.payments.PayPalFuturePaymentActivity" />
<activity android:name="com.paypal.android.sdk.payments.FuturePaymentConsentActivity" />
<activity android:name="com.paypal.android.sdk.payments.FuturePaymentInfoActivity" />
<activity
    android:name="io.card.payment.CardIOActivity"
    android:configChanges="keyboardHidden|orientation" />
<activity android:name="io.card.payment.DataEntryActivity" />

4) Open your Activity class and set the configuration for your app:

//set the environment for production/sandbox/no network
private static final String CONFIG_ENVIRONMENT = PayPalConfiguration.ENVIRONMENT_PRODUCTION;

5) Now set the client id from the PayPal developer account:

private static final String CONFIG_CLIENT_ID = "PUT YOUR CLIENT ID";

6) Inside the onCreate method, call the PayPal service. The config object referenced below is a PayPalConfiguration built from the two constants above:

private static PayPalConfiguration config = new PayPalConfiguration()
        .environment(CONFIG_ENVIRONMENT)
        .clientId(CONFIG_CLIENT_ID);

Intent intent = new Intent(this, PayPalService.class);
intent.putExtra(PayPalService.EXTRA_PAYPAL_CONFIGURATION, config);
startService(intent);

7) Now you are ready to make a payment. On a button press, call the PaymentActivity:

PayPalPayment thingToBuy = new PayPalPayment(new BigDecimal(1), "USD",
        "androidhub4you.com", PayPalPayment.PAYMENT_INTENT_SALE);

Intent intent = new Intent(MainActivity.this, PaymentActivity.class);
intent.putExtra(PaymentActivity.EXTRA_PAYMENT, thingToBuy);
startActivityForResult(intent, REQUEST_PAYPAL_PAYMENT);
8) And finally, get the payment response in onActivityResult:

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    if (requestCode == REQUEST_PAYPAL_PAYMENT) {
        if (resultCode == Activity.RESULT_OK) {
            PaymentConfirmation confirm = data
                    .getParcelableExtra(PaymentActivity.EXTRA_RESULT_CONFIRMATION);
            if (confirm != null) {
                try {
                    Log.i("paymentExample", confirm.toJSONObject().toString());
                    JSONObject jsonObj = new JSONObject(confirm.toJSONObject().toString());
                    String paymentId = jsonObj.getJSONObject("response").getString("id");
                    System.out.println("payment id:-==" + paymentId);
                    Toast.makeText(getApplicationContext(), paymentId, Toast.LENGTH_LONG).show();
                } catch (JSONException e) {
                    Log.e("paymentExample", "an extremely unlikely failure occurred: ", e);
                }
            }
        } else if (resultCode == Activity.RESULT_CANCELED) {
            Log.i("paymentExample", "The user canceled.");
        } else if (resultCode == PaymentActivity.RESULT_EXTRAS_INVALID) {
            Log.i("paymentExample", "An invalid Payment was submitted. Please see the docs.");
        }
    }
}
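The service started in step 6 keeps running until it is stopped. A minimal sketch (not part of the original steps) that shuts it down when the Activity goes away:

@Override
public void onDestroy() {
    // Stop the PayPal service that was started in onCreate()
    stopService(new Intent(this, PayPalService.class));
    super.onDestroy();
}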
Chapter 160: Drawables

Section 160.1: Custom Drawable

Extend your class with Drawable and override these methods:

public class IconDrawable extends Drawable {
    /**
     * Paint for drawing the shape
     */
    private Paint paint;
    /**
     * Icon drawable to be drawn to the center of the shape
     */
    private Drawable icon;
    /**
     * Desired width and height of icon
     */
    private int desiredIconHeight, desiredIconWidth;

    /**
     * Public constructor for the Icon drawable
     *
     * @param icon            pass the drawable of the icon to be drawn at the center
     * @param backgroundColor background color of the shape
     */
    public IconDrawable(Drawable icon, int backgroundColor) {
        this.icon = icon;
        paint = new Paint(Paint.ANTI_ALIAS_FLAG);
        paint.setColor(backgroundColor);
        desiredIconWidth = 50;
        desiredIconHeight = 50;
    }

    @Override
    public void draw(Canvas canvas) {
        //if we are setting this drawable to an 80dp x 80dp ImageView,
        //getBounds will return those measurements, so we can draw according to that width.
        Rect bounds = getBounds();

        //drawing the circle with the center as origin and the center distance as radius
        canvas.drawCircle(bounds.centerX(), bounds.centerY(), bounds.centerX(), paint);

        //set the icon drawable's bounds to the center of the shape
        icon.setBounds(bounds.centerX() - (desiredIconWidth / 2),
                bounds.centerY() - (desiredIconHeight / 2),
                (bounds.centerX() - (desiredIconWidth / 2)) + desiredIconWidth,
                (bounds.centerY() - (desiredIconHeight / 2)) + desiredIconHeight);

        //draw the icon to the bounds
        icon.draw(canvas);
    }

    @Override
    public void setAlpha(int alpha) {
        //sets alpha to your whole shape
        paint.setAlpha(alpha);
    }

    @Override
    public void setColorFilter(ColorFilter colorFilter) {
        //sets color filter to your whole shape
        paint.setColorFilter(colorFilter);
    }

    @Override
    public int getOpacity() {
        //give the desired opacity of the shape
        return PixelFormat.TRANSLUCENT;
    }
}

Declare an ImageView in your layout:

<ImageView
    android:layout_width="80dp"
    android:id="@+id/imageView"
    android:layout_height="80dp" />

Set your custom drawable to the ImageView:

IconDrawable iconDrawable = new IconDrawable(
        ContextCompat.getDrawable(this, android.R.drawable.ic_media_play),
        ContextCompat.getColor(this, R.color.pink_300));
imageView.setImageDrawable(iconDrawable);

Section 160.2: Tint a drawable

A drawable can be tinted a certain color. This is useful for supporting different themes within your application, and reducing the number of drawable resource files.

Using framework APIs on SDK 21+:

Drawable d = context.getDrawable(R.drawable.ic_launcher);
d.setTint(Color.WHITE);

Using the android.support.v4 library on SDK 4+:

//Load the untinted resource
final Drawable drawableRes = ContextCompat.getDrawable(context, R.drawable.ic_launcher);
//Wrap it with the compatibility library so it can be altered
Drawable tintedDrawable = DrawableCompat.wrap(drawableRes);
//Apply a coloured tint
DrawableCompat.setTint(tintedDrawable, Color.WHITE);
//At this point you may use the tintedDrawable just as you usually would
//(and drawableRes can be discarded)

//NOTE: If your original drawableRes was in use somewhere (i.e. it was the result of
//a call to a `getBackground()` method) then at this point you still need to replace
//the background. setTint does *not* alter the instance that drawableRes points to,
//but instead creates a new drawable instance

Please note that the int color does not refer to a color resource; however, you are not limited to the colors defined in the Color class. When you have a color defined in your XML which you want to use, you must first get its value.
You can replace usages of Color.WHITE using the methods below.

When targeting older APIs:

getResources().getColor(R.color.your_color);

Or on newer targets:

ContextCompat.getColor(context, R.color.your_color);

Section 160.3: Circular View

For a circular View (in this case a TextView), create a drawable round_view.xml in the drawable folder:

<?xml version="1.0" encoding="utf-8"?>
<shape xmlns:android="http://schemas.android.com/apk/res/android"
    android:shape="oval">
    <solid android:color="#FAA23C" />
    <stroke android:color="#FFF" android:width="2dp" />
</shape>

Assign the drawable to the View:

<TextView
    android:id="@+id/game_score"
    android:layout_width="60dp"
    android:layout_height="60dp"
    android:background="@drawable/round_view"
    android:padding="6dp"
    android:text="100"
    android:textColor="#fff"
    android:textSize="20sp"
    android:textStyle="bold"
    android:gravity="center" />

Now it should look like an orange circle. (Screenshot omitted.)

Section 160.4: Make View with rounded corners

Create a drawable file named custom_rectangle.xml in the drawable folder:

<?xml version="1.0" encoding="utf-8"?>
<shape xmlns:android="http://schemas.android.com/apk/res/android"
    android:shape="rectangle" >

    <solid android:color="@android:color/white" />
    <corners android:radius="10dip" />
    <stroke
        android:width="1dp"
        android:color="@android:color/white" />
</shape>

Now apply the rectangle background to the View:

mView.setBackgroundResource(R.drawable.custom_rectangle);
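The same rounded rectangle can also be built in code with a GradientDrawable. A minimal sketch, assuming mView is the view from above and using a density conversion to approximate the 10dip radius:

GradientDrawable rectangle = new GradientDrawable();
rectangle.setShape(GradientDrawable.RECTANGLE);
rectangle.setColor(Color.WHITE);
// convert 10dip to pixels for the corner radius
float density = getResources().getDisplayMetrics().density;
rectangle.setCornerRadius(10 * density);
rectangle.setStroke((int) density, Color.WHITE); // 1dp stroke
mView.setBackground(rectangle); // use setBackgroundDrawable() below API 16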
Chapter 161: TransitionDrawable

Section 161.1: Animate views background color (switch-color) with TransitionDrawable

public void setCardColorTran(View view) {
    ColorDrawable[] color = {new ColorDrawable(Color.BLUE), new ColorDrawable(Color.RED)};
    TransitionDrawable trans = new TransitionDrawable(color);
    if (Build.VERSION.SDK_INT < android.os.Build.VERSION_CODES.JELLY_BEAN) {
        view.setBackgroundDrawable(trans);
    } else {
        view.setBackground(trans);
    }
    trans.startTransition(5000);
}

Section 161.2: Add transition or Cross-fade between two images

Step 1: Create a transition drawable in XML

Save this file transition.xml in the res/drawable folder of your project.

<transition xmlns:android="http://schemas.android.com/apk/res/android">
    <item android:drawable="@drawable/image1"/>
    <item android:drawable="@drawable/image2"/>
</transition>

The image1 and image2 are the two images that we want to transition, and they should be put in your res/drawable folder too.

Step 2: Add code for the ImageView in your XML layout to display the above drawable.

<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical"
    tools:context=".MainActivity" >

    <ImageView
        android:id="@+id/image_view"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:src="@drawable/image1"/>
</LinearLayout>

Step 3: Access the XML transition drawable in the onCreate() method of your Activity and start the transition in the onClick() event (imageView and transitionDrawable are fields of the Activity):

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);

    imageView = (ImageView) findViewById(R.id.image_view);
    transitionDrawable = (TransitionDrawable) ContextCompat.getDrawable(this, R.drawable.transition);

    imageView.setOnClickListener(new View.OnClickListener() {
        @Override
        public void onClick(final View view) {
            imageView.setImageDrawable(transitionDrawable);
            transitionDrawable.startTransition(1000);
        }
    });
}

Chapter 162: Vector Drawables

Parameters:

<vector>: Used to define a vector drawable

<group>: Defines a group of paths or subgroups, plus transformation information. The transformations are defined in the same coordinates as the viewport. The transformations are applied in the order of scale, rotate, then translate.

<path>: Defines paths to be drawn.

<clip-path>: Defines a path to be the current clip. Note that the clip path only applies to the current group and its children.

As the name implies, vector drawables are based on vector graphics. Vector graphics are a way of describing graphical elements using geometric shapes. This lets you create a drawable based on an XML vector graphic. Now there is no need to design different-sized images for mdpi, hdpi, xhdpi, etc. With a vector drawable you need to create the image only once, as an XML file, and you can scale it for every dpi and for different devices. This not only saves space but also simplifies maintenance.

Section 162.1: Importing SVG file as VectorDrawable

You can import an SVG file as a VectorDrawable in Android Studio; follow these steps:

"Right-click" on the res folder and select new > Vector Asset.

Select the Local File option and browse to your .svg file. Change the options to your liking and hit next. Done.

Section 162.2: VectorDrawable Usage Example

Here's an example vector asset which we're actually using in AppCompat:

res/drawable/ic_search.xml

<vector xmlns:android="..."
        android:width="24dp"
        android:height="24dp"
        android:viewportWidth="24.0"
        android:viewportHeight="24.0"
        android:tint="?attr/colorControlNormal">
    <path
        android:pathData="..."
        android:fillColor="@android:color/white"/>
</vector>

Using this drawable, an example ImageView declaration would be:

<ImageView
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    app:srcCompat="@drawable/ic_search"/>

You can also set it at run-time:

ImageView iv = (ImageView) findViewById(...);
iv.setImageResource(R.drawable.ic_search);

The same attribute and calls work for ImageButton too.

Section 162.3: VectorDrawable xml example

Here is a simple VectorDrawable in this vectordrawable.xml file.
<vector xmlns:android="http://schemas.android.com/apk/res/android"
    android:height="64dp"
    android:width="64dp"
    android:viewportHeight="600"
    android:viewportWidth="600" >
    <group
        android:name="rotationGroup"
        android:pivotX="300.0"
        android:pivotY="300.0"
        android:rotation="45.0" >
        <path
            android:name="v"
            android:fillColor="#000000"
            android:pathData="M300,70 l 0,-70 70,70 0,0 -70,70z" />
    </group>
</vector>

Chapter 163: VectorDrawable and AnimatedVectorDrawable

Section 163.1: Basic VectorDrawable

A VectorDrawable should consist of at least one <path> tag defining a shape:

<vector xmlns:android="http://schemas.android.com/apk/res/android"
    android:width="24dp"
    android:height="24dp"
    android:viewportWidth="24.0"
    android:viewportHeight="24.0">
    <path
        android:fillColor="#FF000000"
        android:pathData="M0,24 l12,-24 l12,24 z"/>
</vector>

This would produce a black triangle. (Image omitted.)

Section 163.2: <group> tags

A <group> tag allows the scaling, rotation, and position of one or more elements of a VectorDrawable to be adjusted:

<vector xmlns:android="http://schemas.android.com/apk/res/android"
    android:width="24dp"
    android:height="24dp"
    android:viewportWidth="24.0"
    android:viewportHeight="24.0">
    <path
        android:pathData="M0,0 h4 v4 h-4 z"
        android:fillColor="#FF000000"/>
    <group
        android:name="middle square group"
        android:translateX="10"
        android:translateY="10"
        android:rotation="45">
        <path
            android:pathData="M0,0 h4 v4 h-4 z"
            android:fillColor="#FF000000"/>
    </group>
    <group
        android:name="last square group"
        android:translateX="18"
        android:translateY="18"
        android:scaleX="1.5">
        <path
            android:pathData="M0,0 h4 v4 h-4 z"
            android:fillColor="#FF000000"/>
    </group>
</vector>

The example code above contains three identical <path> tags, all describing black squares. The first square is unadjusted. The second square is wrapped in a <group> tag which moves it and rotates it by 45°. The third square is wrapped in a <group> tag which moves it and stretches it horizontally by 50%. The result is as follows. (Image omitted.)

A <group> tag can contain multiple <path> and <clip-path> tags. It can even contain another <group>.

Section 163.3: Basic AnimatedVectorDrawable

An AnimatedVectorDrawable requires at least 3 components:

A VectorDrawable which will be manipulated

An objectAnimator which defines what property to change and how

The AnimatedVectorDrawable itself which connects the objectAnimator to the VectorDrawable to create the animation

The following creates a triangle that transitions its color from black to red.
The VectorDrawable, filename: triangle_vector_drawable.xml

<vector xmlns:android="http://schemas.android.com/apk/res/android"
    android:width="24dp"
    android:height="24dp"
    android:viewportWidth="24.0"
    android:viewportHeight="24.0">

    <path
        android:name="triangle"
        android:fillColor="@android:color/black"
        android:pathData="M0,24 l12,-24 l12,24 z"/>

</vector>

The objectAnimator, filename: color_change_animator.xml

<objectAnimator xmlns:android="http://schemas.android.com/apk/res/android"
    android:propertyName="fillColor"
    android:duration="2000"
    android:repeatCount="infinite"
    android:valueFrom="@android:color/black"
    android:valueTo="@android:color/holo_red_light"/>

The AnimatedVectorDrawable, filename: triangle_animated_vector.xml

<animated-vector xmlns:android="http://schemas.android.com/apk/res/android"
    android:drawable="@drawable/triangle_vector_drawable">

    <target
        android:animation="@animator/color_change_animator"
        android:name="triangle"/>

</animated-vector>

Note that the <target> specifies android:name="triangle" which matches the <path> in the VectorDrawable. A VectorDrawable may contain multiple elements and the android:name property is used to define which element is being targeted.

Result: (animation not reproduced here)

Section 163.4: Using Strokes

Using an SVG stroke makes it easier to create a vector drawable with a unified stroke length, as per the Material Design guidelines:

Consistent stroke weights are key to unifying the overall system icon family. Maintain a 2dp width for all stroke instances, including curves, angles, and both interior and exterior strokes.

So, for example, this is how you would create a "plus" sign using strokes:

<vector xmlns:android="http://schemas.android.com/apk/res/android"
    android:width="24dp"
    android:height="24dp"
    android:viewportHeight="24.0"
    android:viewportWidth="24.0">

    <path
        android:fillColor="#FF000000"
        android:strokeColor="#F000"
        android:strokeWidth="2"
        android:pathData="M12,0 V24 M0,12 H24" />
</vector>

strokeColor defines the color of the stroke.

strokeWidth defines the width (in dp) of the stroke (2dp in this case, as suggested by the guidelines).

pathData is where we describe our SVG image:

M12,0 moves the "cursor" to the position 12,0

V24 creates a vertical line to the position 12,24

etc., see the SVG documentation and this useful "SVG Path" tutorial from w3schools to learn more about the specific path commands.

As a result, we get this no-frills plus sign. (Image omitted.)

This is especially useful for creating an AnimatedVectorDrawable, since you are now operating with a single stroke with a unified length, instead of an otherwise complicated path.
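None of the XML above starts the animation by itself; it has to be kicked off from code. A minimal sketch, assuming the triangle_animated_vector drawable from Section 163.3 is set on an ImageView with the hypothetical id image_view:

ImageView imageView = (ImageView) findViewById(R.id.image_view);
imageView.setImageResource(R.drawable.triangle_animated_vector);

// AnimatedVectorDrawable implements Animatable, so the animation
// can be started without referencing the concrete class
Drawable drawable = imageView.getDrawable();
if (drawable instanceof Animatable) {
    ((Animatable) drawable).start();
}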
Section 163.5: Using <clip-path>

A <clip-path> defines a shape which acts as a window, only allowing parts of a <path> to show if they are within the <clip-path> shape and cutting off the rest.

<vector xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:width="24dp"
    android:height="24dp"
    android:viewportWidth="24.0"
    android:viewportHeight="24.0">

    <clip-path
        android:name="square clip path"
        android:pathData="M6,6 h12 v12 h-12 z"/>
    <path
        android:name="triangle"
        android:fillColor="#FF000000"
        android:pathData="M0,24 l12,-24 l12,24 z"/>

</vector>

In this case the <path> produces a black triangle, but the <clip-path> defines a smaller square shape, only allowing part of the triangle to show through.

Section 163.6: Vector compatibility through AppCompat

A few prerequisites in the build.gradle for vectors to work all the way down to API 7 for VectorDrawables and API 13 for AnimatedVectorDrawables (with some caveats currently):

//Build Tools has to be 24+
buildToolsVersion '24.0.0'

defaultConfig {
    vectorDrawables.useSupportLibrary = true
    generatedDensities = []
}

aaptOptions {
    additionalParameters "--no-version-vectors"
}

dependencies {
    compile 'com.android.support:appcompat-v7:24.1.1'
}

In your layout.xml (note that the attribute is app:srcCompat, not android:src):

<ImageView
    android:id="@+id/android"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    app:srcCompat="@drawable/vector_drawable"
    android:contentDescription="@null" />

Chapter 164: Port Mapping using Cling library in Android

Section 164.1: Mapping a NAT port

String myIp = getIpAddress();
int port = 55555;

//creates a port mapping configuration with the external/internal port, an internal host IP, the protocol and an optional description
PortMapping[] desiredMapping = new PortMapping[2];
desiredMapping[0] = new PortMapping(port, myIp, PortMapping.Protocol.TCP);
desiredMapping[1] = new PortMapping(port, myIp, PortMapping.Protocol.UDP);

//starting the UPnP service
UpnpService upnpService = new UpnpServiceImpl(new AndroidUpnpServiceConfiguration());
RegistryListener registryListener = new PortMappingListener(desiredMapping);
upnpService.getRegistry().addListener(registryListener);
upnpService.getControlPoint().search();

//method for getting the local ip
private String getIpAddress() {
    String ip = "";
    try {
        Enumeration<NetworkInterface> enumNetworkInterfaces = NetworkInterface.getNetworkInterfaces();
        while (enumNetworkInterfaces.hasMoreElements()) {
            NetworkInterface networkInterface = enumNetworkInterfaces.nextElement();
            Enumeration<InetAddress> enumInetAddress = networkInterface.getInetAddresses();
            while (enumInetAddress.hasMoreElements()) {
                InetAddress inetAddress = enumInetAddress.nextElement();
                if (inetAddress.isSiteLocalAddress()) {
                    ip += inetAddress.getHostAddress();
                }
            }
        }
    } catch (SocketException e) {
        e.printStackTrace();
        ip += "Something Wrong! " + e.toString() + "\n";
    }
    return ip;
}
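When the mapping is no longer needed, the UPnP stack should be released. A minimal sketch, assuming the upnpService reference from above is kept around (PortMappingListener is designed to remove the mappings it added when the registry shuts down):

// Closes open sockets and stops the background threads of the UPnP stack
upnpService.shutdown();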
" + e.toString() + "\n"; } return ip; } Section 164.2: Adding Cling Support to your Android Project build.gradle repositories { maven { url 'http://4thline.org/m2' } } dependencies { GoalKicker.com Android Notes for Professionals 858 // Cling compile 'org.fourthline.cling:cling-support:2.1.0' //Other dependencies required by Cling compile 'org.eclipse.jetty:jetty-server:8.1.18.v20150929' compile 'org.eclipse.jetty:jetty-servlet:8.1.18.v20150929' compile 'org.eclipse.jetty:jetty-client:8.1.18.v20150929' compile 'org.slf4j:slf4j-jdk14:1.7.14' } GoalKicker.com Android Notes for Professionals 859 Chapter 165: Creating Overlay (always-ontop) Windows Section 165.1: Popup overlay In order to put your view on top of every application, you have to assign your view to the corresponding window manager. For that you need the system alert permission, which can be requested by adding the following line to your manifest le: <uses-permission android:name="android.permission.SYSTEM_ALERT_WINDOW" /> Note: If your application gets destroyed, your view will be removed from the window manager. Therefore, it is better to create the view and assign it to the window manager by a foreground service. Assigning a view to the WindowManager You can retrieve a window manager instance as follows: WindowManager mWindowManager = (WindowManager) mContext.getSystemService(Context.WINDOW_SERVICE); In order to dene the position of your view, you have to create some layout parameters as follows: WindowManager.LayoutParams mLayoutParams = new WindowManager.LayoutParams( ViewGroup.LayoutParams.MATCH_PARENT, ViewGroup.LayoutParams.MATCH_PARENT, WindowManager.LayoutParams.TYPE_PHONE, WindowManager.LayoutParams.FLAG_TURN_SCREEN_ON, PixelFormat.TRANSLUCENT); mLayoutParams.gravity = Gravity.CENTER_HORIZONTAL | Gravity.CENTER_VERTICAL; Now, you can assign your view together with the created layout parameters to the window manager instance as follows: mWindowManager.addView(yourView, mLayoutParams); Voila! Your view has been successfully placed on top of all other applications. Note: You view will not be put on top of the keyguard. Section 165.2: Granting SYSTEM_ALERT_WINDOW Permission on android 6.0 and above From android 6.0 this permission needs to grant dynamically, <uses-permission android:name="android.permission.SYSTEM_ALERT_WINDOW"/> Throwing below permission denied error on 6.0, Caused by: android.view.WindowManager$BadTokenException: Unable to add window android.view.ViewRootImpl$W@86fb55b -- permission denied for this window type Solution: GoalKicker.com Android Notes for Professionals 860 Requesting Overlay permission as below, if(!Settings.canDrawOverlays(this)){ // ask for setting Intent intent = new Intent(Settings.ACTION_MANAGE_OVERLAY_PERMISSION, Uri.parse("package:" + getPackageName())); startActivityForResult(intent, REQUEST_OVERLAY_PERMISSION); } Check for the result, @Override protected void onActivityResult(int requestCode, int resultCode, Intent data) { if (requestCode == REQUEST_OVERLAY_PERMISSION) { if (Settings.canDrawOverlays(this)) { // permission granted... }else{ // permission not granted... } } } GoalKicker.com Android Notes for Professionals 861 Chapter 166: ExoPlayer Section 166.1: Add ExoPlayer to the project Via jCenter including the following in your project's build.gradle le: compile 'com.google.android.exoplayer:exoplayer:rX.X.X' where rX.X.X is the your preferred version. For the latest version, see the project's Releases. For more details, see the project on Bintray. 
Section 166.2: Using ExoPlayer

Instantiate your ExoPlayer:

exoPlayer = ExoPlayer.Factory.newInstance(RENDERER_COUNT, minBufferMs, minRebufferMs);

To play audio only you can use these values:

RENDERER_COUNT = 1 //since you want to render simple audio
minBufferMs = 1000
minRebufferMs = 5000

Both buffer values can be tweaked according to your requirements.

Now you have to create a DataSource. When you want to stream mp3 you can use the DefaultUriDataSource. You have to pass the Context and a UserAgent. To keep it simple, play a local file and pass null as the userAgent:

DataSource dataSource = new DefaultUriDataSource(context, null);

Then create the sampleSource:

ExtractorSampleSource sampleSource = new ExtractorSampleSource(
        uri, dataSource, new Mp3Extractor(), RENDERER_COUNT, requestedBufferSize);

uri points to your file; as an Extractor you can use a simple default Mp3Extractor if you want to play mp3. requestedBufferSize can again be tweaked according to your requirements. Use 5000 for example.

Now you can create your audio track renderer using the sample source as follows:

MediaCodecAudioTrackRenderer audioRenderer = new MediaCodecAudioTrackRenderer(sampleSource);

Finally call prepare on your exoPlayer instance:

exoPlayer.prepare(audioRenderer);

To start playback call:

exoPlayer.setPlayWhenReady(true);

Section 166.3: Main steps to play video & audio using the standard TrackRenderer implementations

// 1. Instantiate the player.
player = ExoPlayer.Factory.newInstance(RENDERER_COUNT);
// 2. Construct renderers.
MediaCodecVideoTrackRenderer videoRenderer = ...
MediaCodecAudioTrackRenderer audioRenderer = ...
// 3. Inject the renderers through prepare.
player.prepare(videoRenderer, audioRenderer);
// 4. Pass the surface to the video renderer.
player.sendMessage(videoRenderer, MediaCodecVideoTrackRenderer.MSG_SET_SURFACE, surface);
// 5. Start playback.
player.setPlayWhenReady(true);
...
player.release(); // Don't forget to release when done!

Chapter 167: XMPP register login and chat simple example

Section 167.1: XMPP register login and chat basic example

Install Openfire or any other chat server on your system or on a server. For more details click here.
Create android project and add these libraries in gradle: compile 'org.igniterealtime.smack:smack-android:4.2.0' compile 'org.igniterealtime.smack:smack-tcp:4.2.0' compile 'org.igniterealtime.smack:smack-im:4.2.0' compile 'org.igniterealtime.smack:smack-android-extensions:4.2.0' Next create one xmpp class from xmpp connection purpose: public class XMPP { public static final int PORT = 5222; private static XMPP instance; private XMPPTCPConnection connection; private static String TAG = "XMPP-EXAMPLE"; public static final String ACTION_LOGGED_IN = "liveapp.loggedin"; private String HOST = "192.168.0.10"; private XMPPTCPConnectionConfiguration buildConfiguration() throws XmppStringprepException { XMPPTCPConnectionConfiguration.Builder builder = XMPPTCPConnectionConfiguration.builder(); builder.setHost(HOST); builder.setPort(PORT); builder.setCompressionEnabled(false); builder.setDebuggerEnabled(true); builder.setSecurityMode(ConnectionConfiguration.SecurityMode.disabled); builder.setSendPresence(true); if (Build.VERSION.SDK_INT >= 14) { builder.setKeystoreType("AndroidCAStore"); // config.setTruststorePassword(null); builder.setKeystorePath(null); } else { builder.setKeystoreType("BKS"); String str = System.getProperty("javax.net.ssl.trustStore"); if (str == null) { str = System.getProperty("java.home") + File.separator + "etc" + File.separator + "security" + File.separator + "cacerts.bks"; } builder.setKeystorePath(str); } DomainBareJid serviceName = JidCreate.domainBareFrom(HOST); builder.setServiceName(serviceName); return builder.build(); } GoalKicker.com Android Notes for Professionals 864 private XMPPTCPConnection getConnection() throws XMPPException, SmackException, IOException, InterruptedException { Log.logDebug(TAG, "Getting XMPP Connect"); if (isConnected()) { Log.logDebug(TAG, "Returning already existing connection"); return this.connection; } long l = System.currentTimeMillis(); try { if(this.connection != null){ Log.logDebug(TAG, "Connection found, trying to connect"); this.connection.connect(); }else{ Log.logDebug(TAG, "No Connection found, trying to create a new connection"); XMPPTCPConnectionConfiguration config = buildConfiguration(); SmackConfiguration.DEBUG = true; this.connection = new XMPPTCPConnection(config); this.connection.connect(); } } catch (Exception e) { Log.logError(TAG,"some issue with getting connection :" + e.getMessage()); } Log.logDebug(TAG, "Connection Properties: " + connection.getHost() + " " + connection.getServiceName()); Log.logDebug(TAG, "Time taken in first time connect: " + (System.currentTimeMillis() - l)); return this.connection; } public static XMPP getInstance() { if (instance == null) { synchronized (XMPP.class) { if (instance == null) { instance = new XMPP(); } } } return instance; } public void close() { Log.logInfo(TAG, "Inside XMPP close method"); if (this.connection != null) { this.connection.disconnect(); } } private XMPPTCPConnection connectAndLogin(Context context) { Log.logDebug(TAG, "Inside connect and Login"); if (!isConnected()) { Log.logDebug(TAG, "Connection not connected, trying to login and connect"); try { // Save username and password then use here String username = AppSettings.getUser(context); String password = AppSettings.getPassword(context); this.connection = getConnection(); Log.logDebug(TAG, "XMPP username :" + username); Log.logDebug(TAG, "XMPP password :" + password); this.connection.login(username, password); Log.logDebug(TAG, "Connect and Login method, Login successful"); GoalKicker.com Android Notes for 
Professionals 865 context.sendBroadcast(new Intent(ACTION_LOGGED_IN)); } catch (XMPPException localXMPPException) { Log.logError(TAG, "Error in Connect and Login Method"); localXMPPException.printStackTrace(); } catch (SmackException e) { Log.logError(TAG, "Error in Connect and Login Method"); e.printStackTrace(); } catch (IOException e) { Log.logError(TAG, "Error in Connect and Login Method"); e.printStackTrace(); } catch (InterruptedException e) { Log.logError(TAG, "Error in Connect and Login Method"); e.printStackTrace(); } catch (IllegalArgumentException e) { Log.logError(TAG, "Error in Connect and Login Method"); e.printStackTrace(); } catch (Exception e) { Log.logError(TAG, "Error in Connect and Login Method"); e.printStackTrace(); } } Log.logInfo(TAG, "Inside getConnection - Returning connection"); return this.connection; } public boolean isConnected() { return (this.connection != null) && (this.connection.isConnected()); } public EntityFullJid getUser() { if (isConnected()) { return connection.getUser(); } else { return null; } } public void login(String user, String pass, String username) throws XMPPException, SmackException, IOException, InterruptedException, PurplKiteXMPPConnectException { Log.logInfo(TAG, "inside XMPP getlogin Method"); long l = System.currentTimeMillis(); XMPPTCPConnection connect = getConnection(); if (connect.isAuthenticated()) { Log.logInfo(TAG, "User already logged in"); return; } Log.logInfo(TAG, "Time taken to connect: " + (System.currentTimeMillis() - l)); l = System.currentTimeMillis(); try{ connect.login(user, pass); }catch (Exception e){ Log.logError(TAG, "Issue in login, check the stacktrace"); e.printStackTrace(); } Log.logInfo(TAG, "Time taken to login: " + (System.currentTimeMillis() - l)); Log.logInfo(TAG, "login step passed"); GoalKicker.com Android Notes for Professionals 866 PingManager pingManager = PingManager.getInstanceFor(connect); pingManager.setPingInterval(5000); } public void register(String user, String pass) throws XMPPException, SmackException.NoResponseException, SmackException.NotConnectedException { Log.logInfo(TAG, "inside XMPP register method, " + user + " : " + pass); long l = System.currentTimeMillis(); try { AccountManager accountManager = AccountManager.getInstance(getConnection()); accountManager.sensitiveOperationOverInsecureConnection(true); accountManager.createAccount(Localpart.from(user), pass); } catch (SmackException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } catch (InterruptedException e) { e.printStackTrace(); } catch (PurplKiteXMPPConnectException e) { e.printStackTrace(); } Log.logInfo(TAG, "Time taken to register: " + (System.currentTimeMillis() - l)); } public void addStanzaListener(Context context, StanzaListener stanzaListener){ XMPPTCPConnection connection = connectAndLogin(context); connection.addAsyncStanzaListener(stanzaListener, null); } public void removeStanzaListener(Context context, StanzaListener stanzaListener){ XMPPTCPConnection connection = connectAndLogin(context); connection.removeAsyncStanzaListener(stanzaListener); } public void addChatListener(Context context, ChatManagerListener chatManagerListener){ ChatManager.getInstanceFor(connectAndLogin(context)) .addChatListener(chatManagerListener); } public void removeChatListener(Context context, ChatManagerListener chatManagerListener){ ChatManager.getInstanceFor(connectAndLogin(context)).removeChatListener(chatManagerListener); } public void getSrvDeliveryManager(Context context){ ServiceDiscoveryManager sdm = 
ServiceDiscoveryManager .getInstanceFor(XMPP.getInstance().connectAndLogin( context)); //sdm.addFeature("http://jabber.org/protocol/disco#info"); //sdm.addFeature("jabber:iq:privacy"); sdm.addFeature("jabber.org/protocol/si"); sdm.addFeature("http://jabber.org/protocol/si"); sdm.addFeature("http://jabber.org/protocol/disco#info"); sdm.addFeature("jabber:iq:privacy"); } public String getUserLocalPart(Context context){ return connectAndLogin(context).getUser().getLocalpart().toString(); } GoalKicker.com Android Notes for Professionals 867 public EntityFullJid getUser(Context context){ return connectAndLogin(context).getUser(); } public Chat getThreadChat(Context context, String party1, String party2){ Chat chat = ChatManager.getInstanceFor( XMPP.getInstance().connectAndLogin(context)) .getThreadChat(party1 + "-" + party2); return chat; } public Chat createChat(Context context, EntityJid jid, String party1, String party2, ChatMessageListener messageListener){ Chat chat = ChatManager.getInstanceFor( XMPP.getInstance().connectAndLogin(context)) .createChat(jid, party1 + "-" + party2, messageListener); return chat; } public void sendPacket(Context context, Stanza packet){ try { connectAndLogin(context).sendStanza(packet); } catch (SmackException.NotConnectedException e) { e.printStackTrace(); } catch (InterruptedException e) { e.printStackTrace(); } } } Finally, add this activiy: private UserLoginTask mAuthTask = null; private ChatManagerListener chatListener; private Chat chat; private Jid opt_jid; private ChatMessageListener messageListener; private StanzaListener packetListener; private boolean register(final String paramString1,final String paramString2) { try { XMPP.getInstance().register(paramString1, paramString2); return true; } catch (XMPPException localXMPPException) { localXMPPException.printStackTrace(); } catch (SmackException.NoResponseException e) { e.printStackTrace(); } catch (SmackException.NotConnectedException e) { e.printStackTrace(); } return false; } private boolean login(final String user,final String pass,final String username) { try { XMPP.getInstance().login(user, pass, username); GoalKicker.com Android Notes for Professionals 868 sendBroadcast(new Intent("liveapp.loggedin")); return true; } catch (Exception e) { e.printStackTrace(); try { XMPP.getInstance() .login(user, pass, username); sendBroadcast(new Intent("liveapp.loggedin")); return true; } catch (XMPPException e1) { e1.printStackTrace(); } catch (SmackException e1) { e1.printStackTrace(); } catch (InterruptedException e1) { e1.printStackTrace(); } catch (IOException e1) { e1.printStackTrace(); }catch (Exception e1){ e1.printStackTrace(); } } return false; } public class UserLoginTask extends AsyncTask<Void, Void, Boolean> { public UserLoginTask() { } protected Boolean doInBackground(Void... 
 paramVarArgs) {
        String mEmail = "abc";
        String mUsername = "abc";
        String mPassword = "welcome";
        if (register(mEmail, mPassword)) {
            try {
                XMPP.getInstance().close();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
        return login(mEmail, mPassword, mUsername);
    }

    protected void onCancelled() {
        mAuthTask = null;
    }

    @Override
    protected void onPreExecute() {
        super.onPreExecute();
    }

    protected void onPostExecute(Boolean success) {
        mAuthTask = null;
        try {
            if (success) {
                messageListener = new ChatMessageListener() {
                    @Override
                    public void processMessage(Chat chat, Message message) {
                        // here you only get messages from users you are connected with
                    }
                };
                packetListener = new StanzaListener() {
                    @Override
                    public void processPacket(Stanza packet)
                            throws SmackException.NotConnectedException, InterruptedException {
                        if (packet instanceof Message) {
                            final Message message = (Message) packet;
                            // here you get all messages sent by anybody
                        }
                    }
                };
                chatListener = new ChatManagerListener() {
                    @Override
                    public void chatCreated(Chat chatCreated, boolean local) {
                        onChatCreated(chatCreated);
                    }
                };
                try {
                    String opt_jidStr = "abc";
                    try {
                        opt_jid = JidCreate.bareFrom(Localpart.from(opt_jidStr), Domainpart.from(HOST));
                    } catch (XmppStringprepException e) {
                        e.printStackTrace();
                    }
                    String addr1 = XMPP.getInstance().getUserLocalPart(getActivity());
                    String addr2 = opt_jid.toString();
                    if (addr1.compareTo(addr2) > 0) {
                        String addr3 = addr2;
                        addr2 = addr1;
                        addr1 = addr3;
                    }
                    chat = XMPP.getInstance().getThreadChat(getActivity(), addr1, addr2);
                    if (chat == null) {
                        chat = XMPP.getInstance().createChat(getActivity(), (EntityJid) opt_jid,
                                addr1, addr2, messageListener);
                        PurplkiteLogs.logInfo(TAG, "chat value single chat 1 :" + chat);
                    } else {
                        chat.addMessageListener(messageListener);
                        PurplkiteLogs.logInfo(TAG, "chat value single chat 2:" + chat);
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
                XMPP.getInstance().addStanzaListener(getActivity(), packetListener);
                XMPP.getInstance().addChatListener(getActivity(), chatListener);
                XMPP.getInstance().getSrvDeliveryManager(getActivity());
            } else {
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

/**
 * user attemptLogin for xmpp
 */
private void attemptLogin() {
    if (mAuthTask != null) {
        return;
    }
    boolean cancel = false;
    View focusView = null;
    if (cancel) {
        focusView.requestFocus();
    } else {
        try {
            mAuthTask = new UserLoginTask();
            mAuthTask.execute((Void) null);
        } catch (Exception e) {
        }
    }
}

void onChatCreated(Chat chatCreated) {
    if (chat != null) {
        if (chat.getParticipant().getLocalpart().toString().equals(
                chatCreated.getParticipant().getLocalpart().toString())) {
            chat.removeMessageListener(messageListener);
            chat = chatCreated;
            chat.addMessageListener(messageListener);
        }
    } else {
        chat = chatCreated;
        chat.addMessageListener(messageListener);
    }
}

private void sendMessage(String message) {
    if (chat != null) {
        try {
            chat.sendMessage(message);
        } catch (Exception e) {
            // includes SmackException.NotConnectedException
            e.printStackTrace();
        }
    }
}

@Override
public void onDestroy() {
    super.onDestroy();
    try {
        XMPP.getInstance().removeChatListener(getActivity(), chatListener);
        if (chat != null && messageListener != null) {
            XMPP.getInstance().removeStanzaListener(getActivity(), packetListener);
            chat.removeMessageListener(messageListener);
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}

Make sure the internet
permission is added in your manifest file.

Chapter 168: Android Authenticator

Section 168.1: Basic Account Authenticator Service

The Android Account Authenticator system can be used to make the client authenticate with a remote server. Three pieces of information are required:

A service, triggered by the android.accounts.AccountAuthenticator intent action. Its onBind method should return a subclass of AbstractAccountAuthenticator.
An activity to prompt the user for credentials (a login activity)
An XML resource file to describe the account

1. The service:

Place the following permissions in your AndroidManifest.xml:

<uses-permission android:name="android.permission.GET_ACCOUNTS" />
<uses-permission android:name="android.permission.MANAGE_ACCOUNTS" />
<uses-permission android:name="android.permission.AUTHENTICATE_ACCOUNTS" />
<uses-permission android:name="android.permission.USE_CREDENTIALS" />

Declare the service in the manifest file:

<service android:name="com.example.MyAuthenticationService">
    <intent-filter>
        <action android:name="android.accounts.AccountAuthenticator" />
    </intent-filter>
    <meta-data
        android:name="android.accounts.AccountAuthenticator"
        android:resource="@xml/authenticator" />
</service>

Note that the android.accounts.AccountAuthenticator action is included within the intent-filter tag. The XML resource (named authenticator here) is specified in the meta-data tag.

The service class:

public class MyAuthenticationService extends Service {

    private static final Object lock = new Object();
    private MyAuthenticator mAuthenticator;

    public MyAuthenticationService() {
        super();
    }

    @Override
    public void onCreate() {
        super.onCreate();
        synchronized (lock) {
            if (mAuthenticator == null) {
                mAuthenticator = new MyAuthenticator(this);
            }
        }
    }

    @Override
    public IBinder onBind(Intent intent) {
        return mAuthenticator.getIBinder();
    }
}

2. The XML resource:

<account-authenticator xmlns:android="http://schemas.android.com/apk/res/android"
    android:accountType="com.example.account"
    android:icon="@drawable/appicon"
    android:smallIcon="@drawable/appicon"
    android:label="@string/app_name" />

Do not assign a raw string directly to android:label, and do not reference missing drawables; otherwise the app will crash without warning.

3.
Extend the AbstractAccountAuthenticator class:

public class MyAuthenticator extends AbstractAccountAuthenticator {

    private Context mContext;

    public MyAuthenticator(Context context) {
        super(context);
        mContext = context;
    }

    @Override
    public Bundle addAccount(AccountAuthenticatorResponse response, String accountType,
                             String authTokenType, String[] requiredFeatures,
                             Bundle options) throws NetworkErrorException {
        Intent intent = new Intent(mContext, LoginActivity.class);
        intent.putExtra(AccountManager.KEY_ACCOUNT_AUTHENTICATOR_RESPONSE, response);

        Bundle bundle = new Bundle();
        bundle.putParcelable(AccountManager.KEY_INTENT, intent);
        return bundle;
    }

    @Override
    public Bundle confirmCredentials(AccountAuthenticatorResponse response, Account account,
                                     Bundle options) throws NetworkErrorException {
        return null;
    }

    @Override
    public Bundle editProperties(AccountAuthenticatorResponse response, String accountType) {
        return null;
    }

    @Override
    public Bundle getAuthToken(AccountAuthenticatorResponse response, Account account,
                               String authTokenType, Bundle options) throws NetworkErrorException {
        return null;
    }

    @Override
    public String getAuthTokenLabel(String authTokenType) {
        return null;
    }

    @Override
    public Bundle hasFeatures(AccountAuthenticatorResponse response, Account account,
                              String[] features) throws NetworkErrorException {
        return null;
    }

    @Override
    public Bundle updateCredentials(AccountAuthenticatorResponse response, Account account,
                                    String authTokenType, Bundle options) throws NetworkErrorException {
        return null;
    }
}

The addAccount() method in the AbstractAccountAuthenticator class is important, as it is called when adding an account from the "Add Account" screen under Settings. AccountManager.KEY_ACCOUNT_AUTHENTICATOR_RESPONSE is important too, as it includes the AccountAuthenticatorResponse object that is needed to return the account keys upon successful user verification.

Chapter 169: AudioManager

Section 169.1: Requesting Transient Audio Focus

audioManager = (AudioManager) getSystemService(Context.AUDIO_SERVICE);

audioManager.requestAudioFocus(audioListener, AudioManager.STREAM_MUSIC,
        AudioManager.AUDIOFOCUS_GAIN_TRANSIENT);

changedListener = new AudioManager.OnAudioFocusChangeListener() {
    @Override
    public void onAudioFocusChange(int focusChange) {
        if (focusChange == AudioManager.AUDIOFOCUS_GAIN) {
            // You now have the audio focus and may play sound.
            // When the sound has been played you give the focus back.
            audioManager.abandonAudioFocus(changedListener);
        }
    }
}

Section 169.2: Requesting Audio Focus

audioManager = (AudioManager) getSystemService(Context.AUDIO_SERVICE);

audioManager.requestAudioFocus(audioListener, AudioManager.STREAM_MUSIC,
        AudioManager.AUDIOFOCUS_GAIN);

changedListener = new AudioManager.OnAudioFocusChangeListener() {
    @Override
    public void onAudioFocusChange(int focusChange) {
        if (focusChange == AudioManager.AUDIOFOCUS_GAIN) {
            // You now have the audio focus and may play sound.
        } else if (focusChange == AudioManager.AUDIOFOCUS_REQUEST_FAILED) {
            // Handle the failure.
        }
    }
}

Chapter 170: AudioTrack

Section 170.1: Generate tone of a specific frequency

To play a sound with a specific tone, we first have to create a sine wave. This is done in the following way.
final int duration = 10; // duration of the sound in seconds
final int sampleRate = 22050; // Hz (maximum representable frequency is 7902.13 Hz (B8))
final int numSamples = duration * sampleRate;
final double samples[] = new double[numSamples];
final short buffer[] = new short[numSamples];
final double note[] = {440}; // frequency in Hz of the tone to generate (here A4)

for (int i = 0; i < numSamples; ++i) {
    samples[i] = Math.sin(2 * Math.PI * i / (sampleRate / note[0])); // sine wave
    buffer[i] = (short) (samples[i] * Short.MAX_VALUE); // higher amplitude increases volume
}

Now we have to configure AudioTrack to play in accordance with the generated buffer. It is done in the following manner:

AudioTrack audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
        buffer.length * 2, // buffer size in bytes: 2 bytes per 16-bit PCM sample
        AudioTrack.MODE_STATIC);

Write the generated buffer and play the track:

audioTrack.write(buffer, 0, buffer.length);
audioTrack.play();

Chapter 171: Job Scheduling

Section 171.1: Basic usage

Create a new JobService

This is done by extending the JobService class and implementing/overriding the required methods onStartJob() and onStopJob().

public class MyJobService extends JobService {

    final String TAG = getClass().getSimpleName();

    @Override
    public boolean onStartJob(JobParameters jobParameters) {
        Log.i(TAG, "Job started");

        // ... your code here ...

        jobFinished(jobParameters, false); // signal that we're done and don't want to reschedule
        return false; // finished: no more work to be done for this job
    }

    @Override
    public boolean onStopJob(JobParameters jobParameters) {
        Log.w(TAG, "Job stopped");
        return false;
    }
}

Add the new JobService to your AndroidManifest.xml

The following step is mandatory, otherwise you won't be able to run your job: declare your MyJobService class as a new <service> element between <application> </application> in your AndroidManifest.xml.

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example">

    <application
        android:allowBackup="true"
        android:icon="@mipmap/ic_launcher"
        android:label="@string/app_name"
        android:supportsRtl="true"
        android:theme="@style/AppTheme">

        <activity android:name=".MainActivity">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>

        <service
            android:name=".MyJobService"
            android:permission="android.permission.BIND_JOB_SERVICE" />
    </application>
</manifest>

Setup and run the job

After you have implemented a new JobService and added it to your AndroidManifest.xml, you can continue with the final steps.

onButtonClick_startJob() prepares and runs a periodic job. Besides periodic jobs, JobInfo.Builder allows you to specify many other settings and constraints. For example, you can define that a plugged-in charger or a network connection is required to run the job; a brief sketch of this follows below.
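To make those constraints concrete, here is a minimal sketch (the job ID 2 is arbitrary, jobScheduler is the JobScheduler instance used in the activity below, and setRequiresCharging, setRequiredNetworkType and setPeriodic are standard JobInfo.Builder methods):

ComponentName jobService = new ComponentName(getApplicationContext(), MyJobService.class);
JobInfo constrainedJob = new JobInfo.Builder(2, jobService)
        .setRequiresCharging(true) // only run while a charger is plugged in
        .setRequiredNetworkType(JobInfo.NETWORK_TYPE_UNMETERED) // only run on an unmetered network, e.g. Wi-Fi
        .setPeriodic(15 * 60 * 1000) // still periodic, here every 15 minutes
        .build();
jobScheduler.schedule(constrainedJob);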
onButtonClick_stopJob() cancels all running jobs.

public class MainActivity extends AppCompatActivity {

    final String TAG = getClass().getSimpleName();

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
    }

    public void onButtonClick_startJob(View v) {
        // get the JobScheduler instance from the current context
        JobScheduler jobScheduler = (JobScheduler) getSystemService(JOB_SCHEDULER_SERVICE);
        // MyJobService provides the implementation for the job
        ComponentName jobService = new ComponentName(getApplicationContext(), MyJobService.class);
        // define that the job will run periodically in intervals of 10 seconds
        JobInfo jobInfo = new JobInfo.Builder(1, jobService).setPeriodic(10 * 1000).build();
        // schedule/start the job
        int result = jobScheduler.schedule(jobInfo);
        if (result == JobScheduler.RESULT_SUCCESS)
            Log.d(TAG, "Successfully scheduled job: " + result);
        else
            Log.e(TAG, "RESULT_FAILURE: " + result);
    }

    public void onButtonClick_stopJob(View v) {
        JobScheduler jobScheduler = (JobScheduler) getSystemService(JOB_SCHEDULER_SERVICE);
        Log.d(TAG, "Stopping all jobs...");
        jobScheduler.cancelAll(); // cancel all potentially running jobs
    }
}

After calling onButtonClick_startJob(), the job will run in intervals of approximately 10 seconds, even when the app is in the paused state (the user pressed the home button and the app is no longer visible).

Instead of cancelling all running jobs inside onButtonClick_stopJob(), you can also call jobScheduler.cancel() to cancel a specific job based on its job ID.

Chapter 172: Accounts and AccountManager

Section 172.1: Understanding custom accounts/authentication

The following example is a high-level coverage of the key concepts and a basic skeletal setup:

1. Collect credentials from the user (usually from a login screen you've created)
2. Authenticate the credentials with the server (stores custom authentication)
3. Store the credentials on the device

Extend an AbstractAccountAuthenticator (primarily used to retrieve authentication tokens and re-authenticate them):

public class AccountAuthenticator extends AbstractAccountAuthenticator {

    @Override
    public Bundle addAccount(AccountAuthenticatorResponse response, String accountType,
                             String authTokenType, String[] requiredFeatures, Bundle options) {
        // intent to start the login activity
        return null;
    }

    @Override
    public Bundle confirmCredentials(AccountAuthenticatorResponse response, Account account,
                                     Bundle options) {
        return null;
    }

    @Override
    public Bundle editProperties(AccountAuthenticatorResponse response, String accountType) {
        return null;
    }

    @Override
    public Bundle getAuthToken(AccountAuthenticatorResponse response, Account account,
                               String authTokenType, Bundle options) throws NetworkErrorException {
        // retrieve authentication tokens from the account manager storage or custom storage,
        // or re-authenticate old tokens and return new ones
        return null;
    }

    @Override
    public String getAuthTokenLabel(String authTokenType) {
        return null;
    }

    @Override
    public Bundle hasFeatures(AccountAuthenticatorResponse response, Account account,
                              String[] features) throws NetworkErrorException {
        // check whether the account supports certain features
        return null;
    }

    @Override
    public Bundle updateCredentials(AccountAuthenticatorResponse response, Account account,
                                    String authTokenType, Bundle options) {
        // when the user's session has expired or their previously available
        // credentials need to be updated, this is the function to do it in
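        // A minimal sketch of that flow (hypothetical, not a fixed recipe):
        //   1. re-authenticate against your backend with the stored credentials;
        //   2. persist the fresh token, e.g.
        //      AccountManager.get(context).setAuthToken(account, authTokenType, newToken);
        //   3. return a Bundle carrying KEY_ACCOUNT_NAME, KEY_ACCOUNT_TYPE and KEY_AUTHTOKEN.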
        return null;
    }
}

Create a service (the account manager framework connects to the extended AbstractAccountAuthenticator through the service interface):

public class AuthenticatorService extends Service {

    private AccountAuthenticator authenticator;

    @Override
    public void onCreate() {
        authenticator = new AccountAuthenticator(this);
    }

    @Override
    public IBinder onBind(Intent intent) {
        return authenticator.getIBinder();
    }
}

Authenticator XML configuration (required by the account manager framework; this is what you'll see inside Settings -> Accounts on Android):

<account-authenticator xmlns:android="http://schemas.android.com/apk/res/android"
    android:accountType="rename.with.your.applicationid"
    android:icon="@drawable/app_icon"
    android:label="@string/app_name"
    android:smallIcon="@drawable/app_icon" />

Changes to the AndroidManifest.xml (brings all the above concepts together to make them usable programmatically through the AccountManager):

<application ...>
    <service
        android:name=".authenticator.AccountAuthenticatorService"
        android:exported="false"
        android:process=":authentication">
        <intent-filter>
            <action android:name="android.accounts.AccountAuthenticator"/>
        </intent-filter>
        <meta-data
            android:name="android.accounts.AccountAuthenticator"
            android:resource="@xml/authenticator"/>
    </service>
</application>

The next example will show how to make use of this setup.

Chapter 173: Integrate OpenCV into Android Studio

Section 173.1: Instructions

Tested with A.S. v1.4.1, but this should work with newer versions too.

1. Create a new Android Studio project using the project wizard (Menu:/File/New Project):
   Call it "cvtest1"
   Form factor: API 19, Android 4.4 (KitKat)
   Blank Activity named MainActivity
   You should have a cvtest1 directory where this project is stored. (The title bar of Android Studio shows you where cvtest1 is when you open the project.)

2. Verify that your app runs correctly. Try changing something like the "Hello World" text to confirm that the build/test cycle is OK for you. (I'm testing with an emulator of an API 19 device.)

3. Download the OpenCV package for Android v3.1.0 and unzip it in some temporary directory somewhere. (Make sure it is the package specifically for Android and not just the OpenCV for Java package.) I'll call this directory "unzip-dir". Below unzip-dir you should have an sdk/native/libs directory with subdirectories that start with things like arm..., mips... and x86... (one for each type of "architecture" Android runs on).

4. From Android Studio import OpenCV into your project as a module: Menu:/File/New/Import_Module:
   Source directory: {unzip-dir}/sdk/java
   Module name: Android Studio automatically fills in this field with openCVLibrary310 (the exact name probably doesn't matter, but we'll go with this).
   Click on Next. You get a screen with three checkboxes and questions about jars, libraries and import options. All three should be checked. Click on Finish.
   Android Studio starts to import the module and you are shown an import-summary.txt file that has a list of what was not imported (mostly javadoc files) and other pieces of information.
   But you also get an error message saying failed to find target with hash string 'android-14'.... This happens because the build.gradle file in the OpenCV zip file you downloaded says to compile using Android API version 14, which by default you don't have with Android Studio v1.4.1.

5.
Open the project structure dialogue (Menu:/File/Project_Structure). Select the "app" module, click on the Dependencies tab and add :openCVLibrary310 as a Module Dependency. When you select Add/Module_Dependency it should appear in the list of modules you can add. It will now show up as a dependency, but you will get a few more cannot-find-android-14 errors in the event log.

6. Look in the build.gradle file for your app module. There are multiple build.gradle files in an Android project. The one you want is in the cvtest1/app directory, and from the project view it looks like build.gradle (Module: app). Note the values of these four fields:

compileSdkVersion (mine says 23)
buildToolsVersion (mine says 23.0.2)
minSdkVersion (mine says 19)
targetSdkVersion (mine says 23)

7. Your project now has a cvtest1/OpenCVLibrary310 directory, but it is not visible from the project view. Use some other tool, such as any file manager, and go to this directory. You can also switch the project view from Android to Project Files and you can find this directory as shown in this screenshot:

Inside there is another build.gradle file (it's highlighted in the above screenshot). Update this file with the four values from step 6.

8. Resync your project and then clean/rebuild it (Menu:/Build/Clean_Project). It should clean and build without errors, and you should see many references to :openCVLibrary310 in the 0:Messages screen.

At this point the module should appear in the project hierarchy as openCVLibrary310, just like app. (Note that in that little drop-down menu I switched back from Project View to Android View.) You should also see an additional build.gradle file under "Gradle Scripts", but I find the Android Studio interface a little bit glitchy and sometimes it does not do this right away. So try resyncing, cleaning, even restarting Android Studio. You should see the openCVLibrary310 module with all the OpenCV functions under java like in this screenshot:

9. Copy the {unzip-dir}/sdk/native/libs directory (and everything under it) to your Android project, to cvtest1/OpenCVLibrary310/src/main/, and then rename your copy from libs to jniLibs. You should now have a cvtest1/OpenCVLibrary310/src/main/jniLibs directory. Resync your project and this directory should now appear in the project view under openCVLibrary310.

10. Go to the onCreate method of MainActivity.java and append this code:

if (!OpenCVLoader.initDebug()) {
    Log.e(this.getClass().getSimpleName(), "OpenCVLoader.initDebug(), not working.");
} else {
    Log.d(this.getClass().getSimpleName(), "OpenCVLoader.initDebug(), working.");
}

Then run your application. You should see lines like this in the Android Monitor:

(I don't know why that line with the error message is there.)

11. Now try to actually use some OpenCV code. In the example below I copied a .jpg file to the cache directory of the cvtest1 application on the Android emulator. The code below loads this image, runs the Canny edge detection algorithm and then writes the results back to a .png file in the same directory. Put this code just below the code from the previous step and alter it to match your own files/directories.
String inputFileName = "simm_01";
String inputExtension = "jpg";
String inputDir = getCacheDir().getAbsolutePath(); // use the cache directory for i/o
String outputDir = getCacheDir().getAbsolutePath();
String outputExtension = "png";
String inputFilePath = inputDir + File.separator + inputFileName + "." + inputExtension;

Log.d(this.getClass().getSimpleName(), "loading " + inputFilePath + "...");
Mat image = Imgcodecs.imread(inputFilePath);
Log.d(this.getClass().getSimpleName(), "width of " + inputFileName + ": " + image.width());
// if width is 0 then it did not read your image.

// for the Canny edge detection algorithm, play with these to see different results
int threshold1 = 70;
int threshold2 = 100;

Mat im_canny = new Mat(); // you have to initialize the output image before giving it to the Canny method
Imgproc.Canny(image, im_canny, threshold1, threshold2);
String cannyFilename = outputDir + File.separator + inputFileName + "_canny-" + threshold1
        + "-" + threshold2 + "." + outputExtension;
Log.d(this.getClass().getSimpleName(), "Writing " + cannyFilename);
Imgcodecs.imwrite(cannyFilename, im_canny);

12. Run your application. Your emulator should create a black and white "edge" image. You can use the Android Device Monitor to retrieve the output, or write an activity to show it.

Chapter 174: MVVM (Architecture)

Section 174.1: MVVM Example using DataBinding Library

The whole point of MVVM is to separate layers containing logic from the view layer. On Android we can use the DataBinding Library to help us with this and make most of our logic unit-testable without worrying about Android dependencies.

In this example I'll show the central components for a stupidly simple app that does the following:

At start-up, fake a network call and show a loading spinner
Show a view with a click counter TextView, a message TextView, and a button to increment the counter
On button click, update the counter, and update the counter color and message text if the counter reaches some number

Let's start with the view layer:

activity_main.xml:

If you're unfamiliar with how DataBinding works, you should probably take 10 minutes to make yourself familiar with it. As you can see, all fields you would usually update with setters are bound to functions on the viewModel variable. If you've got a question about the android:visibility or app:textColor properties, check the 'Remarks' section.

<layout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools">

    <data>
        <import type="android.view.View" />
        <variable
            name="viewModel"
            type="de.walled.mvvmtest.viewmodel.ClickerViewModel"/>
    </data>

    <RelativeLayout
        android:id="@+id/activity_main"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:padding="@dimen/activity_horizontal_margin"
        tools:context="de.walled.mvvmtest.view.MainActivity">

        <LinearLayout
            android:id="@+id/click_counter"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:layout_centerHorizontal="true"
            android:layout_marginTop="60dp"
            android:visibility="@{viewModel.contentVisible ?
View.VISIBLE : View.GONE}"
            android:padding="8dp"
            android:orientation="horizontal">

            <TextView
                android:id="@+id/number_of_clicks"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                style="@style/ClickCounter"
                android:text="@{viewModel.numberOfClicks}"
                android:textAlignment="center"
                app:textColor="@{viewModel.counterColor}"
                tools:text="8"
                tools:textColor="@color/red" />

            <TextView
                android:id="@+id/static_label"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:layout_marginLeft="4dp"
                android:layout_marginStart="4dp"
                style="@style/ClickCounter"
                android:text="@string/label.clicks"
                app:textColor="@{viewModel.counterColor}"
                android:textAlignment="center"
                tools:textColor="@color/red" />
        </LinearLayout>

        <TextView
            android:id="@+id/message"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:layout_below="@id/click_counter"
            android:layout_centerHorizontal="true"
            android:visibility="@{viewModel.contentVisible ? View.VISIBLE : View.GONE}"
            android:text="@{viewModel.labelText}"
            android:textAlignment="center"
            android:textSize="18sp"
            tools:text="You're bad and you should feel bad!" />

        <Button
            android:id="@+id/clicker"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:layout_below="@id/message"
            android:layout_centerHorizontal="true"
            android:layout_marginTop="8dp"
            android:visibility="@{viewModel.contentVisible ? View.VISIBLE : View.GONE}"
            android:padding="8dp"
            android:text="@string/label.button"
            android:onClick="@{() -> viewModel.onClickIncrement()}" />

        <android.support.v4.widget.ContentLoadingProgressBar
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:layout_marginTop="90dp"
            android:layout_centerHorizontal="true"
            style="@android:style/Widget.ProgressBar.Inverse"
            android:visibility="@{viewModel.loadingVisible ? View.VISIBLE : View.GONE}"
            android:indeterminate="true" />

    </RelativeLayout>
</layout>

Next the model layer. Here I have:

two fields that represent the state of the app
getters to read the number of clicks and the state of excitement
a method to increment my click count
a method to restore some previous state (important for orientation changes)

Also I define here a 'state of excitement' that depends on the number of clicks. This will later be used to update the color and message on the view. It is important to note that there are no assumptions made in the model about how the state might be displayed to the user!

ClickerModel.java

import com.google.common.base.Optional;
import de.walled.mvvmtest.viewmodel.ViewState;

public class ClickerModel implements IClickerModel {

    private int numberOfClicks;
    private Excitement stateOfExcitement;

    public void incrementClicks() {
        numberOfClicks += 1;
        updateStateOfExcitement();
    }

    public int getNumberOfClicks() {
        return Optional.fromNullable(numberOfClicks).or(0);
    }

    public Excitement getStateOfExcitement() {
        return Optional.fromNullable(stateOfExcitement).or(Excitement.BOO);
    }

    public void restoreState(ViewState state) {
        numberOfClicks = state.getNumberOfClicks();
        updateStateOfExcitement();
    }

    private void updateStateOfExcitement() {
        if (numberOfClicks < 10) {
            stateOfExcitement = Excitement.BOO;
        } else if (numberOfClicks <= 20) {
            stateOfExcitement = Excitement.MEH;
        } else {
            stateOfExcitement = Excitement.WOOHOO;
        }
    }
}

Next the ViewModel.
This will trigger changes on the model and format data from the model to show it on the view. Note that it is here where we evaluate which GUI representation is appropriate for the state given by the model (resolveCounterColor and resolveLabelText). So we could, for example, easily implement an UnderachieverClickerModel that has lower thresholds for the state of excitement without touching any code in the viewModel or view.

Also note that the ViewModel does not hold any references to view objects. All properties are bound via the @Bindable annotations and updated when either notifyChange() (signals that all properties need to be updated) or notifyPropertyChanged(BR.propertyName) (signals that this one property needs to be updated) is called.

ClickerViewModel.java

import android.databinding.BaseObservable;
import android.databinding.Bindable;
import android.support.annotation.ColorRes;
import android.support.annotation.StringRes;
import com.android.databinding.library.baseAdapters.BR;
import de.walled.mvvmtest.R;
import de.walled.mvvmtest.api.IClickerApi;
import de.walled.mvvmtest.model.Excitement;
import de.walled.mvvmtest.model.IClickerModel;
import rx.Observable;

public class ClickerViewModel extends BaseObservable {

    private final IClickerApi api;
    boolean isLoading = false;
    private IClickerModel model;

    public ClickerViewModel(IClickerModel model, IClickerApi api) {
        this.model = model;
        this.api = api;
    }

    public void onClickIncrement() {
        model.incrementClicks();
        notifyChange();
    }

    public ViewState getViewState() {
        ViewState viewState = new ViewState();
        viewState.setNumberOfClicks(model.getNumberOfClicks());
        return viewState;
    }

    public Observable<ViewState> loadData() {
        isLoading = true;
        return api.fetchInitialState()
                .doOnNext(this::initModel)
                .doOnTerminate(() -> {
                    isLoading = false;
                    notifyPropertyChanged(BR.loadingVisible);
                    notifyPropertyChanged(BR.contentVisible);
                });
    }

    public void initFromSavedState(ViewState savedState) {
        initModel(savedState);
    }

    @Bindable
    public String getNumberOfClicks() {
        final int clicks = model.getNumberOfClicks();
        return String.valueOf(clicks);
    }

    @Bindable
    @StringRes
    public int getLabelText() {
        final Excitement stateOfExcitement = model.getStateOfExcitement();
        return resolveLabelText(stateOfExcitement);
    }

    @Bindable
    @ColorRes
    public int getCounterColor() {
        final Excitement stateOfExcitement = model.getStateOfExcitement();
        return resolveCounterColor(stateOfExcitement);
    }

    @Bindable
    public boolean isLoadingVisible() {
        return isLoading;
    }

    @Bindable
    public boolean isContentVisible() {
        return !isLoading;
    }

    private void initModel(final ViewState viewState) {
        model.restoreState(viewState);
        notifyChange();
    }

    @ColorRes
    private int resolveCounterColor(Excitement stateOfExcitement) {
        switch (stateOfExcitement) {
            case MEH:
                return R.color.yellow;
            case WOOHOO:
                return R.color.green;
            default:
                return R.color.red;
        }
    }

    @StringRes
    private int resolveLabelText(Excitement stateOfExcitement) {
        switch (stateOfExcitement) {
            case MEH:
                return R.string.label_indifferent;
            case WOOHOO:
                return R.string.label_excited;
            default:
                return R.string.label_negative;
        }
    }
}

Tying it all together in the Activity!

Here we see the view initializing the viewModel with all dependencies it might need that have to be instantiated from an Android context. After the viewModel is initialized, it is bound to the XML layout via the DataBindingUtil (please check the 'Syntax' section for the naming of generated classes).
Note that subscriptions are subscribed to on this layer because we have to handle unsubscribing them when the activity is paused or destroyed, to avoid memory leaks and NPEs. Also, persisting and reloading of the viewState on orientation changes is triggered here.

MainActivity.java

import android.databinding.DataBindingUtil;
import android.os.Bundle;
import android.support.v7.app.AppCompatActivity;
import de.walled.mvvmtest.R;
import de.walled.mvvmtest.api.ClickerApi;
import de.walled.mvvmtest.api.IClickerApi;
import de.walled.mvvmtest.databinding.ActivityMainBinding;
import de.walled.mvvmtest.model.ClickerModel;
import de.walled.mvvmtest.viewmodel.ClickerViewModel;
import de.walled.mvvmtest.viewmodel.ViewState;
import rx.Subscription;
import rx.subscriptions.Subscriptions;

public class MainActivity extends AppCompatActivity {

    private static final String KEY_VIEW_STATE = "state.view";

    private ClickerViewModel viewModel;
    private Subscription fakeLoader = Subscriptions.unsubscribed();

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // would usually be injected, but I feel Dagger would be out of scope here
        final IClickerApi api = new ClickerApi();
        setupViewModel(savedInstanceState, api);

        ActivityMainBinding binding = DataBindingUtil.setContentView(this, R.layout.activity_main);
        binding.setViewModel(viewModel);
    }

    @Override
    protected void onPause() {
        fakeLoader.unsubscribe();
        super.onPause();
    }

    @Override
    protected void onDestroy() {
        fakeLoader.unsubscribe();
        super.onDestroy();
    }

    @Override
    protected void onSaveInstanceState(Bundle outState) {
        super.onSaveInstanceState(outState);
        outState.putSerializable(KEY_VIEW_STATE, viewModel.getViewState());
    }

    private void setupViewModel(Bundle savedInstance, IClickerApi api) {
        viewModel = new ClickerViewModel(new ClickerModel(), api);
        final ViewState savedState = getViewStateFromBundle(savedInstance);
        if (savedState == null) {
            fakeLoader = viewModel.loadData().subscribe();
        } else {
            viewModel.initFromSavedState(savedState);
        }
    }

    private ViewState getViewStateFromBundle(Bundle savedInstance) {
        if (savedInstance != null) {
            return (ViewState) savedInstance.getSerializable(KEY_VIEW_STATE);
        }
        return null;
    }
}

To see everything in action check out this example project.

Chapter 175: ORMLite in android

Section 175.1: Android OrmLite over SQLite example

ORMLite is an Object Relational Mapping package that provides simple and lightweight functionality for persisting Java objects to SQL databases while avoiding the complexity and overhead of more standard ORM packages. Speaking for Android, OrmLite is implemented over the out-of-the-box supported database, SQLite. It makes direct calls to the API to access SQLite.

Gradle setup

To get started you should include the package in your build.gradle:

// https://mvnrepository.com/artifact/com.j256.ormlite/ormlite-android
compile group: 'com.j256.ormlite', name: 'ormlite-android', version: '5.0'

POJO configuration

Then you should configure a POJO to be persisted to the database. Here care must be taken with the annotations:

Add the @DatabaseTable annotation to the top of each class. You can also use @Entity.
Add the @DatabaseField annotation right before each field to be persisted. You can also use @Column and others.
Add a no-argument constructor to each class with at least package visibility.
@DatabaseTable(tableName = "form_model")
public class FormModel implements Serializable {

    @DatabaseField(generatedId = true)
    private Long id;
    @DatabaseField(dataType = DataType.SERIALIZABLE)
    ArrayList<ReviewItem> reviewItems;
    @DatabaseField(index = true)
    private String username;
    @DatabaseField
    private String createdAt;

    public FormModel() {
    }

    public FormModel(ArrayList<ReviewItem> reviewItems, String username, String createdAt) {
        this.reviewItems = reviewItems;
        this.username = username;
        this.createdAt = createdAt;
    }
}

In the example above there is one table (form_model) with 4 fields. The id field is an auto-generated index, and username is an index into the database. More information about the annotations can be found in the official documentation.

Database Helper

To continue, you will need to create a database helper class which should extend the OrmLiteSqliteOpenHelper class. This class creates and upgrades the database when your application is installed, and can also provide the DAO classes used by your other classes. DAO stands for Data Access Object; it provides all the CRUD functionality and specializes in handling a single persisted class.

The helper class must implement the following two methods:

onCreate(SQLiteDatabase sqliteDatabase, ConnectionSource connectionSource);
onCreate creates the database when your app is first installed.

onUpgrade(SQLiteDatabase database, ConnectionSource connectionSource, int oldVersion, int newVersion);
onUpgrade handles the upgrading of the database tables when you upgrade your app to a new version.

Database Helper class example:

public class OrmLite extends OrmLiteSqliteOpenHelper {

    // Database name
    private static final String DATABASE_NAME = "gaia";
    // Version of the database. Changing the version will call {@link OrmLite#onUpgrade}
    private static final int DATABASE_VERSION = 2;

    /**
     * The data access object used to interact with the SQLite database to do C.R.U.D operations.
     */
    private Dao<FormModel, Long> todoDao;

    public OrmLite(Context context) {
        super(context, DATABASE_NAME, null, DATABASE_VERSION,
                /**
                 * R.raw.ormlite_config is a reference to the ormlite_config2.txt file in the
                 * /res/raw/ directory of this project
                 */
                R.raw.ormlite_config2);
    }

    @Override
    public void onCreate(SQLiteDatabase database, ConnectionSource connectionSource) {
        try {
            /**
             * creates the database table
             */
            TableUtils.createTable(connectionSource, FormModel.class);
        } catch (SQLException e) {
            e.printStackTrace();
        } catch (java.sql.SQLException e) {
            e.printStackTrace();
        }
    }

    /*
     * It is called when you construct a SQLiteOpenHelper with a version newer than
     * the version of the opened database.
     */
    @Override
    public void onUpgrade(SQLiteDatabase database, ConnectionSource connectionSource,
                          int oldVersion, int newVersion) {
        try {
            /**
             * Recreates the database when onUpgrade is called by the framework
             */
            TableUtils.dropTable(connectionSource, FormModel.class, false);
            onCreate(database, connectionSource);
        } catch (SQLException | java.sql.SQLException e) {
            e.printStackTrace();
        }
    }

    /**
     * Returns an instance of the data access object
     * @return
     * @throws SQLException
     */
    public Dao<FormModel, Long> getDao() throws SQLException {
        if (todoDao == null) {
            try {
                todoDao = getDao(FormModel.class);
            } catch (java.sql.SQLException e) {
                e.printStackTrace();
            }
        }
        return todoDao;
    }
}

Persisting Object to SQLite

Finally, the class that persists the object to the database:
public class ReviewPresenter {

    Dao<FormModel, Long> simpleDao;

    public ReviewPresenter(Application application) {
        this.application = (GaiaApplication) application;
        simpleDao = this.application.getHelper().getDao();
    }

    public void storeFormToSqLite(FormModel form) {
        try {
            simpleDao.create(form);
        } catch (SQLException e) {
            e.printStackTrace();
        }
        List<FormModel> list = null;
        try {
            // query for all of the data objects in the database
            list = simpleDao.queryForAll();
        } catch (SQLException e) {
            e.printStackTrace();
        }
        // our string builder for building the content-view
        StringBuilder sb = new StringBuilder();
        int simpleC = 1;
        for (FormModel simple : list) {
            sb.append('#').append(simpleC).append(": ").append(simple.getUsername()).append('\n');
            simpleC++;
        }
        System.out.println(sb.toString());
    }

    // Query the database to get all forms by username
    public List<FormModel> getAllFormsByUsername(String username) {
        List<FormModel> results = null;
        try {
            results = simpleDao.queryBuilder().where().eq("username",
                    PreferencesManager.getInstance().getString(Constants.USERNAME)).query();
        } catch (SQLException e) {
            e.printStackTrace();
        }
        return results;
    }
}

The accessor of the DAO used in the constructor of the above class is defined as:

private OrmLite dbHelper = null;

/* Provides the SQLite Helper Object among the application */
public OrmLite getHelper() {
    if (dbHelper == null) {
        dbHelper = OpenHelperManager.getHelper(this, OrmLite.class);
    }
    return dbHelper;
}

Chapter 176: Retrofit2 with RxJava

Section 176.1: Retrofit2 with RxJava

First, add the relevant dependencies to the build.gradle file:

dependencies {
    ....
    compile 'com.squareup.retrofit2:retrofit:2.3.0'
    compile 'com.squareup.retrofit2:converter-gson:2.3.0'
    compile 'com.squareup.retrofit2:adapter-rxjava:2.3.0'
    ....
}

Then create the model you would like to receive:

public class Server {
    public String name;
    public String url;
    public String apikey;
    public List<Site> siteList;
}

Create an interface containing the methods used to exchange data with the remote server:

public interface ApiServerRequests {
    @GET("api/get-servers")
    public Observable<List<Server>> getServers();
}

Then create a Retrofit instance:

public ApiServerRequests DeviceAPIHelper() {
    Gson gson = new GsonBuilder().create();
    Retrofit retrofit = new Retrofit.Builder()
            .baseUrl("http://example.com/")
            .addConverterFactory(GsonConverterFactory.create(gson))
            .addCallAdapterFactory(RxJavaCallAdapterFactory.create())
            .build();
    ApiServerRequests api = retrofit.create(ApiServerRequests.class);
    return api;
}

Then, anywhere in the code, call the method:

apiRequests.getServers()
        .subscribeOn(Schedulers.io()) // the request runs on an io (background) thread
        .observeOn(AndroidSchedulers.mainThread()) // the result is delivered on the main thread
        .subscribe(new Subscriber<List<Server>>() {
            @Override
            public void onCompleted() {

            }

            @Override
            public void onError(Throwable e) {

            }

            @Override
            public void onNext(List<Server> servers) {
                // a list of servers has been fetched successfully
            }
        });

Section 176.2: Nested requests example: multiple requests, combine results

Suppose we have an API which allows us to get object metadata in a single request (getAllPets), and another request which returns the full data of a single resource (getSinglePet). How can we query all of them in a single chain?
public class PetsFetcher {

    static class PetRepository {
        List<Integer> ids;
    }

    static class Pet {
        int id;
        String name;
        int weight;
        int height;
    }

    interface PetApi {
        @GET("pets")
        Observable<PetRepository> getAllPets();

        @GET("pet/{id}")
        Observable<Pet> getSinglePet(@Path("id") int id);
    }

    PetApi petApi;
    Disposable petsDisposable;

    public void requestAllPets() {
        petApi.getAllPets()
                .doOnSubscribe(new Consumer<Disposable>() {
                    @Override
                    public void accept(Disposable disposable) throws Exception {
                        petsDisposable = disposable;
                    }
                })
                .flatMap(new Function<PetRepository, ObservableSource<Integer>>() {
                    @Override
                    public ObservableSource<Integer> apply(PetRepository petRepository) throws Exception {
                        // flatten the repository into a stream of individual pet ids
                        List<Integer> petIds = petRepository.ids;
                        return Observable.fromIterable(petIds);
                    }
                })
                .flatMap(new Function<Integer, ObservableSource<Pet>>() {
                    @Override
                    public ObservableSource<Pet> apply(Integer id) throws Exception {
                        // each id triggers its own request; flatMap merges the results
                        return petApi.getSinglePet(id);
                    }
                })
                .toList()
                .toObservable()
                .subscribeOn(Schedulers.io())
                .observeOn(AndroidSchedulers.mainThread())
                .subscribe(new Consumer<List<Pet>>() {
                    @Override
                    public void accept(List<Pet> pets) throws Exception {
                        // use your pets here
                    }
                }, new Consumer<Throwable>() {
                    @Override
                    public void accept(Throwable throwable) throws Exception {
                        // show the user that something went wrong
                    }
                });
    }

    void cancelRequests() {
        if (petsDisposable != null) {
            petsDisposable.dispose();
            petsDisposable = null;
        }
    }
}

Section 176.3: Retrofit with RxJava to fetch data asynchronously

From the GitHub repo of RxJava: RxJava is a Java VM implementation of Reactive Extensions, a library for composing asynchronous and event-based programs by using observable sequences. It extends the observer pattern to support sequences of data/events, and adds operators that allow you to compose sequences together declaratively while abstracting away concerns about things like low-level threading, synchronisation, thread-safety and concurrent data structures.

Retrofit is a type-safe HTTP client for Android and Java; using it, developers can make all networking work much easier. As an example, we are going to download some JSON and show it in a RecyclerView as a list.
Getting started:

Add the RxJava, RxAndroid and Retrofit dependencies to your app-level build.gradle file:

compile "io.reactivex:rxjava:1.1.6"
compile "io.reactivex:rxandroid:1.2.1"
compile "com.squareup.retrofit2:adapter-rxjava:2.0.2"
compile "com.google.code.gson:gson:2.6.2"
compile "com.squareup.retrofit2:retrofit:2.0.2"
compile "com.squareup.retrofit2:converter-gson:2.0.2"

Define ApiClient and ApiInterface to exchange data with the server:

public class ApiClient {

    private static Retrofit retrofitInstance = null;
    private static final String BASE_URL = "https://api.github.com/";

    public static Retrofit getInstance() {
        if (retrofitInstance == null) {
            retrofitInstance = new Retrofit.Builder()
                    .baseUrl(BASE_URL)
                    .addCallAdapterFactory(RxJavaCallAdapterFactory.create())
                    .addConverterFactory(GsonConverterFactory.create())
                    .build();
        }
        return retrofitInstance;
    }

    public static <T> T createRetrofitService(final Class<T> clazz, final String endPoint) {
        final Retrofit restAdapter = new Retrofit.Builder()
                .baseUrl(endPoint)
                .build();
        return restAdapter.create(clazz);
    }

    public static String getBaseUrl() {
        return BASE_URL;
    }
}

public interface ApiInterface {
    @GET("repos/{org}/{repo}/issues")
    Observable<List<Issue>> getIssues(@Path("org") String organisation,
                                      @Path("repo") String repositoryName,
                                      @Query("page") int pageNumber);
}

Note that getIssues() returns an Observable and not just a list of issues.

Define the models

An example for this is shown below. You can write these yourself or use free services like JsonSchema2Pojo.

public class Comment {

    @SerializedName("url")
    @Expose
    private String url;
    @SerializedName("html_url")
    @Expose
    private String htmlUrl;

    // Getters and Setters
}

Create a Retrofit instance:

ApiInterface apiService = ApiClient.getInstance().create(ApiInterface.class);

Then use this instance to fetch data from the server:

Observable<List<Issue>> issueObservable = apiService.getIssues(org, repo, pageNumber);
issueObservable.subscribeOn(Schedulers.newThread())
        .observeOn(AndroidSchedulers.mainThread())
        .map(issues -> issues) // get issues and map them to the issues list
        .subscribe(new Subscriber<List<Issue>>() {
            @Override
            public void onCompleted() {
                Log.i(TAG, "onCompleted: COMPLETED!");
            }

            @Override
            public void onError(Throwable e) {
                Log.e(TAG, "onError: ", e);
            }

            @Override
            public void onNext(List<Issue> issues) {
                recyclerView.setAdapter(new IssueAdapter(MainActivity.this, issues, apiService));
            }
        });

Now you have successfully fetched data from a server using Retrofit and RxJava.
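One caveat worth noting (not covered by the example above): the Subscription returned by subscribe() should be retained and unsubscribed when the Activity goes away, so that results are not delivered to a destroyed UI. A minimal sketch using the RxJava 1.x API from this section (issueSubscription is a hypothetical field name):

private Subscription issueSubscription; // field in the Activity

// when starting the request:
issueSubscription = issueObservable
        .subscribeOn(Schedulers.newThread())
        .observeOn(AndroidSchedulers.mainThread())
        .subscribe(subscriber); // the same Subscriber as shown above

@Override
protected void onDestroy() {
    super.onDestroy();
    if (issueSubscription != null && !issueSubscription.isUnsubscribed()) {
        issueSubscription.unsubscribe(); // stop callbacks into the dead Activity
    }
}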
Chapter 177: ShortcutManager

Section 177.1: Dynamic Launcher Shortcuts

ShortcutManager shortcutManager = getSystemService(ShortcutManager.class);

ShortcutInfo shortcut = new ShortcutInfo.Builder(this, "id1")
        .setShortLabel("Web site") // shown on the shortcut itself
        .setLongLabel("Open the web site") // displayed when long-pressing on the app icon
        .setIcon(Icon.createWithResource(context, R.drawable.icon_website))
        .setIntent(new Intent(Intent.ACTION_VIEW, Uri.parse("https://www.mysite.example.com/")))
        .build();

shortcutManager.setDynamicShortcuts(Arrays.asList(shortcut));

We can remove all dynamic shortcuts easily by calling:

shortcutManager.removeAllDynamicShortcuts();

We can update existing dynamic shortcuts by using:

shortcutManager.updateShortcuts(Arrays.asList(shortcut));

Please note that setDynamicShortcuts(List) is used to redefine the entire list of dynamic shortcuts, while addDynamicShortcuts(List) is used to add dynamic shortcuts to the existing list of dynamic shortcuts.

Chapter 178: LruCache

Section 178.1: Adding a Bitmap (resource) to the cache

To add a resource to the cache you must provide a key and the resource. First make sure that the value is not in the cache already:

public void addResourceToMemoryCache(String key, Bitmap resource) {
    if (memoryCache.get(key) == null)
        memoryCache.put(key, resource);
}

Section 178.2: Initialising the cache

The LRU cache will store all the added resources (values) for fast access until it reaches a memory limit, in which case it will drop the least recently used resource (value) to store the new one.

To initialise the LRU cache you need to provide a maximum memory value. This value depends on your application's requirements and on how critical the resource is to keep the app running smoothly. A recommended value for an image gallery, for example, would be 1/8 of your maximum available memory.

Also note that the LRU cache works on a key-value basis. In the following example, the key is a String and the value is a Bitmap:

int maxMemory = (int) (Runtime.getRuntime().maxMemory() / 1024);
int cacheSize = maxMemory / 8;

LruCache<String, Bitmap> memoryCache = new LruCache<String, Bitmap>(cacheSize) {
    protected int sizeOf(String key, Bitmap bitmap) {
        return bitmap.getByteCount() / 1024; // size in kilobytes, matching the units of cacheSize
    }
};

Section 178.3: Getting a Bitmap (resource) from the cache

To get a resource from the cache simply pass the key of your resource (a String in this example):

public Bitmap getResourceFromMemoryCache(String key) {
    return memoryCache.get(key);
}

Chapter 179: Jenkins CI setup for Android Projects

Section 179.1: Step by step approach to set up Jenkins for Android

This is a step by step guide to set up the automated build process using Jenkins CI for your Android projects. The following steps assume that you have new hardware with just any flavor of Linux installed. It is also taken into account that you might have a remote machine.

PART I: Initial setup on your machine

1. Log in via ssh to your Ubuntu machine:

ssh username@xxx.xxx.xxx

2. Download a version of the Android SDK on your machine:

wget https://dl.google.com/android/android-sdk_r24.4.1-linux.tgz

3. Unzip the downloaded tar file:

sudo apt-get install tar
tar -xvf android-sdk_r24.4.1-linux.tgz

4. Now you need to install Java 8 on your Ubuntu machine, which is a requirement for Android builds on Nougat.
Jenkins requires you to install a JDK and JRE; install OpenJDK 8 using the steps below:

sudo apt-get install python-software-properties
sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
sudo apt-get install openjdk-8-jdk

5. Now install Jenkins on your Ubuntu machine:

wget -q -O - https://pkg.jenkins.io/debian/jenkins-ci.org.key | sudo apt-key add -
sudo sh -c 'echo deb http://pkg.jenkins.io/debian-stable binary/ > /etc/apt/sources.list.d/jenkins.list'
sudo apt-get update
sudo apt-get install jenkins

6. Download the latest supported Gradle version for your Android setup:

wget https://services.gradle.org/distributions/gradle-2.14.1-all.zip
unzip gradle-2.14.1-all.zip

7. Set up Android on your Ubuntu machine. First move to the tools folder in the Android SDK folder downloaded in step 2:

cd android-sdk-linux/tools
android list sdk                                // lists the available SDK packages
android update sdk --no-ui                      // updates the SDK version
android list sdk -a | grep "SDK Build-tools"    // lists the available build tools
android update sdk -a -u -t 4                   // updates the build tools version to the one listed as 4 by the previous command

8. Install Git or any other VCS on your machine:

sudo apt-get install git

9. Now log in to Jenkins using your internet browser. Type ipAddress:8080 into the address bar.

10. In order to receive the password for the first-time login, please check the corresponding file as follows (you will need su permissions to access this file):

cat /var/lib/jenkins/secrets/initialAdminPassword

PART II: Set up Jenkins to build Android Jobs

1. Once logged in, go to the following path: Jenkins > Manage Jenkins > Global Tool Configuration

2. At this location, add JAVA_HOME with the following entries:

Name = JAVA_HOME
JAVA_HOME = /usr/lib/jvm/java-8-openjdk-amd64

3. Also add the following values to Git and save the environment variables:

Name = Default
/usr/bin/git

4. Now go to the following path: Jenkins > Manage Jenkins > Configuration

5. At this location, add ANDROID_HOME to the "global properties":

Name = ANDROID_HOME
Value = /home/username/android-sdk-linux

Part III: Create a Jenkins Job for your Android project

1. Click on New Item in the Jenkins home screen.

2. Add a Project Name and Description.

3. In the General tab, select Advanced. Then select Use custom workspace:

Directory /home/user/Code/ProjectFolder

4. In the source code management, select Git. I am using Bitbucket for the purpose of this example:

Repository URL = https://username:password@bitbucket.org/project/projectname.git

5. Select additional behaviors for your repository:

Clean Before Checkout
Checkout to a sub-directory. Local subdirectory for repo /home/user/Code/ProjectFolder

6. Select a branch you want to build:

*/master

7. In the Build tab, select Execute Shell in Add build step.

8. In the Execute shell, add the following command:
PS: Since Jenkins is a dierent user on your Ubuntu machine, you should give it rights to create folders in your workspace by executing the following command: chown -R jenkins .git GoalKicker.com Android Notes for Professionals 911 Chapter 180: fastlane Section 180.1: Fastle lane to build and install all avors for given build type to a device Add this lane to your Fastle and run fastlane installAll type:{BUILD_TYPE} in command line. Replace BUILD_TYPE with the build type you want to build. For example: fastlane installAll type:Debug This command will build all avors of given type and install it to your device. Currently, it doesn't work if you have more than one device attached. Make sure you have only one. In the future I'm planning to add option to select target device. lane :installAll do |options| gradle(task: "clean") gradle(task: "assemble", build_type: options[:type]) lane_context[SharedValues::GRADLE_ALL_APK_OUTPUT_PATHS].each do | apk | puts "Uploading APK to Device: " + apk begin adb( command: "install -r #{apk}" ) rescue => ex puts ex end end end Section 180.2: Fastle to build and upload multiple avors to Beta by Crashlytics This is a sample Fastle setup for a multi-avor app. It gives you an option to build and deploy all avors or a single avor. After the deployment, it reports to Slack the status of the deployment, and sends a notication to testers in Beta by Crashlytics testers group. To build and deploy all avors use: fastlane android beta To build a single APK and deploy use: fastlane android beta app:flavorName Using a single Fastlane le, you can manage iOS, Android, and Mac apps. If you are using this le just for one app platform is not required. How It Works GoalKicker.com Android Notes for Professionals 912 1. android argument tells fastlane that we will use :android platform. 2. Inside :android platform you can have multiple lanes. Currently, I have only :beta lane. The second argument from the command above species the lane we want to use. 3. options[:app] 4. There are two Gradle tasks. First, it runs gradle clean. If you provided a avor with app key, fastle runs gradle assembleReleaseFlavor. Otherwise, it runs gradle assembleRelease to build all build avors. 5. If we are building for all avors, an array of generated APK le names is stored inside SharedValues::GRADLE_ALL_APK_OUTPUT_PATHS. We use this to loop through generated les and deploy them to Beta by Crashlytics. notifications and groups elds are optional. They are used to notify testers registered for the app on Beta by Crashlytics. 6. If you are familiar with Crashlytics, you might know that to activate an app in the portal, you have to run it on a device and use it rst. Otherwise, Crashlytics will assume the app inactive and throw an error. In this scenario, I capture it and report to Slack as a failure, so you will know which app is inactive. 7. If deployment is successful, fastlane will send a success message to Slack. 8. #{/([^\/]*)$/.match(apk)} this regex is used to get avor name from APK path. You can remove it if it does not work for you. 9. get_version_name and get_version_code are two Fastlane plugins to retrieve app version name and code. You have to install these gems if you want to use, or you can remove them. Read more about Plugins here. 10. The else statement will be executed if you are building and deploying a single APK. We don't have to provide apk_path to Crashlytics since we have only one app. 11. error do block at the end is used to get notied if anything else goes wrong during execution. 
Note: Don't forget to replace SLACK_URL, API_TOKEN, GROUP_NAME and BUILD_SECRET with your own credentials.

fastlane_version "1.46.1"

default_platform :android

platform :android do
  before_all do
    ENV["SLACK_URL"] = "https://hooks.slack.com/servic...."
  end

  lane :beta do |options|
    # Clean and build the Release version of the app.
    # Usage `fastlane android beta app:flavorName`
    gradle(task: "clean")
    gradle(task: "assemble", build_type: "Release", flavor: options[:app])

    # If the user calls `fastlane android beta`, build all projects and push them to Crashlytics
    if options[:app].nil?
      lane_context[SharedValues::GRADLE_ALL_APK_OUTPUT_PATHS].each do | apk |
        puts "Uploading APK to Crashlytics: " + apk
        begin
          crashlytics(
            api_token: "[API_TOKEN]",
            build_secret: "[BUILD_SECRET]",
            groups: "[GROUP_NAME]",
            apk_path: apk,
            notifications: "true"
          )
          slack(
            message: "Successfully deployed new build for #{/([^\/]*)$/.match(apk)} #{get_version_name} - #{get_version_code}",
            success: true,
            default_payloads: [:git_branch, :lane, :test_result]
          )
        rescue => ex
          # If the app is inactive in Crashlytics, deployment will fail. Handle it here and report to Slack
          slack(
            message: "Error uploading => #{/([^\/]*)$/.match(apk)} #{get_version_name} - #{get_version_code}: #{ex}",
            success: false,
            default_payloads: [:git_branch, :lane, :test_result]
          )
        end
      end

      after_all do |lane|
        # This block is called only if the executed lane was successful
        slack(
          message: "Operation completed for #{lane_context[SharedValues::GRADLE_ALL_APK_OUTPUT_PATHS].size} app(s) for #{get_version_name} #{get_version_code}",
          default_payloads: [:git_branch, :lane, :test_result],
          success: true
        )
      end
    else
      # Single APK upload to Beta by Crashlytics
      crashlytics(
        api_token: "[API_TOKEN]",
        build_secret: "[BUILD_SECRET]",
        groups: "[GROUP_NAME]",
        notifications: "true"
      )

      after_all do |lane|
        # This block is called only if the executed lane was successful
        slack(
          message: "Successfully deployed new build for #{options[:app]} #{get_version_name} - #{get_version_code}",
          default_payloads: [:git_branch, :lane, :test_result],
          success: true
        )
      end
    end

    error do |lane, exception|
      slack(
        message: exception.message,
        success: false,
        default_payloads: [:git_branch, :lane, :test_result]
      )
    end
  end
end

Chapter 181: Define step value (increment) for custom RangeSeekBar

A customization of the Android RangeSeekBar proposed by <NAME> at https://github.com/anothem/android-range-seek-bar

It allows you to define a step value (increment) when moving the seek bar.

Section 181.1: Define a step value of 7

<RangeSeekBar
    android:id="@+id/barPrice"
    android:layout_width="fill_parent"
    android:layout_height="wrap_content"
    app:barHeight="0.2dp"
    app:barHeight2="4dp"
    app:increment="7"
    app:showLabels="false" />

Chapter 182: Getting started with OpenGL ES 2.0+

This topic is about setting up and using OpenGL ES 2.0+ on Android. OpenGL ES is the standard for 2D and 3D accelerated graphics on embedded systems - including consoles, smartphones, appliances and vehicles.
Section 182.1: Setting up GLSurfaceView and OpenGL ES 2.0+

To use OpenGL ES in your application you must add this to the manifest:

<uses-feature android:glEsVersion="0x00020000" android:required="true"/>

Create your extended GLSurfaceView:

import static android.opengl.GLES20.*; // To use all OpenGL ES 2.0 methods and constants statically

public class MyGLSurfaceView extends GLSurfaceView {

    public MyGLSurfaceView(Context context, AttributeSet attrs) {
        super(context, attrs);
        setEGLContextClientVersion(2); // OpenGL ES version 2.0
        setRenderer(new MyRenderer());
        setRenderMode(GLSurfaceView.RENDERMODE_CONTINUOUSLY);
    }

    public final class MyRenderer implements GLSurfaceView.Renderer {
        public final void onSurfaceCreated(GL10 unused, EGLConfig config) {
            // Your OpenGL ES init methods
            glClearColor(1f, 0f, 0f, 1f);
        }

        public final void onSurfaceChanged(GL10 unused, int width, int height) {
            glViewport(0, 0, width, height);
        }

        public final void onDrawFrame(GL10 unused) {
            // Your OpenGL ES draw methods
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        }
    }
}

Add MyGLSurfaceView to your layout:

<com.example.app.MyGLSurfaceView
    android:id="@+id/gles_renderer"
    android:layout_width="match_parent"
    android:layout_height="match_parent"/>

To use a newer version of OpenGL ES, just change the version number in your manifest, in the static import, and in the setEGLContextClientVersion call.

Section 182.2: Compiling and Linking GLSL-ES Shaders from an asset file

The assets folder is the most common place to store your GLSL-ES shader files. To use them in your OpenGL ES application, you need to load them into a string first. This function creates a string from the asset file:

private String loadStringFromAssetFile(Context myContext, String filePath) {
    StringBuilder shaderSource = new StringBuilder();
    try {
        BufferedReader reader = new BufferedReader(new InputStreamReader(myContext.getAssets().open(filePath)));
        String line;
        while ((line = reader.readLine()) != null) {
            shaderSource.append(line).append("\n");
        }
        reader.close();
        return shaderSource.toString();
    } catch (IOException e) {
        e.printStackTrace();
        Log.e(TAG, "Could not load shader file");
        return null;
    }
}

Now you need to create a function that compiles a shader stored in a string:

private int compileShader(int shader_type, String shaderString) {
    // This compiles the shader from the string
    int shader = glCreateShader(shader_type);
    glShaderSource(shader, shaderString);
    glCompileShader(shader);

    // This checks for compilation errors
    int[] compiled = new int[1];
    glGetShaderiv(shader, GL_COMPILE_STATUS, compiled, 0);
    if (compiled[0] == 0) {
        String log = glGetShaderInfoLog(shader);
        Log.e(TAG, "Shader compilation error: ");
        Log.e(TAG, log);
    }
    return shader;
}

Now you can load, compile and link your shaders:

// Load shaders from file
String vertexShaderString = loadStringFromAssetFile(context, "your_vertex_shader.glsl");
String fragmentShaderString = loadStringFromAssetFile(context, "your_fragment_shader.glsl");

// Compile shaders
int vertexShader = compileShader(GL_VERTEX_SHADER, vertexShaderString);
int fragmentShader = compileShader(GL_FRAGMENT_SHADER, fragmentShaderString);

// Link shaders and create shader program
int shaderProgram = glCreateProgram();
glAttachShader(shaderProgram, vertexShader);
glAttachShader(shaderProgram, fragmentShader);
glLinkProgram(shaderProgram);

// Check for linking errors:
int linkStatus[] = new int[1];
glGetProgramiv(shaderProgram, GL_LINK_STATUS,
linkStatus, 0);
if (linkStatus[0] != GL_TRUE) {
    String log = glGetProgramInfoLog(shaderProgram);
    Log.e(TAG, "Could not link shader program: ");
    Log.e(TAG, log);
}

If there are no errors, your shader program is ready to use:

glUseProgram(shaderProgram);

Chapter 183: Check Data Connection

Section 183.1: Check data connection

This method checks the data connection by pinging a certain IP or domain name.

public Boolean isDataConnected() {
    try {
        Process p1 = java.lang.Runtime.getRuntime().exec("ping -c 1 8.8.8.8");
        int returnVal = p1.waitFor();
        boolean reachable = (returnVal == 0);
        return reachable;
    } catch (Exception e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
    return false;
}

Section 183.2: Check connection using ConnectivityManager

public static boolean isConnectedNetwork(Context context) {
    ConnectivityManager cm = (ConnectivityManager) context.getSystemService(Context.CONNECTIVITY_SERVICE);
    return cm.getActiveNetworkInfo() != null && cm.getActiveNetworkInfo().isConnectedOrConnecting();
}

Section 183.3: Use network intents to perform tasks while data is allowed

When your device connects to a network, an intent is sent. Many apps don't check for these intents, but to make your application work properly, you can listen to network change intents that will tell you when communication is possible. To check for network connectivity you can, for example, use the following clause:

if (intent.getAction().equals(android.net.ConnectivityManager.CONNECTIVITY_ACTION)) {
    NetworkInfo info = intent.getParcelableExtra(ConnectivityManager.EXTRA_NETWORK_INFO);
    // perform your action when connected to a network
}

Chapter 184: Java on Android

Android supports all Java 7 language features and a subset of Java 8 language features that vary by platform version. This page describes the new language features you can use, how to properly configure your project to use them, and any known issues you may encounter.

Section 184.1: Java 8 features subset with Retrolambda

Retrolambda lets you run Java 8 code with lambda expressions, method references and try-with-resources statements on Java 7, 6 or 5. It does this by transforming your Java 8 compiled bytecode so that it can run on an older Java runtime.

Backported Language Features:

Lambda expressions are backported by converting them to anonymous inner classes. This includes the optimisation of using a singleton instance for stateless lambda expressions to avoid repeated object allocation.
Method references are basically just syntax sugar for lambda expressions and they are backported in the same way.
Try-with-resources statements are backported by removing calls to Throwable.addSuppressed if the target bytecode version is below Java 7. If you would like the suppressed exceptions to be logged instead of swallowed, please create a feature request and we'll make it configurable.
Objects.requireNonNull calls are replaced with calls to Object.getClass if the target bytecode version is below Java 7. The synthetic null checks generated by JDK 9 use Objects.requireNonNull, whereas earlier JDK versions used Object.getClass.

Optionally also:

1. Default methods are backported by copying the default methods to a companion class (interface name + "$") as static methods, replacing the default methods in the interface with abstract methods, and by adding the necessary method implementations to all classes which implement that interface.
2.
Static methods on interfaces are backported by moving the static methods to a companion class (interface name + "$"), and by changing all method calls to call the new method location.

Known Limitations:

Does not backport Java 8 APIs.
Backporting default methods and static methods on interfaces requires all backported interfaces and all classes which implement them or call their static methods to be backported together, with one execution of Retrolambda. In other words, you must always do a clean build. Also, backporting default methods won't work across module or dependency boundaries.
May break if a future JDK 8 build stops generating a new class for each invokedynamic call. Retrolambda works by capturing the bytecode that java.lang.invoke.LambdaMetafactory generates dynamically, so optimisations to that mechanism may break Retrolambda.

The Retrolambda Gradle plugin will automatically build your Android project with Retrolambda. The latest version can be found on the releases page.

Usage:

1. Download and install JDK 8.

2. Add the following to your build.gradle:

buildscript {
    repositories {
        mavenCentral()
    }

    dependencies {
        classpath 'me.tatarka:gradle-retrolambda:<latest version>'
    }
}

// Required because retrolambda is on maven central
repositories {
    mavenCentral()
}

apply plugin: 'com.android.application' // or apply plugin: 'java'
apply plugin: 'me.tatarka.retrolambda'

android {
    compileOptions {
        sourceCompatibility JavaVersion.VERSION_1_8
        targetCompatibility JavaVersion.VERSION_1_8
    }
}

Known Issues:

Lint fails on Java files that have lambdas. Android's lint doesn't understand Java 8 syntax and will fail silently or loudly. There is now an experimental fork that fixes the issue.
Using Google Play Services causes Retrolambda to fail. Version 5.0.77 contains bytecode that is incompatible with Retrolambda. This should be fixed in newer versions of Play Services; if you can update, that should be the preferred solution. To work around this issue, you can either use an earlier version like 4.4.52 or add noverify to the JVM args:

retrolambda {
    jvmArgs '-noverify'
}

Chapter 185: Android Java Native Interface (JNI)

JNI (Java Native Interface) is a powerful tool that enables Android developers to utilize the NDK and use C++ native code in their applications. This topic describes the usage of the Java <-> C++ interface.

Section 185.1: How to call functions in a native library via the JNI interface

The Java Native Interface (JNI) allows you to call native functions from Java code, and vice versa. This example shows how to load and call a native function via JNI; it does not go into accessing Java methods and fields from native code using JNI functions.

Suppose you have a native library named libjniexample.so in the project/libs/<architecture> folder, and you want to call a function from the JNITest Java class inside the com.example.jniexample package.

In the JNITest class, declare the function like this:

public native int testJNIfunction(int a, int b);

In your native code, define the function like this:

#include <jni.h>

JNIEXPORT jint JNICALL Java_com_example_jniexample_JNITest_testJNIfunction(JNIEnv *pEnv, jobject thiz, jint a, jint b)
{
    return a + b;
}

The pEnv argument is a pointer to the JNI environment that you can pass to JNI functions to access methods and fields of Java objects and classes.
The thiz pointer is a jobject reference to the Java object that the native method was called on (or the class if it is a static method).

In your Java code, in JNITest, load the library like this:

static {
    System.loadLibrary("jniexample");
}

Note that the lib at the start and the .so at the end of the filename are omitted.

Call the native function from Java like this:

JNITest test = new JNITest();
int c = test.testJNIfunction(3, 4);

Section 185.2: How to call a Java method from native code

The Java Native Interface (JNI) allows you to call Java functions from native code. Here is a simple example of how to do it:

Java code:

package com.example.jniexample;

public class JNITest {
    public static int getAnswer(boolean b) {
        return 42;
    }
}

Native code:

int getTheAnswer()
{
    // Get the JNI environment (JniGetEnv() is assumed to be your own helper returning a valid JNIEnv*)
    JNIEnv *env = JniGetEnv();
    // Find the Java class - provide the package ('.' replaced by '/') and the class name
    jclass jniTestClass = env->FindClass("com/example/jniexample/JNITest");
    // Find the Java method - provide the parameters inside () and the return value (see the table below for an explanation of how to encode them)
    jmethodID getAnswerMethod = env->GetStaticMethodID(jniTestClass, "getAnswer", "(Z)I");
    // Call the method
    return (int)env->CallStaticIntMethod(jniTestClass, getAnswerMethod, (jboolean)true);
}

JNI method signature to Java type:

JNI Signature                Java Type
Z                            boolean
B                            byte
C                            char
S                            short
I                            int
J                            long
F                            float
D                            double
L fully-qualified-class ;    fully-qualified-class
[ type                       type[]

So for our example we used (Z)I, which means the function takes a boolean and returns an int.

Section 185.3: Utility method in JNI layer

This method will help to get a Java string from a C++ string:

jstring getJavaStringFromCPPString(JNIEnv *global_env, const char* cstring) {
    jstring nullString = global_env->NewStringUTF(NULL);

    if (!cstring) {
        return nullString;
    }

    jclass strClass = global_env->FindClass("java/lang/String");
    jmethodID ctorID = global_env->GetMethodID(strClass, "<init>", "([BLjava/lang/String;)V");
    jstring encoding = global_env->NewStringUTF("UTF-8");

    jbyteArray bytes = global_env->NewByteArray(strlen(cstring));
    global_env->SetByteArrayRegion(bytes, 0, strlen(cstring), (jbyte*) cstring);
    jstring str = (jstring) global_env->NewObject(strClass, ctorID, bytes, encoding);

    global_env->DeleteLocalRef(strClass);
    global_env->DeleteLocalRef(encoding);
    global_env->DeleteLocalRef(bytes);

    return str;
}

This method will help you to convert a jbyteArray to a char*:

char* as_unsigned_char_array(JNIEnv *env, jbyteArray array) {
    jsize length = env->GetArrayLength(array);
    jbyte* buffer = new jbyte[length + 1];

    env->GetByteArrayRegion(array, 0, length, buffer);
    buffer[length] = '\0';

    // Note: the caller is responsible for releasing the returned buffer with delete[]
    return (char*) buffer;
}

Chapter 186: Notification Channel Android O

Importance level        Description
IMPORTANCE_MAX          unused
IMPORTANCE_HIGH         shows everywhere, makes noise and peeks
IMPORTANCE_DEFAULT      shows everywhere, makes noise, but does not visually intrude
IMPORTANCE_LOW          shows everywhere, but is not intrusive
IMPORTANCE_MIN          only shows in the shade, below the fold
IMPORTANCE_NONE         a notification with no importance; does not show in the shade

Notification channels enable us app developers to group our notifications into groups (channels), with the user having the ability to modify notification settings for the entire channel at once. This feature was introduced in Android O; at the time of writing it is only available in the developer preview.
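Because channels only exist from API level 26 (Android O) onwards, channel-related calls are typically wrapped in a runtime version check. A minimal sketch (NotificationUtils is the helper class defined in the next section):

// Only touch the channel APIs on Android O and newer
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
    NotificationUtils utils = new NotificationUtils(context); // creates the channels
}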
Section 186.1: Notification Channel

What Are Notification Channels?

Notification channels enable us app developers to group our notifications into groups (channels), with the user having the ability to modify notification settings for the entire channel at once. For example, for each channel, users can completely block all notifications, override the importance level, or allow a notification badge to be shown. This new feature helps in greatly improving the user experience of an app.

Create Notification Channels

import android.app.Notification;
import android.app.NotificationChannel;
import android.app.NotificationManager;
import android.content.Context;
import android.content.ContextWrapper;
import android.graphics.Color;

public class NotificationUtils extends ContextWrapper {

    private NotificationManager mManager;
    public static final String ANDROID_CHANNEL_ID = "com.sai.ANDROID";
    public static final String IOS_CHANNEL_ID = "com.sai.IOS";
    public static final String ANDROID_CHANNEL_NAME = "ANDROID CHANNEL";
    public static final String IOS_CHANNEL_NAME = "IOS CHANNEL";

    public NotificationUtils(Context base) {
        super(base);
        createChannels();
    }

    public void createChannels() {
        // create android channel
        NotificationChannel androidChannel = new NotificationChannel(ANDROID_CHANNEL_ID,
                ANDROID_CHANNEL_NAME, NotificationManager.IMPORTANCE_DEFAULT);
        // Sets whether notifications posted to this channel should display notification lights
        androidChannel.enableLights(true);
        // Sets whether notifications posted to this channel should vibrate
        androidChannel.enableVibration(true);
        // Sets the notification light color for notifications posted to this channel
        androidChannel.setLightColor(Color.BLUE);
        // Sets whether notifications posted to this channel appear on the lockscreen or not
        androidChannel.setLockscreenVisibility(Notification.VISIBILITY_PRIVATE);

        getManager().createNotificationChannel(androidChannel);

        // create ios channel
        NotificationChannel iosChannel = new NotificationChannel(IOS_CHANNEL_ID,
                IOS_CHANNEL_NAME, NotificationManager.IMPORTANCE_HIGH);
        iosChannel.enableLights(true);
        iosChannel.enableVibration(true);
        iosChannel.setLightColor(Color.GRAY);
        iosChannel.setLockscreenVisibility(Notification.VISIBILITY_PUBLIC);
        getManager().createNotificationChannel(iosChannel);
    }

    // public so that other classes (e.g. the MainActivity below) can post notifications through it
    public NotificationManager getManager() {
        if (mManager == null) {
            mManager = (NotificationManager) getSystemService(Context.NOTIFICATION_SERVICE);
        }
        return mManager;
    }
}

In the code above, we created two instances of NotificationChannel, passing a unique ID, a channel name, and an importance level to the constructor. For each notification channel, we applied the following characteristics:

1. Sound
2. Lights
3. Vibration
4. Notification to show on the lock screen

Finally, we got the NotificationManager from the system and then registered the channel by calling the method createNotificationChannel(), passing the channel we created.

We can create multiple notification channels all at once with createNotificationChannels(), passing a Java list of NotificationChannel instances. You can get all notification channels for an app with getNotificationChannels() and get a specific channel with getNotificationChannel(), passing only the channel ID as an argument.
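For instance, createChannels() could register both channels in a single call instead. A minimal sketch, reusing the IDs and names defined above (this code would live inside NotificationUtils):

// Requires java.util.List and java.util.ArrayList
List<NotificationChannel> channels = new ArrayList<>();
channels.add(new NotificationChannel(ANDROID_CHANNEL_ID, ANDROID_CHANNEL_NAME,
        NotificationManager.IMPORTANCE_DEFAULT));
channels.add(new NotificationChannel(IOS_CHANNEL_ID, IOS_CHANNEL_NAME,
        NotificationManager.IMPORTANCE_HIGH));
// Registers (or updates) all channels in one call
getManager().createNotificationChannels(channels);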
Importance Level in Notification Channels

Importance level        Description
IMPORTANCE_MAX          unused
IMPORTANCE_HIGH         shows everywhere, makes noise and peeks
IMPORTANCE_DEFAULT      shows everywhere, makes noise, but does not visually intrude
IMPORTANCE_LOW          shows everywhere, but is not intrusive
IMPORTANCE_MIN          only shows in the shade, below the fold
IMPORTANCE_NONE         a notification with no importance; does not show in the shade

Create Notification and Post to Channel

We have created two notifications, one using NotificationUtils and another using Notification.Builder:

public Notification.Builder getAndroidChannelNotification(String title, String body) {
    return new Notification.Builder(getApplicationContext(), ANDROID_CHANNEL_ID)
            .setContentTitle(title)
            .setContentText(body)
            .setSmallIcon(android.R.drawable.stat_notify_more)
            .setAutoCancel(true);
}

public Notification.Builder getIosChannelNotification(String title, String body) {
    return new Notification.Builder(getApplicationContext(), IOS_CHANNEL_ID)
            .setContentTitle(title)
            .setContentText(body)
            .setSmallIcon(android.R.drawable.stat_notify_more)
            .setAutoCancel(true);
}

We can also set the notification channel on a Notification.Builder; for that we can use setChannelId(String channelId).

Update Notification Channel Settings

Once you create a notification channel, the user is in charge of its settings and behavior. You can call createNotificationChannel() again to rename an existing notification channel or update its description. The following sample code describes how you can redirect a user to the settings for a notification channel by creating an intent to start an activity. In this case, the intent requires extended data including the ID of the notification channel and the package name of your app.

@Override
protected void onCreate(Bundle savedInstanceState) {
    //...
    Button buttonAndroidNotifSettings = (Button) findViewById(R.id.btn_android_notif_settings);
    buttonAndroidNotifSettings.setOnClickListener(new View.OnClickListener() {
        @Override
        public void onClick(View view) {
            Intent i = new Intent(Settings.ACTION_CHANNEL_NOTIFICATION_SETTINGS);
            i.putExtra(Settings.EXTRA_APP_PACKAGE, getPackageName());
            i.putExtra(Settings.EXTRA_CHANNEL_ID, NotificationUtils.ANDROID_CHANNEL_ID);
            startActivity(i);
        }
    });
}

XML file:

<!--...-->
<Button
    android:id="@+id/btn_android_notif_settings"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:text="Notification Settings"/>
<!--...-->

Deleting a Notification Channel

You can delete notification channels by calling deleteNotificationChannel():

NotificationManager mNotificationManager =
        (NotificationManager) getSystemService(Context.NOTIFICATION_SERVICE);
// The id of the channel.
String id = "my_channel_01";
NotificationChannel mChannel = mNotificationManager.getNotificationChannel(id);
mNotificationManager.deleteNotificationChannel(mChannel.getId());

Now create the MainActivity and its XML layout.

activity_main.xml:

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:id="@+id/activity_main"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical"
    android:layout_margin="16dp"
    tools:context="com.chikeandroid.tutsplusalerts.MainActivity">

    <LinearLayout
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:orientation="vertical">

        <TextView
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:text="Tuts+ Android Channel"
            android:layout_gravity="center_horizontal"
            android:textAppearance="@style/TextAppearance.AppCompat.Title"/>

        <EditText
            android:id="@+id/et_android_title"
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:hint="Title"/>

        <EditText
            android:id="@+id/et_android_author"
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:hint="Author"/>

        <Button
            android:id="@+id/btn_send_android"
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:text="Send"/>
    </LinearLayout>

    <LinearLayout
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:orientation="vertical"
        android:layout_marginTop="20dp">

        <TextView
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:text="Tuts+ IOS Channel"
            android:layout_gravity="center_horizontal"
            android:textAppearance="@style/TextAppearance.AppCompat.Title"/>

        <EditText
            android:id="@+id/et_ios_title"
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:hint="Title" />

        <EditText
            android:id="@+id/et_ios_author"
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:hint="Author"/>

        <Button
            android:id="@+id/btn_send_ios"
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:text="Send"/>
    </LinearLayout>
</LinearLayout>

MainActivity.java:

We are going to edit our MainActivity so that we can get the title and author from the EditText components and then send these to the Android channel. We get the Notification.Builder for the Android channel we created in our NotificationUtils, and then notify the NotificationManager.
import android.app.Notification;
import android.os.Bundle;
import android.support.v7.app.AppCompatActivity;
import android.text.TextUtils;
import android.view.View;
import android.widget.Button;
import android.widget.EditText;

public class MainActivity extends AppCompatActivity {

    private NotificationUtils mNotificationUtils;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        mNotificationUtils = new NotificationUtils(this);

        final EditText editTextTitleAndroid = (EditText) findViewById(R.id.et_android_title);
        final EditText editTextAuthorAndroid = (EditText) findViewById(R.id.et_android_author);
        Button buttonAndroid = (Button) findViewById(R.id.btn_send_android);

        buttonAndroid.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View view) {
                String title = editTextTitleAndroid.getText().toString();
                String author = editTextAuthorAndroid.getText().toString();

                if (!TextUtils.isEmpty(title) && !TextUtils.isEmpty(author)) {
                    Notification.Builder nb = mNotificationUtils.
                            getAndroidChannelNotification(title, "By " + author);
                    mNotificationUtils.getManager().notify(107, nb.build());
                }
            }
        });
    }
}

Chapter 187: Robolectric

Unit testing is taking a piece of code and testing it independently, without any other dependencies or parts of the system running (for example, the database). Robolectric is a unit test framework that de-fangs the Android SDK jar so you can test-drive the development of your Android app. Tests run inside the JVM on your workstation in seconds. Combining the two allows you to run fast tests on the JVM while still using the Android APIs.

Section 187.1: Robolectric test

@RunWith(RobolectricTestRunner.class)
public class MyActivityTest {

    @Test
    public void clickingButton_shouldChangeResultsViewText() throws Exception {
        MyActivity activity = Robolectric.setupActivity(MyActivity.class);

        Button button = (Button) activity.findViewById(R.id.button);
        TextView results = (TextView) activity.findViewById(R.id.results);

        button.performClick();
        assertThat(results.getText().toString()).isEqualTo("Robolectric Rocks!");
    }
}

Section 187.2: Configuration

To configure Robolectric, add the @Config annotation to the test class or method.

Run with a custom Application class

@RunWith(RobolectricTestRunner.class)
@Config(application = MyApplication.class)
public final class MyTest {
}

Set the target SDK

@RunWith(RobolectricTestRunner.class)
@Config(sdk = Build.VERSION_CODES.LOLLIPOP)
public final class MyTest {
}

Run with a custom manifest

When specified, Robolectric will look relative to the current directory. The default value is AndroidManifest.xml. Resources and assets will be loaded relative to the manifest.

@RunWith(RobolectricTestRunner.class)
@Config(manifest = "path/AndroidManifest.xml")
public final class MyTest {
}

Use qualifiers

Possible qualifiers can be found in the Android docs.

@RunWith(RobolectricTestRunner.class)
public final class MyTest {

    @Test
    @Config(qualifiers = "sw600dp")
    public void testForTablet() {
    }
}

Chapter 188: Moshi

Moshi is a modern JSON library for Android and Java. It makes it easy to parse JSON into Java objects and Java back into JSON.
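To use Moshi, first add it to your module's build.gradle. This is only a sketch; the version shown is illustrative, so check the project page for the latest release:

dependencies {
    compile 'com.squareup.moshi:moshi:1.5.0'
}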
Section 188.1: JSON into Java

String json = ...;

Moshi moshi = new Moshi.Builder().build();
JsonAdapter<BlackjackHand> jsonAdapter = moshi.adapter(BlackjackHand.class);

BlackjackHand blackjackHand = jsonAdapter.fromJson(json);
System.out.println(blackjackHand);

Section 188.2: Serialize Java objects as JSON

BlackjackHand blackjackHand = new BlackjackHand(
        new Card('6', SPADES),
        Arrays.asList(new Card('4', CLUBS), new Card('A', HEARTS)));

Moshi moshi = new Moshi.Builder().build();
JsonAdapter<BlackjackHand> jsonAdapter = moshi.adapter(BlackjackHand.class);

String json = jsonAdapter.toJson(blackjackHand);
System.out.println(json);

Section 188.3: Built-in Type Adapters

Moshi has built-in support for reading and writing Java's core data types:

Primitives (int, float, char, ...) and their boxed counterparts (Integer, Float, Character, ...)
Arrays
Collections
Lists
Sets
Maps
Strings
Enums

It supports your model classes by writing them out field by field. In the example above Moshi uses these classes:

class BlackjackHand {
    public final Card hidden_card;
    public final List<Card> visible_cards;
    ...
}

class Card {
    public final char rank;
    public final Suit suit;
    ...
}

enum Suit {
    CLUBS, DIAMONDS, HEARTS, SPADES;
}

to read and write this JSON:

{
    "hidden_card": {
        "rank": "6",
        "suit": "SPADES"
    },
    "visible_cards": [
        {
            "rank": "4",
            "suit": "CLUBS"
        },
        {
            "rank": "A",
            "suit": "HEARTS"
        }
    ]
}

Chapter 189: Strict Mode Policy: a tool to catch bugs during development

StrictMode is a special class introduced in Android 2.3 for debugging. This developer tool detects things done accidentally and brings them to our attention so that we can fix them. It is most commonly used to catch accidental disk or network access on the application's main thread, where UI operations are received and animations take place. StrictMode is basically a developer tool that catches these bugs at runtime, while you develop, before they reach production.

Section 189.1: The below code snippet sets up StrictMode thread policies

This code should be placed at the entry points to our application:

StrictMode.setThreadPolicy(new StrictMode.ThreadPolicy.Builder()
        .detectDiskWrites()
        .penaltyLog() // Logs a message to LogCat
        .build());

Section 189.2: The below code deals with memory leaks, e.g. it detects when Closeable objects such as SQLite cursors are finalized without having been closed

StrictMode.setVmPolicy(new StrictMode.VmPolicy.Builder()
        .detectActivityLeaks()
        .detectLeakedClosableObjects()
        .penaltyLog()
        .build());

Chapter 190: Internationalization and localization (I18N and L10N)

Internationalization (i18n) and localization (L10n) are used to adapt software according to differences in languages, regional differences, and the target audience.

Internationalization: the process of planning for future localization, i.e. making the software design flexible enough to adjust and adapt to future localization efforts.

Localization: the process of adapting the software to a particular region/country/market (locale).

Section 190.1: Planning for localization: enable RTL support in the Manifest

RTL (right-to-left) support is an essential part of planning for i18n and L10n. Unlike English, which is written from left to right, languages like Arabic, Hebrew, Persian, and Urdu are written from right to left.
To appeal to a more global audience, it is a good idea to plan your layouts to support these languages from the very beginning of the project, so that adding localization is easier later on.

RTL support can be enabled in an Android app by adding the supportsRtl tag in the AndroidManifest, like so:

<application
    ...
    android:supportsRtl="true"
    ...>
...
</application>

Section 190.2: Planning for localization: add RTL support in layouts

Starting with SDK 17 (Android 4.2), RTL support was added to Android layouts and is an essential part of localization. Going forward, the left/right notation in layouts should be replaced by start/end notation. If, however, your project has a minSdk value less than 17, then both left/right and start/end notation should be used in layouts.

For relative layouts, alignParentStart and alignParentEnd should be used, like so:

<RelativeLayout
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <TextView
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_alignParentTop="true"
        android:layout_alignParentLeft="true"
        android:layout_alignParentStart="true"/>

    <TextView
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_alignParentTop="true"
        android:layout_alignParentRight="true"
        android:layout_alignParentEnd="true"/>
</RelativeLayout>

For specifying gravity and layout gravity, similar notation should be used, like so:

<TextView
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:layout_gravity="left|start"
    android:gravity="left|start"/>

<TextView
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:layout_gravity="right|end"
    android:gravity="right|end"/>

Paddings and margins should also be specified accordingly, like so:

<include layout="@layout/notification"
    android:layout_width="fill_parent"
    android:layout_height="wrap_content"
    android:layout_marginLeft="12dp"
    android:layout_marginStart="12dp"
    android:paddingLeft="128dp"
    android:paddingStart="128dp"
    android:layout_toLeftOf="@id/cancel_action"
    android:layout_toStartOf="@id/cancel_action"/>

<include layout="@layout/notification2"
    android:layout_width="fill_parent"
    android:layout_height="wrap_content"
    android:layout_marginRight="12dp"
    android:layout_marginEnd="12dp"
    android:paddingRight="128dp"
    android:paddingEnd="128dp"
    android:layout_toRightOf="@id/cancel_action"
    android:layout_toEndOf="@id/cancel_action"/>

Section 190.3: Planning for localization: test layouts for RTL

To test whether the layouts that have been created are RTL compatible, do the following:

Go to Settings -> Developer options -> Drawing -> Force RTL layout direction

Enabling this option will force the device to use RTL locales, and you can easily verify all parts of the app for RTL support. Note that you don't need to actually add any new locales/language support up to this point.

Section 190.4: Coding for localization: creating default strings and resources

The first step in coding for localization is to create default resources. This step is so implicit that many developers do not even think about it. However, creating default resources is important because, if the device runs an unsupported locale, it will load all of its resources from the default folders. If even one of the resources is missing from the default folders, the app will simply crash.

The default set of strings should be put in the following folder at the specified location:

res/values/strings.xml
The default set of strings should be put in the following folder at the specied location: res/values/strings.xml GoalKicker.com Android Notes for Professionals 937 This le should contain the strings in the language that majority users of the app are expected to speak. Also, default resources for the app should be placed at the following folders and locations : res/drawable/ res/layout/ If your app requires folders like anim, or xml, the default resources should be added to the following folders and locations: res/anim/ res/xml/ res/raw/ Section 190.5: Coding for localization : Providing alternative strings To provide translations in other languages (locales), we need to create a strings.xml in a separate folder by the following convention : res/values-<locale>/strings.xml An example for the same is given below: In this example, we have default English strings in the le res/values/strings.xml, French translations are provided in the folder res/values-fr/strings.xml and Japanese translations are provided in the folder res/values-ja/strings.xml Other translations for other locales can similarly be added to the app. A complete list of locale codes can be found here : ISO 639 codes Non-translatable Strings: Your project may have certain strings that are not to be translated. Strings which are used as keys for SharedPreferences or strings which are used as symbols, fall in this category. These strings should be stored only in the default strings.xml and should be marked with a translatable="false" attribute. e.g. <string name="pref_widget_display_label_hot">Hot News</string> <string name="pref_widget_display_key" translatable="false">widget_display</string> <string name="pref_widget_display_hot" translatable="false">0</string> GoalKicker.com Android Notes for Professionals 938 This attribute is important because translations are often carried out by professionals who are bilingual. This would allow these persons involved in translations to identify strings which are not to be translated, thus saving time and money. Section 190.6: Coding for localization : Providing alternate layouts Creating language specic layouts is often unnecessary if you have specied the correct start/end notation, as described in the earlier example. However, there may be situations where the defaults layouts may not work correctly for certain languages. Sometimes, left-to-right layouts may not translate for RTL languages. It is necessary to provide the correct layouts in such cases. To provide complete optimization for RTL layouts, we can use entirely separate layout les using the ldrtl resource qualier (ldrtl stands for layout-direction-right-to-left}). For example, we can save your default layout les in res/layout/ and our RTL optimized layouts in res/layout-ldrtl/. The ldrtl qualier is great for drawable resources, so that you can provide graphics that are oriented in the direction corresponding to the reading direction. Here is a great post which describes the precedence of the ldrtl layouts : Language specic layouts GoalKicker.com Android Notes for Professionals 939 Chapter 191: Fast way to setup Retrolambda on an android project. Retrolambda is a library which allows to use Java 8 lambda expressions, method references and try-with-resources statements on Java 7, 6 or 5. The Gradle Retrolambda Plug-in allows to integrate Retrolambda into a Gradle based build. This allows for example to use these constructs in an Android application, as standard Android development currently does not yet support Java 8. 
Section 191.1: Setup and example how to use:

Setup Steps:

1. Download and install JDK 8.

2. Add the following to your project's main build.gradle:

buildscript {
    repositories {
        mavenCentral()
    }

    dependencies {
        classpath 'me.tatarka:gradle-retrolambda:3.2.3'
    }
}

3. Now add this to your application module's build.gradle:

apply plugin: 'com.android.application' // or apply plugin: 'java'
apply plugin: 'me.tatarka.retrolambda'

4. Add these lines to your application module's build.gradle to inform the IDE of the language level:

android {
    compileOptions {
        sourceCompatibility JavaVersion.VERSION_1_8
        targetCompatibility JavaVersion.VERSION_1_8
    }
}

Example:

So things like this:

button.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View v) {
        log("Clicked");
    }
});

Become this:

button.setOnClickListener(v -> log("Clicked"));

Chapter 192: How to use SparseArray

A SparseArray is an alternative to a Map. A Map requires its keys to be objects. The phenomenon of autoboxing occurs when we want to use a primitive int value as a key: the compiler automatically converts primitive values to their boxed types (e.g. int to Integer). The difference in memory footprint is noticeable: an int uses 4 bytes, an Integer uses 16 bytes. A SparseArray uses int values as keys.

Section 192.1: Basic example using SparseArray

class Person {
    String name;

    public Person(String name) {
        this.name = name;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;

        Person person = (Person) o;

        return name != null ? name.equals(person.name) : person.name == null;
    }

    @Override
    public int hashCode() {
        return name != null ? name.hashCode() : 0;
    }

    @Override
    public String toString() {
        return "Person{" +
                "name='" + name + '\'' +
                '}';
    }
}

final Person steve = new Person("Steve");
Person[] persons = new Person[] { new Person("John"), new Person("Gwen"), steve, new Person("Rob") };
int[] identifiers = new int[] {1234, 2345, 3456, 4567};

final SparseArray<Person> demo = new SparseArray<>();

// Mapping persons to identifiers.
for (int i = 0; i < persons.length; i++) {
    demo.put(identifiers[i], persons[i]);
}

// Find the person with identifier 1234.
Person id1234 = demo.get(1234); // Returns John.

// Find the person with identifier 6410.
Person id6410 = demo.get(6410); // Returns null.

// Find the person at index 3.
Person third = demo.valueAt(3); // Returns Rob.

// Find the person at index 42.
//Person fortysecond = demo.valueAt(42); // Throws ArrayIndexOutOfBoundsException.

// Remove the last person.
demo.removeAt(demo.size() - 1); // Rob removed.

// Remove the person with identifier 1234.
demo.delete(1234); // John removed.

// Find the index of Steve.
int indexOfSteve = demo.indexOfValue(steve);

// Find the identifier of Steve.
int identifierOfSteve = demo.keyAt(indexOfSteve);

Chapter 193: Shared Element Transitions

Here you find examples for transitions between Activities or Fragments using a shared element. An example of this behaviour is the Google Play Store app, which translates an app's icon from the list to the app's details view.

Section 193.1: Shared Element Transition between two Fragments

In this example, one of two different ImageViews should be translated from the ChooserFragment to the DetailFragment.
In the ChooserFragment layout we need the unique transitionName attributes:

<ImageView
    android:id="@+id/image_first"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:src="@drawable/ic_first"
    android:transitionName="firstImage" />

<ImageView
    android:id="@+id/image_second"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:src="@drawable/ic_second"
    android:transitionName="secondImage" />

In the ChooserFragment's class, we need to pass the View that was clicked and an ID to the parent Activity, which is handling the replacement of the fragments (we need the ID to know which image resource to show in the DetailFragment). How to pass information to a parent activity in detail is surely covered in another documentation.

view.findViewById(R.id.image_first).setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View view) {
        if (mCallback != null) {
            mCallback.showDetailFragment(view, 1);
        }
    }
});

view.findViewById(R.id.image_second).setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View view) {
        if (mCallback != null) {
            mCallback.showDetailFragment(view, 2);
        }
    }
});

In the DetailFragment, the ImageView of the shared element also needs the unique transitionName attribute:

<ImageView
    android:id="@+id/image_shared"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:layout_gravity="center"
    android:transitionName="sharedImage" />

In the onCreateView() method of the DetailFragment, we have to decide which image resource should be shown (if we don't do that, the shared element will disappear after the transition).

public static DetailFragment newInstance(Bundle args) {
    DetailFragment fragment = new DetailFragment();
    fragment.setArguments(args);
    return fragment;
}

@Nullable
@Override
public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) {
    super.onCreateView(inflater, container, savedInstanceState);
    View view = inflater.inflate(R.layout.fragment_detail, container, false);

    ImageView sharedImage = (ImageView) view.findViewById(R.id.image_shared);

    // Check which resource should be shown.
    int type = getArguments().getInt("type");

    // Show image based on the type.
    switch (type) {
        case 1:
            sharedImage.setBackgroundResource(R.drawable.ic_first);
            break;
        case 2:
            sharedImage.setBackgroundResource(R.drawable.ic_second);
            break;
    }

    return view;
}

The parent Activity receives the callbacks and handles the replacement of the fragments:

@Override
public void showDetailFragment(View sharedElement, int type) {
    // Get the chooser fragment, which is shown at the moment.
    Fragment chooserFragment = getFragmentManager().findFragmentById(R.id.fragment_container);

    // Set up the DetailFragment and put the type as argument.
    Bundle args = new Bundle();
    args.putInt("type", type);
    Fragment fragment = DetailFragment.newInstance(args);

    // Set up the transaction.
    FragmentTransaction transaction = getFragmentManager().beginTransaction();

    // Define the shared element transition.
    fragment.setSharedElementEnterTransition(new DetailsTransition());
    fragment.setSharedElementReturnTransition(new DetailsTransition());

    // The rest of the views are just fading in/out.
    fragment.setEnterTransition(new Fade());
    chooserFragment.setExitTransition(new Fade());

    // Now use the image's view and the target transitionName to define the shared element.
    transaction.addSharedElement(sharedElement, "sharedImage");

    // Replace the fragment.
    transaction.replace(R.id.fragment_container, fragment, fragment.getClass().getSimpleName());

    // Enable back navigation with shared element transitions.
    transaction.addToBackStack(fragment.getClass().getSimpleName());

    // Finally press play.
    transaction.commit();
}

Not to forget - the Transition itself. This example moves and scales the shared element.

@TargetApi(Build.VERSION_CODES.LOLLIPOP)
public class DetailsTransition extends TransitionSet {

    public DetailsTransition() {
        setOrdering(ORDERING_TOGETHER);
        addTransition(new ChangeBounds()).
                addTransition(new ChangeTransform()).
                addTransition(new ChangeImageTransform());
    }
}

Chapter 194: Android Things

Section 194.1: Controlling a Servo Motor

This example assumes you have a servo with the following characteristics, which happen to be typical:

movement between 0 and 180 degrees
pulse period of 20 ms
minimum pulse length of 0.5 ms
maximum pulse length of 2.5 ms

You need to check whether those values match your hardware, since forcing it to go outside its specified operating range can damage the servo. A damaged servo in turn has the potential to damage your Android Things device. The example ServoController class consists of two methods, setup() and setPosition():

public class ServoController {
    private double periodMs, maxTimeMs, minTimeMs;
    private Pwm pin;

    public void setup(String pinName) throws IOException {
        periodMs = 20;
        maxTimeMs = 2.5;
        minTimeMs = 0.5;

        PeripheralManagerService service = new PeripheralManagerService();
        pin = service.openPwm(pinName);

        pin.setPwmFrequencyHz(1000.0d / periodMs);
        setPosition(90);
        pin.setEnabled(true);
    }

    public void setPosition(double degrees) {
        double pulseLengthMs = (degrees / 180.0 * (maxTimeMs - minTimeMs)) + minTimeMs;

        if (pulseLengthMs < minTimeMs) {
            pulseLengthMs = minTimeMs;
        } else if (pulseLengthMs > maxTimeMs) {
            pulseLengthMs = maxTimeMs;
        }

        double dutyCycle = pulseLengthMs / periodMs * 100.0;

        Log.i(TAG, "Duty cycle = " + dutyCycle + " pulse length = " + pulseLengthMs);

        try {
            pin.setPwmDutyCycle(dutyCycle);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

You can discover pin names that support PWM on your device as follows:

PeripheralManagerService service = new PeripheralManagerService();
for (String pinName : service.getPwmList()) {
    Log.i("ServoControlled", "Pwm pin found: " + pinName);
}

In order to make your servo swing forever between 80 degrees and 100 degrees, you can simply use the following code (note that the controller is created with its no-argument constructor and then initialized via setup()):

final ServoController servoController = new ServoController();
servoController.setup(pinName); // throws IOException

Thread th = new Thread(new Runnable() {
    @Override
    public void run() {
        while (true) {
            try {
                servoController.setPosition(80);
                Thread.sleep(500);
                servoController.setPosition(100);
                Thread.sleep(500);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
});
th.start();

You can compile and deploy all of the above code without actually hooking any servo motors to the computing device. For the wiring, refer to your computing device's pinout chart (e.g. a Raspberry Pi 3 pinout chart is available here). Then you need to hook your servo to Vcc, Gnd, and signal.

Chapter 195: Library Dagger 2: Dependency Injection in Applications

Dagger 2, as explained on GitHub, is a compile-time evolution approach to dependency injection.
Taking the approach started in Dagger 1.x to its ultimate conclusion, Dagger 2.x eliminates all reflection, and improves code clarity by removing the traditional ObjectGraph/Injector in favor of user-specified @Component interfaces.

Section 195.1: Create a @Module class and a @Singleton annotation for an Object

import javax.inject.Singleton;
import dagger.Module;
import dagger.Provides;

@Module
public class VehicleModule {

    @Provides @Singleton
    Motor provideMotor() {
        return new Motor();
    }

    @Provides @Singleton
    Vehicle provideVehicle() {
        return new Vehicle(new Motor());
    }
}

Every provider (or method) must have the @Provides annotation and the class must have the @Module annotation. The @Singleton annotation indicates that there will be only one instance of the object.

Section 195.2: Request Dependencies in Dependent Objects

Now that you have the providers for your different models, you need to request them. Just as Vehicle needs Motor, you have to add the @Inject annotation in the Vehicle constructor as follows:

@Inject
public Vehicle(Motor motor) {
    this.motor = motor;
}

You can use the @Inject annotation to request dependencies in the constructor, fields, or methods. In this example, I'm keeping the injection in the constructor.

Section 195.3: Connecting @Modules with @Inject

The connection between the provider of dependencies, @Module, and the classes requesting them through @Inject is made using @Component, which is an interface:

import javax.inject.Singleton;
import dagger.Component;

@Singleton
@Component(modules = {VehicleModule.class})
public interface VehicleComponent {
    Vehicle provideVehicle();
}

For the @Component annotation, you have to specify which modules are going to be used. In this example VehicleModule is used, which is defined in this example. If you need to use more modules, then just add them using a comma as a separator.

Section 195.4: Using the @Component Interface to Obtain Objects

Now that you have every connection ready, you have to obtain an instance of this interface and invoke its methods to obtain the object you need:

VehicleComponent component = DaggerVehicleComponent.builder().vehicleModule(new VehicleModule()).build();
vehicle = component.provideVehicle();
Toast.makeText(this, String.valueOf(vehicle.getSpeed()), Toast.LENGTH_SHORT).show();

When you try to create a new object of the interface with the @Component annotation, you have to do it using the prefix Dagger<NameOfTheComponentInterface>, in this case DaggerVehicleComponent (early pre-release versions of Dagger 2 generated the name with a Dagger_ prefix instead), and then use the builder method to call every module within.

Chapter 196: JCodec

Section 196.1: Getting Started

You can get JCodec automatically with Maven. For this, just add the below snippet to your pom.xml.
<dependency>
    <groupId>org.jcodec</groupId>
    <artifactId>jcodec-javase</artifactId>
    <version>0.1.9</version>
</dependency>

Section 196.2: Getting a frame from a movie

Getting a single frame from a movie (supports only AVC, H.264 in MP4, ISO BMF, Quicktime container):

int frameNumber = 150;
BufferedImage frame = FrameGrab.getFrame(new File("filename.mp4"), frameNumber);
ImageIO.write(frame, "png", new File("frame_150.png"));

Getting a sequence of frames from a movie (supports only AVC, H.264 in MP4, ISO BMF, Quicktime container):

double startSec = 51.632;
FileChannelWrapper ch = null;
try {
    ch = NIOUtils.readableFileChannel(new File("filename.mp4"));
    FrameGrab grab = new FrameGrab(ch);
    grab.seek(startSec);
    for (int i = 0; i < 100; i++) {
        ImageIO.write(grab.getFrame(), "png",
                new File(System.getProperty("user.home"), String.format("Desktop/frame_%08d.png", i)));
    }
} finally {
    NIOUtils.closeQuietly(ch);
}

Chapter 197: Formatting phone numbers with patterns

This example shows you how to format phone numbers with a pattern.

You will need the following library in your Gradle file:

compile 'com.googlecode.libphonenumber:libphonenumber:7.2.2'

Section 197.1: Pattern: +1 (786) 1234 5678

Given a normalized phone number like +178612345678, we will get a formatted number with the provided pattern.

private String getFormattedNumber(String phoneNumber) {
    PhoneNumberUtil phoneNumberUtil = PhoneNumberUtil.getInstance();
    Phonemetadata.NumberFormat numberFormat = new Phonemetadata.NumberFormat();
    numberFormat.pattern = "(\\d{3})(\\d{3})(\\d{4})";
    numberFormat.format = "($1) $2-$3";

    List<Phonemetadata.NumberFormat> newNumberFormats = new ArrayList<>();
    newNumberFormats.add(numberFormat);

    Phonenumber.PhoneNumber phoneNumberPN = null;
    try {
        phoneNumberPN = phoneNumberUtil.parse(phoneNumber, Locale.US.getCountry());
        phoneNumber = phoneNumberUtil.formatByPattern(phoneNumberPN,
                PhoneNumberUtil.PhoneNumberFormat.INTERNATIONAL, newNumberFormats);
    } catch (NumberParseException e) {
        e.printStackTrace();
    }
    return phoneNumber;
}

Chapter 198: Paint

A Paint is one of the four objects needed to draw, along with a Canvas (holds drawing calls), a Bitmap (holds the pixels), and a drawing primitive (Rect, Path, Bitmap, ...).

Section 198.1: Creating a Paint

You can create a new paint with one of these 3 constructors:

new Paint(): create with default settings
new Paint(int flags): create with flags
new Paint(Paint from): copy settings from another paint

It is generally suggested to never create a paint object, or any other object, in onDraw(), as it can lead to performance issues. (Android Studio will probably warn you.) Instead, make it global and initialize it in your class constructor like so:

public class CustomView extends View {

    private Paint paint;

    public CustomView(Context context) {
        super(context);
        paint = new Paint();
        //...
    }

    @Override
    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        paint.setColor(0xFF000000);
        // ...
    }
}

Section 198.2: Setting up Paint for text

Text drawing settings:

setTypeface(Typeface typeface): set the font face. See Typeface.
setTextSize(int size): set the font size, in pixels.
setColor(int color): set the paint drawing color, including the text color. You can also use setARGB(int a, int r, int g, int b) and setAlpha(int alpha).
setLetterSpacing(float size): set the spacing between characters, in ems. The default value is 0; a negative value will tighten the text, while a positive one will expand it.
setTextAlign(Paint.Align align): set the text alignment relative to its origin. Paint.Align.LEFT will draw it to the right of the origin, RIGHT will draw it to the left, and CENTER will draw it centered on the origin (horizontally).
setTextSkewX(float skewX): this could be considered as fake italic. skewX represents the horizontal offset of the text bottom (use -0.25 for italic).
setStyle(Paint.Style style): fill text (FILL), stroke text (STROKE), or both (FILL_AND_STROKE).

Note that you can use TypedValue.applyDimension(TypedValue.COMPLEX_UNIT_SP, size, getResources().getDisplayMetrics()) to convert from SP or DP to pixels.

Measuring text:

float width = paint.measureText(String text): measure the width of the text.
float height = paint.ascent(): measure the height of the text.
paint.getTextBounds(String text, int start, int end, Rect bounds): stores the text dimensions. You have to allocate the Rect; it cannot be null:

String text = "<NAME>!";
Rect bounds = new Rect();
paint.getTextBounds(text, 0, text.length(), bounds);

There are other methods for measuring, however these three should fit most purposes.

Section 198.3: Setting up Paint for drawing shapes

setStyle(Paint.Style style): filled shape (FILL), stroke shape (STROKE), or both (FILL_AND_STROKE).
setColor(int color): set the paint drawing color. You can also use setARGB(int a, int r, int g, int b) and setAlpha(int alpha).
setStrokeCap(Paint.Cap cap): set line caps, either ROUND, SQUARE, or BUTT (none). See this.
setStrokeJoin(Paint.Join join): set line joins, either MITER (pointy), ROUND, or BEVEL. See this.
setStrokeMiter(float miter): set the miter join limit. This can prevent a miter join from going on indefinitely, turning it into a bevel join after x pixels. See this.
setStrokeWidth(float width): set the stroke width. 0 will draw in hairline mode, independent of the canvas matrix (always 1 pixel).

Section 198.4: Setting flags

You can set the following flags in the constructor, or with setFlags(int flags):

Paint.ANTI_ALIAS_FLAG: enable antialiasing, smooths the drawing.
Paint.DITHER_FLAG: enable dithering, applied when the color precision is higher than the device's.
Paint.EMBEDDED_BITMAP_TEXT_FLAG: enables the use of bitmap fonts.
Paint.FAKE_BOLD_TEXT_FLAG: will draw text with a fake bold effect; can be used instead of using a bold typeface. Some fonts have a styled bold; fake bold won't.
Paint.FILTER_BITMAP_FLAG: affects the sampling of bitmaps when transformed.
Paint.HINTING_OFF, Paint.HINTING_ON: toggles font hinting, see this.
Paint.LINEAR_TEXT_FLAG: disables font scaling; draw operations are scaled instead.
Paint.SUBPIXEL_TEXT_FLAG: text will be computed using subpixel accuracy.
Paint.STRIKE_THRU_TEXT_FLAG: text drawn will be struck through.
Paint.UNDERLINE_TEXT_FLAG: text drawn will be underlined.

You can add a flag and remove a flag like this:

Paint paint = new Paint();
paint.setFlags(paint.getFlags() | Paint.FLAG);   // Add flag
paint.setFlags(paint.getFlags() & ~Paint.FLAG);  // Remove flag

Trying to remove a flag that isn't there or adding a flag that is already there won't change anything. Also note that most flags can also be set using set<Flag>(boolean enabled), for example setAntiAlias(true).

You can use paint.reset() to reset the paint to its default settings. The only default flag is EMBEDDED_BITMAP_TEXT_FLAG; it will be set even if you use new Paint(0), and you will have to remove it explicitly if you do not want it.
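To tie the settings above together, here is a minimal sketch of an onDraw() that draws antialiased, horizontally centered text, reusing the global paint field from Section 198.1 (the text and sizes are placeholders):

@Override
protected void onDraw(Canvas canvas) {
    super.onDraw(canvas);
    paint.setAntiAlias(true);                 // same effect as adding Paint.ANTI_ALIAS_FLAG
    paint.setColor(0xFF000000);
    paint.setTextSize(48);                    // in pixels
    paint.setTextAlign(Paint.Align.CENTER);   // center the text on the x coordinate below
    // x is the horizontal center of the view; y is the text baseline
    canvas.drawText("Hello Paint!", getWidth() / 2f, getHeight() / 2f, paint);
}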
It detects and removes unused classes, fields, methods, and attributes.
It optimizes bytecode and removes unused instructions.
It renames the remaining classes, fields, and methods using short meaningless names.

Section 199.1: Shrink your code and resources with ProGuard

To make your APK file as small as possible, you should enable shrinking to remove unused code and resources in your release build. This page describes how to do that and how to specify what code and resources to keep or discard during the build.

Code shrinking is available with ProGuard, which detects and removes unused classes, fields, methods, and attributes from your packaged app, including those from included code libraries (making it a valuable tool for working around the 64k reference limit). ProGuard also optimizes the bytecode, removes unused code instructions, and obfuscates the remaining classes, fields, and methods with short names. The obfuscated code makes your APK difficult to reverse engineer, which is especially valuable when your app uses security-sensitive features, such as licensing verification.

Resource shrinking is available with the Android plugin for Gradle, which removes unused resources from your packaged app, including unused resources in code libraries. It works in conjunction with code shrinking such that once unused code has been removed, any resources no longer referenced can be safely removed as well.

Shrink Your Code

To enable code shrinking with ProGuard, add minifyEnabled true to the appropriate build type in your build.gradle file.

Be aware that code shrinking slows down the build time, so you should avoid using it on your debug build if possible. However, it's important that you do enable code shrinking on the final APK used for testing, because it might introduce bugs if you do not sufficiently customize which code to keep.

For example, the following snippet from a build.gradle file enables code shrinking for the release build:

android {
    buildTypes {
        release {
            minifyEnabled true
            proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
        }
    }
    ...
}

In addition to the minifyEnabled property, the proguardFiles property defines the ProGuard rules:

The getDefaultProguardFile('proguard-android.txt') method gets the default ProGuard settings from the Android SDK tools/proguard/ folder.

Tip: For even more code shrinking, try the proguard-android-optimize.txt file that's in the same location. It includes the same ProGuard rules, but with other optimizations that perform analysis at the bytecode level (inside and across methods) to reduce your APK size further and help it run faster.

The proguard-rules.pro file is where you can add custom ProGuard rules. By default, this file is located at the root of the module (next to the build.gradle file).

To add more ProGuard rules that are specific to each build variant, add another proguardFiles property in the corresponding productFlavor block. For example, the following Gradle file adds flavor2-rules.pro to the flavor2 product flavor. Now flavor2 uses all three ProGuard rule files, because those from the release block are also applied.

android {
    ...
    buildTypes {
        release {
            minifyEnabled true
            proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
        }
    }
    productFlavors {
        flavor1 {
        }
        flavor2 {
            proguardFile 'flavor2-rules.pro'
        }
    }
}
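As a reference for what such custom rules can look like, here are a few common proguard-rules.pro entries (a sketch added for illustration; the package name is a placeholder):

# Keep model classes that are accessed via reflection (e.g. by a JSON library)
-keep class com.example.myapp.model.** { *; }

# Keep annotations so libraries that rely on them keep working
-keepattributes *Annotation*

# Write the obfuscation mapping to a file, useful for de-obfuscating stack traces later
-printmapping mapping.txt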
Chapter 200: Create Android Custom ROMs

Section 200.1: Making Your Machine Ready for Building!

Before you can build anything, you are required to make your machine ready for building. For this you need to install a lot of libraries and modules. The most recommended Linux distribution is Ubuntu, so this example will focus on installing everything that is needed on Ubuntu.

Installing Java

First, add the following Personal Package Archive (PPA): sudo apt-add-repository ppa:openjdk-r/ppa. Then, update the sources by executing: sudo apt-get update.

Installing Additional Dependencies

All required additional dependencies can be installed by the following command:

sudo apt-get install git-core python gnupg flex bison gperf libsdl1.2-dev libesd0-dev libwxgtk2.8-dev squashfs-tools build-essential zip curl libncurses5-dev zlib1g-dev openjdk-8-jre openjdk-8-jdk pngcrush schedtool libxml2 libxml2-utils xsltproc lzop libc6-dev schedtool g++-multilib lib32z1-dev lib32ncurses5-dev gcc-multilib liblz4-* pngquant ncurses-dev texinfo gcc gperf patch libtool automake g++ gawk subversion expat libexpat1-dev python-all-dev binutils-static bc libcloog-isl-dev libcap-dev autoconf libgmp-dev build-essential gcc-multilib g++-multilib pkg-config libmpc-dev libmpfr-dev lzma* liblzma* w3m android-tools-adb maven ncftp figlet

Preparing the system for development

Now that all the dependencies are installed, let us prepare the system for development by executing:

sudo curl --create-dirs -L -o /etc/udev/rules.d/51-android.rules -O -L https://raw.githubusercontent.com/snowdream/51-android/master/51-android.rules
sudo chmod 644 /etc/udev/rules.d/51-android.rules
sudo chown root /etc/udev/rules.d/51-android.rules
sudo service udev restart
adb kill-server
sudo killall adb

Finally, let us set up the cache and the repo tool with the following commands:

sudo install utils/repo /usr/bin/
sudo install utils/ccache /usr/bin/

Please note: We can also achieve this setup by running the automated scripts made by <NAME> (akhilnarang), one of the maintainers of Resurrection Remix OS. These scripts can be found on GitHub.

Chapter 201: Genymotion for Android

Genymotion is a fast third-party emulator that can be used instead of the default Android emulator. In some cases it's as good as or better than developing on actual devices!

Section 201.1: Installing Genymotion, the free version

Step 1 - Installing VirtualBox
Download and install VirtualBox according to your operating system; it is required to run Genymotion.

Step 2 - Downloading Genymotion
Go to the Genymotion download page and download Genymotion according to your operating system. Note: you will need to create a new account OR log in with your account.

Step 3 - Installing Genymotion
If on Linux, refer to this answer to install and run a .bin file.

Step 4 - Installing Genymotion's emulators
Run Genymotion. Press the Add button (in the top bar). Log in with your account and you will be able to browse the available emulators. Select and install what you need.

Step 5 - Integrating Genymotion with Android Studio
Genymotion can be integrated with Android Studio via a plugin. Here are the steps to install it in Android Studio:

Go to File > Settings (for Windows and Linux) or to Android Studio > Preferences (for Mac OS X).
Select Plugins and click Browse Repositories.
Right-click on Genymotion and click Download and install.

You should now be able to see the plugin icon. Note: you might want to display the toolbar by clicking View > Toolbar.
Step 6 - Running Genymotion from Android Studio
Go to File > Settings (for Windows and Linux) or to Android Studio > Preferences (for Mac OS X). Go to Other Settings > Genymotion, add the path of Genymotion's folder and apply your changes.

Now you should be able to run Genymotion's emulator by pressing the plugin icon, selecting an installed emulator, and then pressing the start button!

Section 201.2: Google framework on Genymotion

If developers want to test Google Maps or any other Google service like Gmail, YouTube, Google Drive, etc., then they first need to install the Google framework on Genymotion. Downloads are available for the following versions:

4.4 Kitkat
5.0 Lollipop
5.1 Lollipop
6.0 Marshmallow
7.0 Nougat
7.1 Nougat (webview patch)

1. Download the zip for your Android version from the links above.
2. Just drag & drop the downloaded zip file onto the Genymotion emulator and restart it.
3. Add a Google account, download "Google Play Music" and run it.

Reference: Stack Overflow question on this topic.

Chapter 202: ConstraintSet

This class allows you to define programmatically a set of constraints to be used with ConstraintLayout. It lets you create and save constraints, and apply them to an existing ConstraintLayout.

Section 202.1: ConstraintSet with ConstraintLayout Programmatically

import android.content.Context;
import android.os.Bundle;
import android.support.constraint.ConstraintLayout;
import android.support.constraint.ConstraintSet;
import android.support.transition.TransitionManager;
import android.support.v7.app.AppCompatActivity;
import android.view.View;

public class MainActivity extends AppCompatActivity {
    ConstraintSet mConstraintSet1 = new ConstraintSet(); // create a ConstraintSet
    ConstraintSet mConstraintSet2 = new ConstraintSet(); // create a ConstraintSet
    ConstraintLayout mConstraintLayout; // cache the ConstraintLayout
    boolean mOld = true;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        Context context = this;
        mConstraintSet2.clone(context, R.layout.state2); // get constraints from layout
        setContentView(R.layout.state1);
        mConstraintLayout = (ConstraintLayout) findViewById(R.id.activity_main);
        mConstraintSet1.clone(mConstraintLayout); // get constraints from ConstraintLayout
    }

    public void foo(View view) {
        TransitionManager.beginDelayedTransition(mConstraintLayout);
        if (mOld = !mOld) { // toggles mOld and branches on the new value
            mConstraintSet1.applyTo(mConstraintLayout); // set new constraints
        } else {
            mConstraintSet2.applyTo(mConstraintLayout); // set new constraints
        }
    }
}

Chapter 203: CleverTap

Quick hacks for the analytics and engagement SDK provided by CleverTap - Android.

Section 203.1: Setting the debug level

In your custom Application class, override the onCreate() method and add the line below:

CleverTapAPI.setDebugLevel(1);

Section 203.2: Get an instance of the SDK to record events

CleverTapAPI cleverTap;
try {
    cleverTap = CleverTapAPI.getInstance(getApplicationContext());
} catch (CleverTapMetaDataNotFoundException e) {
    // thrown if you haven't specified your CleverTap Account ID or Token in your AndroidManifest.xml
} catch (CleverTapPermissionsNotSatisfied e) {
    // thrown if you haven't requested the required permissions in your AndroidManifest.xml
}

Chapter 204: Publish a library to Maven Repositories

Section 204.1: Publish .aar file to Maven

In order to publish to a repository in Maven format, the maven-publish plugin for Gradle can be used.
The plugin should be added to the build.gradle file in the library module:

apply plugin: 'maven-publish'

You should define the publication and its identity attributes in the build.gradle file too. These identity attributes will be shown in the generated POM file, and they are what you will later use to import this publication. You also need to define which artifacts you want to publish; in this example, only the .aar file generated by building the library is published.

publishing {
    publications {
        myPublication(MavenPublication) {
            groupId 'com.example.project'
            version '1.0.2'
            artifactId 'myProject'
            artifact("$buildDir/outputs/aar/myProject.aar")
        }
    }
}

You will also need to define your repository URL:

publishing {
    repositories {
        maven {
            url "http://www.myrepository.com"
        }
    }
}

Here is the full library build.gradle file:

apply plugin: 'com.android.library'
apply plugin: 'maven-publish'

buildscript {
    ...
}
android {
    ...
}
publishing {
    publications {
        myPublication(MavenPublication) {
            groupId 'com.example.project'
            version '1.0.2'
            artifactId 'myProject'
            artifact("$buildDir/outputs/aar/myProject.aar")
        }
    }
    repositories {
        maven {
            url "http://www.myrepository.com"
        }
    }
}

To publish, you can run the Gradle console command

gradle publish

or run the publish task from the Gradle tasks panel.
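On the consumer side (not shown in the original example, so treat this as an illustrative sketch), a project can then resolve the published artifact by pointing Gradle at the same repository and using the identity attributes defined above:

repositories {
    maven { url "http://www.myrepository.com" }
}

dependencies {
    compile 'com.example.project:myProject:1.0.2'
}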
Chapter 205: adb shell

Parameter  Details
-e         choose escape character, or "none"; default '~'
-n         don't read from stdin
-T         disable PTY allocation
-t         force PTY allocation
-x         disable remote exit codes and stdout/stderr separation

adb shell opens a Linux shell in a target device or emulator. It is the most powerful and versatile way to control an Android device via adb. This topic was split from ADB (Android Debug Bridge) due to reaching the limit of examples, many of which involved the adb shell command.

Section 205.1: Granting & revoking API 23+ permissions

One-liners that help to grant or revoke runtime ("dangerous") permissions.

Granting:
adb shell pm grant <sample.package.id> android.permission.<PERMISSION_NAME>

Revoking:
adb shell pm revoke <sample.package.id> android.permission.<PERMISSION_NAME>

Granting all run-time permissions at a time on installation (-g):
adb install -g /path/to/sample_package.apk

Section 205.2: Send text, key pressed and touch events to Android Device via ADB

Execute the following command to insert text into a view with focus (if it supports text input).

Version ≥ 6.0 - Send text on SDK 23+:
adb shell "input keyboard text 'Paste text on Android Device'"

If already connected to your device via adb:
input text 'Paste text on Android Device'

Version < 6.0 - Send text prior to SDK 23:
adb shell "input keyboard text 'Paste%stext%son%sAndroid%sDevice'"

Spaces are not accepted as input; replace them with %s.

Send events

To simulate pressing the hardware power key:
adb shell input keyevent 26
or alternatively:
adb shell input keyevent POWER

Even if you don't have a hardware key, you can still use a keyevent to perform the equivalent action:
adb shell input keyevent CAMERA

Send a touch event as input:
adb shell input tap Xpoint Ypoint

Send a swipe event as input:
adb shell input swipe Xpoint1 Ypoint1 Xpoint2 Ypoint2 [DURATION*]
*DURATION is optional; the default is 300 ms.

Get the X and Y points by enabling "Pointer location" in the developer options.

ADB sample shell script

To run a script in Ubuntu, create script.sh, right-click the file, add read/write permissions, and tick "Allow executing file as program". Open a terminal emulator and run the command ./script.sh

script.sh:

for (( c=1; c<=5; c++ ))
do
    adb shell input tap X Y
    echo "Clicked $c times"
    sleep 5s
done

For a comprehensive list of event numbers and a shortlist of several interesting events, see the KeyEvent reference documentation: https://developer.android.com/reference/android/view/KeyEvent.html#KEYCODE_POWER.

Section 205.3: List packages

Prints all packages, optionally only those whose package name contains the text in <FILTER>.

adb shell pm list packages [options] <FILTER>

All packages:
adb shell pm list packages

Attributes:
-f  See their associated file.
-i  See the installer for the packages.
-u  Also include uninstalled packages.

Attributes that filter:
-d  For disabled packages.
-e  For enabled packages.
-s  For system packages.
-3  For third-party packages.
--user <USER_ID>  For a specific user space to query.

Section 205.4: Recording the display

Version ≥ 4.4. Recording the display of devices running Android 4.4 (API level 19) and higher:

adb shell screenrecord [options] <filename>
adb shell screenrecord /sdcard/demo.mp4

(press Ctrl-C to stop recording)

Download the file from the device:
adb pull /sdcard/demo.mp4

Note: Stop the screen recording by pressing Ctrl-C; otherwise, the recording stops automatically at three minutes or at the time limit set by --time-limit.

adb shell screenrecord --size <WIDTHxHEIGHT>
Sets the video size, e.g. 1280x720. The default value is the device's native display resolution (if supported), 1280x720 if not. For best results, use a size supported by your device's Advanced Video Coding (AVC) encoder.

adb shell screenrecord --bit-rate <RATE>
Sets the video bit rate for the video, in megabits per second. The default value is 4 Mbps. You can increase the bit rate to improve video quality, but doing so results in larger movie files. The following example sets the recording bit rate to 5 Mbps:
adb shell screenrecord --bit-rate 5000000 /sdcard/demo.mp4

adb shell screenrecord --time-limit <TIME>
Sets the maximum recording time, in seconds. The default and maximum value is 180 (3 minutes).

adb shell screenrecord --rotate
Rotates the output 90 degrees. This feature is experimental.

adb shell screenrecord --verbose
Displays log information on the command-line screen. If you do not set this option, the utility does not display any information while running.

Note: This might not work on some devices.

Version < 4.4: The screen recording command isn't compatible with Android versions before 4.4. The screenrecord command is a shell utility for recording the display of devices running Android 4.4 (API level 19) and higher. The utility records screen activity to an MPEG-4 file.

Section 205.5: Open Developer Options

adb shell am start -n com.android.settings/.DevelopmentSettings

Will navigate your device/emulator to the Developer Options section.

Section 205.6: Set Date/Time via adb

Version ≥ 6.0. The default SET format is MMDDhhmm[[CC]YY][.ss], that is, month, day, hour and minute (two digits each), optionally followed by the year and seconds.

For example, to set July 17th, 10:10 am, without changing the current year, type:
adb shell 'date 07171010.00'

Tip 1: The date change will not be reflected immediately; a noticeable change will happen only after the system clock advances to the next minute.
You can force an update by attaching a TIME_SET intent broadcast to your call, like this:

adb shell 'date 07171010.00 ; am broadcast -a android.intent.action.TIME_SET'

Tip 2: To synchronize Android's clock with your local machine:

Linux:
adb shell date `date +%m%d%H%M%G.%S`

Windows (PowerShell):
$currentDate = Get-Date -Format "MMddHHmmyyyy.ss" # Android's preferred format
adb shell "date $currentDate"

Both tips together:
adb shell 'date `date +%m%d%H%M%G.%S` ; am broadcast -a android.intent.action.TIME_SET'

Version < 6.0. The default SET format is 'YYYYMMDD.HHmmss':

adb shell 'date -s 20160117.095930'

Tip: To synchronize Android's clock with your local (Linux-based) machine:
adb shell date -s `date +%G%m%d.%H%M%S`

Section 205.7: Generating a "Boot Complete" broadcast

This is relevant for apps that implement a BootListener. Test your app by killing it and then running:

adb shell am broadcast -a android.intent.action.BOOT_COMPLETED -c android.intent.category.HOME -n your.app/your.app.BootListener

(replace your.app/your.app.BootListener with the proper values).

Section 205.8: Print application data

This command prints all relevant application data:

version code
version name
granted permissions (Android API 23+)
etc.

adb shell dumpsys package <your.package.id>

Section 205.9: Changing file permissions using the chmod command

Notice that in order to change file permissions your device needs to be rooted; the su binary doesn't come with factory-shipped devices!

General form:
adb shell su -c "chmod <numeric-permission> <file>"

The numeric permission is constructed from user, group and world sections. For example, if you want to change a file to be readable, writable and executable by everyone, this will be your command:

adb shell su -c "chmod 777 <file-path>"

Or

adb shell su -c "chmod 000 <file-path>"

if you intend to deny all permissions to it.

The 1st digit specifies the user permission, the 2nd digit specifies the group permission, and the 3rd digit specifies the world (others) permission.

Access permissions:

--- : binary value 000, octal value 0 (none)
--x : binary value 001, octal value 1 (execute)
-w- : binary value 010, octal value 2 (write)
-wx : binary value 011, octal value 3 (write, execute)
r-- : binary value 100, octal value 4 (read)
r-x : binary value 101, octal value 5 (read, execute)
rw- : binary value 110, octal value 6 (read, write)
rwx : binary value 111, octal value 7 (read, write, execute)

Section 205.10: View external/secondary storage content

View content:
adb shell ls \$EXTERNAL_STORAGE
adb shell ls \$SECONDARY_STORAGE

View path:
adb shell echo \$EXTERNAL_STORAGE
adb shell echo \$SECONDARY_STORAGE

Section 205.11: Kill a process inside an Android device

Sometimes Android's logcat is running endlessly with errors coming from some process not owned by you, draining the battery or just making it hard to debug your code. A convenient way to fix the problem without restarting the device is to locate and kill the process causing the problem.

From logcat:

03-10 11:41:40.010 1550-1627/? E/SomeProcess: ....

Notice the process number: 1550.

Now we can open a shell and kill the process. Note that we cannot kill root processes.
adb shell

Inside the shell we can check more about the process using

ps -x | grep 1550

and kill it if we want:

kill -9 1550

Chapter 206: Ping ICMP

An ICMP ping request can be performed in Android by creating a new process to run the ping command. The outcome of the request can be evaluated upon the completion of the ping request, from within its process.

Section 206.1: Performs a single Ping

This example attempts a single ping request. The ping command inside the runtime.exec method call can be modified to any valid ping command you might perform yourself in the command line.

Runtime runtime = Runtime.getRuntime(); // needed for runtime.exec below
try {
    Process ipProcess = runtime.exec("/system/bin/ping -c 1 8.8.8.8");
    int exitValue = ipProcess.waitFor();
    ipProcess.destroy();

    if (exitValue == 0) {
        // Success
    } else {
        // Failure
    }
} catch (IOException | InterruptedException e) {
    e.printStackTrace();
}

Chapter 207: AIDL

AIDL is the Android Interface Definition Language.

What? Why? How?

What? It is used for bound services. An AIDL service stays active as long as at least one client is bound to it. It works based on the marshalling and unmarshalling concept.

Why? Remote applications can access your service, and requests are handled with multi-threading.

How?
Create the .aidl file
Implement the interface
Expose the interface to clients

Section 207.1: AIDL Service

ICalculator.aidl

// Declare any non-default types here with import statements
interface ICalculator {
    int add(int x, int y);
    int sub(int x, int y);
}

AidlService.java

public class AidlService extends Service {

    private static final String TAG = "AIDLServiceLogs";
    private static final String className = " AidlService";

    public AidlService() {
        Log.i(TAG, className + " Constructor");
    }

    @Override
    public IBinder onBind(Intent intent) {
        // Return the communication channel to the service.
        Log.i(TAG, className + " onBind");
        return iCalculator.asBinder();
    }

    @Override
    public void onCreate() {
        super.onCreate();
        Log.i(TAG, className + " onCreate");
    }

    @Override
    public void onDestroy() {
        super.onDestroy();
        Log.i(TAG, className + " onDestroy");
    }

    ICalculator.Stub iCalculator = new ICalculator.Stub() {
        @Override
        public int add(int x, int y) throws RemoteException {
            Log.i(TAG, className + " add Thread Name: " + Thread.currentThread().getName());
            int z = x + y;
            return z;
        }

        @Override
        public int sub(int x, int y) throws RemoteException {
            Log.i(TAG, className + " sub Thread Name: " + Thread.currentThread().getName());
            int z = x - y;
            return z;
        }
    };
}

Service Connection

// Return the stub as interface
ServiceConnection serviceConnection = new ServiceConnection() {
    @Override
    public void onServiceConnected(ComponentName name, IBinder service) {
        Log.i(TAG, className + " onServiceConnected");
        iCalculator = ICalculator.Stub.asInterface(service);
    }

    @Override
    public void onServiceDisconnected(ComponentName name) {
        unbindService(serviceConnection);
    }
};
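The example above shows the ServiceConnection, but not the call that triggers it. As an illustrative sketch (the package and class names are assumptions; on API 21+ a remote service must be addressed with an explicit intent):

// In the client, bind to the remote service:
Intent intent = new Intent();
intent.setClassName("com.example.aidlserver", "com.example.aidlserver.AidlService");
bindService(intent, serviceConnection, Context.BIND_AUTO_CREATE);

// Later, once onServiceConnected() has delivered the stub:
try {
    int sum = iCalculator.add(2, 3);
} catch (RemoteException e) {
    e.printStackTrace();
}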
Chapter 208: Android game development

A short introduction to creating a game on the Android platform using Java.

Section 208.1: Game using Canvas and SurfaceView

This covers how you can create a basic 2D game using SurfaceView.

First, we need an activity:

public class GameLauncher extends AppCompatActivity {

    private Game game;

    @Override
    public void onCreate(Bundle sis) {
        super.onCreate(sis);
        game = new Game(GameLauncher.this); // Initialize the game instance
        setContentView(game); // setContentView to the game SurfaceView
        // Custom XML files can also be used, and then retrieve the game instance using findViewById.
    }
}

The activity also has to be declared in the Android Manifest.

Now for the game itself. First, we start by implementing a game thread:

public class Game extends SurfaceView implements SurfaceHolder.Callback, Runnable {

    /**
     * Holds the surface frame
     */
    private SurfaceHolder holder;

    /**
     * Draw thread
     */
    private Thread drawThread;

    /**
     * True when the surface is ready to draw
     */
    private boolean surfaceReady = false;

    /**
     * Drawing thread flag
     */
    private boolean drawingActive = false;

    /**
     * Time per frame for 60 FPS
     */
    private static final int MAX_FRAME_TIME = (int) (1000.0 / 60.0);

    private static final String LOGTAG = "surface";

    /*
     * All the constructors are overridden to ensure functionality if one of the different
     * constructors is used through an XML file or programmatically
     */
    public Game(Context context) {
        super(context);
        init();
    }
    public Game(Context context, AttributeSet attrs) {
        super(context, attrs);
        init();
    }
    public Game(Context context, AttributeSet attrs, int defStyleAttr) {
        super(context, attrs, defStyleAttr);
        init();
    }
    @TargetApi(21)
    public Game(Context context, AttributeSet attrs, int defStyleAttr, int defStyleRes) {
        super(context, attrs, defStyleAttr, defStyleRes);
        init();
    }

    public void init() {
        getHolder().addCallback(this);
        setFocusable(true);
        // Initialize other stuff here later
    }

    public void render(Canvas c) {
        // Game rendering here
    }

    public void tick() {
        // Game logic here
    }

    @Override
    public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
        if (width == 0 || height == 0) {
            return;
        }
        // resize your UI
    }

    @Override
    public void surfaceCreated(SurfaceHolder holder) {
        this.holder = holder;

        if (drawThread != null) {
            Log.d(LOGTAG, "draw thread still active..");
            drawingActive = false;
            try {
                drawThread.join();
            } catch (InterruptedException e) {}
        }

        surfaceReady = true;
        startDrawThread();
        Log.d(LOGTAG, "Created");
    }

    @Override
    public void surfaceDestroyed(SurfaceHolder holder) {
        // Surface is not used anymore - stop the drawing thread
        stopDrawThread();
        // and release the surface
        holder.getSurface().release();

        this.holder = null;
        surfaceReady = false;
        Log.d(LOGTAG, "Destroyed");
    }

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        // Handle touch events
        return true;
    }

    /**
     * Stops the drawing thread
     */
    public void stopDrawThread() {
        if (drawThread == null) {
            Log.d(LOGTAG, "DrawThread is null");
            return;
        }
        drawingActive = false;
        while (true) {
            try {
                Log.d(LOGTAG, "Request last frame");
                drawThread.join(5000);
                break;
            } catch (Exception e) {
                Log.e(LOGTAG, "Could not join with draw thread");
            }
        }
        drawThread = null;
    }

    /**
     * Creates a new draw thread and starts it.
     */
    public void startDrawThread() {
        if (surfaceReady && drawThread == null) {
            drawThread = new Thread(this, "Draw thread");
            drawingActive = true;
            drawThread.start();
        }
    }

    @Override
    public void run() {
        Log.d(LOGTAG, "Draw thread started");
        long frameStartTime;
        long frameTime;

        /*
         * In order to work reliably on the Nexus 7, we place a ~500ms delay at the start of the
         * drawing thread (AOSP - Issue 58385)
         */
        if (android.os.Build.BRAND.equalsIgnoreCase("google")
                && android.os.Build.MANUFACTURER.equalsIgnoreCase("asus")
                && android.os.Build.MODEL.equalsIgnoreCase("Nexus 7")) {
            Log.w(LOGTAG, "Sleep 500ms (Device: Asus Nexus 7)");
            try {
                Thread.sleep(500);
            } catch (InterruptedException ignored) {}
        }

        while (drawingActive) {
            if (holder == null) {
                return;
            }
            frameStartTime = System.nanoTime();
            Canvas canvas = holder.lockCanvas();
            if (canvas != null) {
                try {
                    synchronized (holder) {
                        tick();
                        render(canvas);
                    }
                } finally {
                    holder.unlockCanvasAndPost(canvas);
                }
            }

            // calculate the time required to draw the frame in ms
            frameTime = (System.nanoTime() - frameStartTime) / 1000000;

            if (frameTime < MAX_FRAME_TIME) {
                try {
                    Thread.sleep(MAX_FRAME_TIME - frameTime);
                } catch (InterruptedException e) {
                    // ignore
                }
            }
        }
        Log.d(LOGTAG, "Draw thread finished");
    }
}

That is the basic part. Now you have the ability to draw onto the screen.

Now, let's start by adding two integers:

public final int x = 100; // the player's horizontal position stays fixed; only y changes below
public int y;
public int velY;

For this next part, you are going to need an image. It should be about 100x100, but it can be bigger or smaller. For learning, a Rect can also be used (but that requires changing the code a little bit further down).

Now, we declare a Bitmap:

private Bitmap PLAYER_BMP = BitmapFactory.decodeResource(getResources(), R.drawable.my_player_drawable);

In render, we need to draw this bitmap:

...
c.drawBitmap(PLAYER_BMP, x, y, null);
...

BEFORE LAUNCHING there are still some things to be done.

We need a boolean first:

boolean up = false;

In onTouchEvent, we add:

if (event.getAction() == MotionEvent.ACTION_DOWN) {
    up = true;
} else if (event.getAction() == MotionEvent.ACTION_UP) {
    up = false;
}

And in tick we need this to move the player:

if (up) {
    velY -= 1;
} else {
    velY += 1;
}
if (velY > 14) velY = 14;
if (velY < -14) velY = -14;
y += velY * 2;

And now we need this in init:

WindowManager wm = (WindowManager) getContext().getSystemService(Context.WINDOW_SERVICE);
Display display = wm.getDefaultDisplay();
Point size = new Point();
display.getSize(size);
WIDTH = size.x;
HEIGHT = size.y;
y = HEIGHT / 2 - PLAYER_BMP.getHeight();

And we need these two variables:

public static int WIDTH, HEIGHT;

At this point, the game is runnable, meaning you can launch it and test it. Now you should have a player image or rect going up and down the screen.

The player can be created as a custom class if needed. Then all the player-related things can be moved into that class, and an instance of that class used to move, render and do other logic.

Now, as you probably saw while testing, it flies off the screen. So we need to limit it.

First, we need to declare the Rect:

private Rect screen;

In init, after initializing width and height, we create a new Rect that is the screen.
screen = new Rect(0, 0, WIDTH, HEIGHT);

Now we need another Rect in the form of a method:

private Rect getPlayerBound() {
    return new Rect(x, y, x + PLAYER_BMP.getWidth(), y + PLAYER_BMP.getHeight());
}

and in tick:

if (!Rect.intersects(screen, getPlayerBound())) {
    gameOver = true;
}

The implementation of gameOver can also be used to show the start of a game.

Other aspects of a game worth noting:

Saving (currently missing in this documentation)

Chapter 209: Android programming with Kotlin

Using Kotlin with Android Studio is an easy task, as Kotlin is developed by JetBrains. It is the same company that stands behind IntelliJ IDEA - the base IDE for Android Studio. That is why there are almost no compatibility problems.

Section 209.1: Installing the Kotlin plugin

First, you'll need to install the Kotlin plugin.

For Windows: Navigate to File > Settings > Plugins > Install JetBrains plugin
For Mac: Navigate to Android Studio > Preferences > Plugins > Install JetBrains plugin

And then search for and install Kotlin. You'll need to restart the IDE after this completes.

Section 209.2: Configuring an existing Gradle project with Kotlin

You can create a new project in Android Studio and then add Kotlin support to it, or modify your existing project. To do it, you have to:

1. Add the dependency to the root Gradle file - you have to add the dependency for the kotlin-android plugin to the root build.gradle file:

buildscript {
    repositories {
        jcenter()
    }
    dependencies {
        classpath 'com.android.tools.build:gradle:2.3.1'
        classpath 'org.jetbrains.kotlin:kotlin-gradle-plugin:1.1.2'
    }
}

allprojects {
    repositories {
        jcenter()
    }
}

task clean(type: Delete) {
    delete rootProject.buildDir
}

2. Apply the Kotlin Android plugin - simply add apply plugin: 'kotlin-android' to the module build.gradle file.

3. Add the dependency on the Kotlin stdlib - add the dependency 'org.jetbrains.kotlin:kotlin-stdlib:1.1.2' to the dependencies section of the module build.gradle file.

For a new project, the build.gradle file could look like this:

apply plugin: 'com.android.application'
apply plugin: 'kotlin-android'

android {
    compileSdkVersion 25
    buildToolsVersion "25.0.2"

    defaultConfig {
        applicationId "org.example.example"
        minSdkVersion 16
        targetSdkVersion 25
        versionCode 1
        versionName "1.0"
        testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner"
    }
    buildTypes {
        release {
            minifyEnabled false
            proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
        }
    }
}

dependencies {
    compile 'org.jetbrains.kotlin:kotlin-stdlib:1.1.1'
    compile 'com.android.support.constraint:constraint-layout:1.0.2'
    compile 'com.android.support:appcompat-v7:25.3.1'
    androidTestCompile('com.android.support.test.espresso:espresso-core:2.2.2', {
        exclude group: 'com.android.support', module: 'support-annotations'
    })
    testCompile 'junit:junit:4.12'
}

Section 209.3: Creating a new Kotlin Activity

1. Click File > New > Kotlin Activity.
2. Choose a type of the Activity.
3. Select a name and other parameters for the Activity.
4. Finish.

The final class could look like this:

import android.support.v7.app.AppCompatActivity
import android.os.Bundle

class MainActivity : AppCompatActivity() {

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)
    }
}
Section 209.4: Converting existing Java code to Kotlin

The Kotlin plugin for Android Studio supports converting existing Java files to Kotlin files. Choose a Java file and invoke the action "Convert Java File to Kotlin File".

Section 209.5: Starting a new Activity

fun startNewActivity() {
    val intent: Intent = Intent(context, Activity::class.java)
    startActivity(intent)
}

You can add extras to the intent just like in Java:

fun startNewActivityWithIntents() {
    val intent: Intent = Intent(context, Activity::class.java)
    intent.putExtra(KEY_NAME, KEY_VALUE)
    startActivity(intent)
}

Chapter 210: Android-x86 in VirtualBox

The idea of this section is to cover how to install and use VirtualBox with Android-x86 for debugging purposes. This is a difficult task because there are differences between versions. For the moment I'm going to cover 6.0, which is the one that I had to work with, and then we'll have to find similarities.

It doesn't cover VirtualBox or Linux in detail, but it shows the commands I've used to make it work.

Section 210.1: Virtual hard drive setup for SDCARD support

With the virtual hard drive just created, boot the virtual machine with the Android-x86 image in the optical drive. Once you boot, you can see the GRUB menu of the Live CD.

Choose the Debug Mode option; you should then see the shell prompt. This is a busybox shell. You can get more shells by switching between the virtual consoles with Alt-F1/F2/F3.

Create two partitions with fdisk (some other versions would use cfdisk). Format them to ext3. Then reboot:

# fdisk /dev/sda

Then type:
"n" (new partition)
"p" (primary partition)
"1" (1st partition)
"1" (first cylinder)
"261" (choose a cylinder; we'll leave 50% of the disk for a 2nd partition)
"n" (new partition)
"p" (primary partition)
"2" (2nd partition)
"262" (262nd cylinder)
"522" (choose the last cylinder)
"w" (write the partition)

#mdev -s
#mke2fs -j -L DATA /dev/sda1
#mke2fs -j -L SDCARD /dev/sda2
#reboot -f

When you restart the virtual machine and the GRUB menu appears, you will be able to edit the kernel boot line, so you can add the DATA=sda1 SDCARD=sda2 options to point to the sdcard and data partitions.

Section 210.2: Installation in partition

With the virtual hard drive just created, boot the virtual machine with the Android-x86 image as the optical drive. In the boot options of the Live CD choose "Installation - Install Android to hard disk".

Choose the sda1 partition, install Android, and install GRUB.

Reboot the virtual machine, but make sure that the image is not in the optical drive, so it can restart from the virtual hard drive.

In the GRUB menu we need to edit the kernel line of the "Android-x86 6.0-r3" option, so press e. Then we substitute "quiet" with "vga=ask" and add the option "SDCARD=sda2".

In my case, the kernel line looks like this after being modified:

kernel /android-6.0-r3/kernel vga=ask root=ram0 SRC=/android-6/android-6.0-r3 SDCARD=sda2

Press b to boot; you'll then be able to choose the screen size by pressing ENTER (the vga=ask option).

Once the installation wizard has started, choose the language. I could choose English (United States) and Spanish (United States), and I had trouble choosing any other.
Section 210.3: Virtual Machine setup

These are my VirtualBox settings:

OS Type: Linux 2.6 (I've used the 64-bit version because my computer can support it)
Virtual hard drive size: 4 GB
RAM memory: 2048 MB
Video memory: 8 MB
Sound device: Sound Blaster 16
Network device: PCnet-Fast III, attached to NAT. You can also use a bridged adapter, but then you need a DHCP server in your environment.

The image used with this configuration was android-x86_64-6.0-r3.iso (64-bit), downloaded from http://www.android-x86.org/download. I suppose that it also works with the 32-bit version.

Chapter 211: Leakcanary

LeakCanary is an Android and Java library used to detect memory leaks in an application.

Section 211.1: Implementing a Leak Canary in an Android Application

In your build.gradle you need to add the below dependencies:

debugCompile 'com.squareup.leakcanary:leakcanary-android:1.5.1'
releaseCompile 'com.squareup.leakcanary:leakcanary-android-no-op:1.5.1'
testCompile 'com.squareup.leakcanary:leakcanary-android-no-op:1.5.1'

In your Application class you need to add the below code inside your onCreate():

LeakCanary.install(this);

That's all you need to do for LeakCanary; it will automatically show notifications when there is a leak in your build.
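Put together, a minimal Application class might look like the sketch below (the isInAnalyzerProcess() guard is part of the setup recommended for the 1.5.x releases, so that the dedicated heap-analysis process skips your app initialization; remember to register the class via android:name in the manifest):

public class MyApplication extends Application {
    @Override
    public void onCreate() {
        super.onCreate();
        if (LeakCanary.isInAnalyzerProcess(this)) {
            // This process is dedicated to LeakCanary for heap analysis - do nothing else here.
            return;
        }
        LeakCanary.install(this);
    }
}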
Chapter 212: Okio

Section 212.1: Download / Implement

Download the latest JAR or grab via Maven:

<dependency>
  <groupId>com.squareup.okio</groupId>
  <artifactId>okio</artifactId>
  <version>1.12.0</version>
</dependency>

or Gradle:

compile 'com.squareup.okio:okio:1.12.0'

Section 212.2: PNG decoder

Decoding the chunks of a PNG file demonstrates Okio in practice:

private static final ByteString PNG_HEADER = ByteString.decodeHex("89504e470d0a1a0a");

public void decodePng(InputStream in) throws IOException {
    try (BufferedSource pngSource = Okio.buffer(Okio.source(in))) {
        ByteString header = pngSource.readByteString(PNG_HEADER.size());
        if (!header.equals(PNG_HEADER)) {
            throw new IOException("Not a PNG.");
        }

        while (true) {
            Buffer chunk = new Buffer();

            // Each chunk is a length, type, data, and CRC.
            int length = pngSource.readInt();
            String type = pngSource.readUtf8(4);
            pngSource.readFully(chunk, length);
            int crc = pngSource.readInt();

            decodeChunk(type, chunk);
            if (type.equals("IEND")) break;
        }
    }
}

private void decodeChunk(String type, Buffer chunk) {
    if (type.equals("IHDR")) {
        int width = chunk.readInt();
        int height = chunk.readInt();
        System.out.printf("%08x: %s %d x %d%n", chunk.size(), type, width, height);
    } else {
        System.out.printf("%08x: %s%n", chunk.size(), type);
    }
}

Section 212.3: ByteStrings and Buffers

Okio is built around two types that pack a lot of capability into a straightforward API:

ByteString is an immutable sequence of bytes. For character data, String is fundamental. ByteString is String's long-lost brother, making it easy to treat binary data as a value. This class is ergonomic: it knows how to encode and decode itself as hex, base64, and UTF-8.

Buffer is a mutable sequence of bytes. Like ArrayList, you don't need to size your buffer in advance. You read and write buffers as a queue: write data to the end and read it from the front. There's no obligation to manage positions, limits, or capacities.

Internally, ByteString and Buffer do some clever things to save CPU and memory. If you encode a UTF-8 string as a ByteString, it caches a reference to that string, so that if you decode it later, there's no work to do.

Buffer is implemented as a linked list of segments. When you move data from one buffer to another, it reassigns ownership of the segments rather than copying the data across. This approach is particularly helpful for multithreaded programs: a thread that talks to the network can exchange data with a worker thread without any copying or ceremony.
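To make the two types concrete, here is a small usage sketch (added for illustration; the strings are arbitrary):

// ByteString: immutable, knows how to encode/decode itself
ByteString bytes = ByteString.encodeUtf8("Hello, Okio!");
System.out.println(bytes.hex());    // hexadecimal form
System.out.println(bytes.base64()); // base64 form
System.out.println(bytes.utf8());   // back to a String

// Buffer: a mutable queue of bytes - write to the end, read from the front
Buffer buffer = new Buffer();
buffer.writeUtf8("Hello, ");
buffer.writeUtf8("Buffer!");
System.out.println(buffer.readUtf8()); // prints "Hello, Buffer!"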
Chapter 213: Bluetooth Low Energy

This documentation is meant as an enhancement over the original documentation, and it will focus on the latest Bluetooth LE API introduced in Android 5.0 (API 21). Both Central and Peripheral roles will be covered, as well as how to start scanning and advertising operations.

Section 213.1: Finding BLE Devices

The following permissions are required to use the Bluetooth APIs:

android.permission.BLUETOOTH
android.permission.BLUETOOTH_ADMIN

If you're targeting devices with Android 6.0 (API level 23) or higher and want to perform scanning/advertising operations, you will require a location permission:

android.permission.ACCESS_FINE_LOCATION
or
android.permission.ACCESS_COARSE_LOCATION

Note: Devices with Android 6.0 (API level 23) or higher also need to have Location Services enabled.

A BluetoothAdapter object is required to start scanning/advertising operations:

BluetoothManager bluetoothManager = (BluetoothManager) context.getSystemService(Context.BLUETOOTH_SERVICE);
bluetoothAdapter = bluetoothManager.getAdapter();

The startScan(ScanCallback callback) method of the BluetoothLeScanner class is the most basic way to start a scanning operation. A ScanCallback object is required to receive results:

bluetoothAdapter.getBluetoothLeScanner().startScan(new ScanCallback() {
    @Override
    public void onScanResult(int callbackType, ScanResult result) {
        super.onScanResult(callbackType, result);
        Log.i(TAG, "Remote device name: " + result.getDevice().getName());
    }
});

Section 213.2: Connecting to a GATT Server

Once you have discovered a desired BluetoothDevice object, you can connect to it by using its connectGatt() method, which takes as parameters a Context object, a boolean indicating whether to automatically connect to the BLE device, and a BluetoothGattCallback reference where connection events and client operation results will be delivered:

if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) {
    device.connectGatt(context, false, bluetoothGattCallback, BluetoothDevice.TRANSPORT_AUTO);
} else {
    device.connectGatt(context, false, bluetoothGattCallback);
}

Override onConnectionStateChange in BluetoothGattCallback to receive connection and disconnection events:

BluetoothGattCallback bluetoothGattCallback = new BluetoothGattCallback() {
    @Override
    public void onConnectionStateChange(BluetoothGatt gatt, int status, int newState) {
        if (newState == BluetoothProfile.STATE_CONNECTED) {
            Log.i(TAG, "Connected to GATT server.");
        } else if (newState == BluetoothProfile.STATE_DISCONNECTED) {
            Log.i(TAG, "Disconnected from GATT server.");
        }
    }
};

Section 213.3: Writing and Reading from Characteristics

Once you are connected to a GATT server, you're going to be interacting with it by writing and reading from the server's characteristics. To do this, first you have to discover what services are available on this server, and which characteristics are available in each service:

@Override
public void onConnectionStateChange(BluetoothGatt gatt, int status, int newState) {
    if (newState == BluetoothProfile.STATE_CONNECTED) {
        Log.i(TAG, "Connected to GATT server.");
        gatt.discoverServices();
    }
    ...
}

@Override
public void onServicesDiscovered(BluetoothGatt gatt, int status) {
    if (status == BluetoothGatt.GATT_SUCCESS) {
        List<BluetoothGattService> services = gatt.getServices();
        for (BluetoothGattService service : services) {
            List<BluetoothGattCharacteristic> characteristics = service.getCharacteristics();
            for (BluetoothGattCharacteristic characteristic : characteristics) {
                // Once you have a characteristic object, you can perform read/write
                // operations with it
            }
        }
    }
}

A basic write operation goes like this:

characteristic.setValue(newValue);
characteristic.setWriteType(BluetoothGattCharacteristic.WRITE_TYPE_DEFAULT);
gatt.writeCharacteristic(characteristic);

When the write process has finished, the onCharacteristicWrite method of your BluetoothGattCallback will be called:

@Override
public void onCharacteristicWrite(BluetoothGatt gatt, BluetoothGattCharacteristic characteristic, int status) {
    super.onCharacteristicWrite(gatt, characteristic, status);
    Log.d(TAG, "Characteristic " + characteristic.getUuid() + " written");
}

A basic read operation goes like this:

gatt.readCharacteristic(characteristic);

When the read process has finished, the onCharacteristicRead method of your BluetoothGattCallback will be called:

@Override
public void onCharacteristicRead(BluetoothGatt gatt, BluetoothGattCharacteristic characteristic, int status) {
    super.onCharacteristicRead(gatt, characteristic, status);
    byte[] value = characteristic.getValue();
}

Section 213.4: Subscribing to Notifications from the GATT Server

You can request to be notified by the GATT server when the value of a characteristic has been changed:

gatt.setCharacteristicNotification(characteristic, true);
BluetoothGattDescriptor descriptor = characteristic.getDescriptor(
        UUID.fromString("00002902-0000-1000-8000-00805f9b34fb"));
descriptor.setValue(BluetoothGattDescriptor.ENABLE_NOTIFICATION_VALUE);
gatt.writeDescriptor(descriptor);

All notifications from the server will be received in the onCharacteristicChanged method of your BluetoothGattCallback:

@Override
public void onCharacteristicChanged(BluetoothGatt gatt, BluetoothGattCharacteristic characteristic) {
    super.onCharacteristicChanged(gatt, characteristic);
    byte[] newValue = characteristic.getValue();
}

Section 213.5: Advertising a BLE Device

You can use Bluetooth LE advertising to broadcast data packages to all nearby devices without having to establish a connection first. Bear in mind that there's a strict limit of 31 bytes of advertisement data. Advertising your device is also the first step towards letting other users connect to you.

Since not all devices support Bluetooth LE advertising, the first step is to check that your device has all the necessary requirements to support it.
Afterwards, you can initialize a BluetoothLeAdvertiser object and, with it, start advertising operations:

if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.LOLLIPOP && bluetoothAdapter.isMultipleAdvertisementSupported()) {

    BluetoothLeAdvertiser advertiser = bluetoothAdapter.getBluetoothLeAdvertiser();

    AdvertiseData.Builder dataBuilder = new AdvertiseData.Builder();
    // Define a service UUID according to your needs
    dataBuilder.addServiceUuid(SERVICE_UUID);
    dataBuilder.setIncludeDeviceName(true);

    AdvertiseSettings.Builder settingsBuilder = new AdvertiseSettings.Builder();
    settingsBuilder.setAdvertiseMode(AdvertiseSettings.ADVERTISE_MODE_LOW_POWER);
    settingsBuilder.setTimeout(0);

    // Use the connectable flag if you intend on opening a GATT server
    // to allow remote connections to your device.
    settingsBuilder.setConnectable(true);

    AdvertiseCallback advertiseCallback = new AdvertiseCallback() {
        @Override
        public void onStartSuccess(AdvertiseSettings settingsInEffect) {
            super.onStartSuccess(settingsInEffect);
            Log.i(TAG, "onStartSuccess: ");
        }

        @Override
        public void onStartFailure(int errorCode) {
            super.onStartFailure(errorCode);
            Log.e(TAG, "onStartFailure: " + errorCode);
        }
    };
    advertiser.startAdvertising(settingsBuilder.build(), dataBuilder.build(), advertiseCallback);
}

Section 213.6: Using a GATT Server

In order for your device to act as a peripheral, first you need to open a BluetoothGattServer and populate it with at least one BluetoothGattService and one BluetoothGattCharacteristic:

BluetoothGattServer server = bluetoothManager.openGattServer(context, bluetoothGattServerCallback);

BluetoothGattService service = new BluetoothGattService(SERVICE_UUID, BluetoothGattService.SERVICE_TYPE_PRIMARY);

This is an example of a BluetoothGattCharacteristic with full write, read and notify permissions.
According to your needs, you might want to fine-tune the permissions that you grant this characteristic:

BluetoothGattCharacteristic characteristic = new BluetoothGattCharacteristic(CHARACTERISTIC_UUID,
        BluetoothGattCharacteristic.PROPERTY_READ |
        BluetoothGattCharacteristic.PROPERTY_WRITE |
        BluetoothGattCharacteristic.PROPERTY_NOTIFY,
        BluetoothGattCharacteristic.PERMISSION_READ |
        BluetoothGattCharacteristic.PERMISSION_WRITE);

characteristic.addDescriptor(new BluetoothGattDescriptor(
        UUID.fromString("00002902-0000-1000-8000-00805f9b34fb"),
        BluetoothGattCharacteristic.PERMISSION_WRITE));

service.addCharacteristic(characteristic);
server.addService(service);

The BluetoothGattServerCallback is responsible for receiving all events related to your BluetoothGattServer:

BluetoothGattServerCallback bluetoothGattServerCallback = new BluetoothGattServerCallback() {
    @Override
    public void onConnectionStateChange(BluetoothDevice device, int status, int newState) {
        super.onConnectionStateChange(device, status, newState);
    }

    @Override
    public void onCharacteristicReadRequest(BluetoothDevice device, int requestId, int offset, BluetoothGattCharacteristic characteristic) {
        super.onCharacteristicReadRequest(device, requestId, offset, characteristic);
    }

    @Override
    public void onCharacteristicWriteRequest(BluetoothDevice device, int requestId, BluetoothGattCharacteristic characteristic, boolean preparedWrite, boolean responseNeeded, int offset, byte[] value) {
        super.onCharacteristicWriteRequest(device, requestId, characteristic, preparedWrite, responseNeeded, offset, value);
    }

    @Override
    public void onDescriptorReadRequest(BluetoothDevice device, int requestId, int offset, BluetoothGattDescriptor descriptor) {
        super.onDescriptorReadRequest(device, requestId, offset, descriptor);
    }

    @Override
    public void onDescriptorWriteRequest(BluetoothDevice device, int requestId, BluetoothGattDescriptor descriptor, boolean preparedWrite, boolean responseNeeded, int offset, byte[] value) {
        super.onDescriptorWriteRequest(device, requestId, descriptor, preparedWrite, responseNeeded, offset, value);
    }
};

Whenever you receive a request for a write/read to a characteristic or descriptor, you must send a response to it in order for the request to be completed successfully:

@Override
public void onCharacteristicReadRequest(BluetoothDevice device, int requestId, int offset, BluetoothGattCharacteristic characteristic) {
    super.onCharacteristicReadRequest(device, requestId, offset, characteristic);
    server.sendResponse(device, requestId, BluetoothGatt.GATT_SUCCESS, offset, YOUR_RESPONSE);
}

Chapter 214: Looper

A Looper is an Android class used to run a message loop for a thread, which usually does not have one associated with it.

The most common Looper in Android is the main looper, also commonly known as the main thread. This instance is unique for an application and can be accessed statically with Looper.getMainLooper(). If a Looper is associated with the current thread, it can be retrieved with Looper.myLooper().

Section 214.1: Create a simple LooperThread

A typical example of the implementation of a Looper thread given by the official documentation uses Looper.prepare() and Looper.loop() and associates a Handler with the loop between these calls:
class LooperThread extends Thread {
    public Handler mHandler;

    public void run() {
        Looper.prepare();

        mHandler = new Handler() {
            public void handleMessage(Message msg) {
                // process incoming messages here
            }
        };

        Looper.loop();
    }
}

Section 214.2: Run a loop with a HandlerThread

A HandlerThread can be used to start a thread with a Looper. This looper can then be used to create a Handler for communications with it:

HandlerThread thread = new HandlerThread("thread-name");
thread.start();
Handler handler = new Handler(thread.getLooper());
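Continuing the snippet above, work can then be posted to the background looper through the handler, and the thread shut down when it is no longer needed (a short sketch added for illustration):

// Executed on the "thread-name" thread, not on the main thread
handler.post(new Runnable() {
    @Override
    public void run() {
        // do background work here
    }
});

// When done with the thread:
thread.quitSafely(); // API 18+; use thread.quit() on older versions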
Chapter 215: Annotation Processor

An annotation processor is a tool built into javac for scanning and processing annotations at compile time. Annotations are a class of metadata that can be associated with classes, methods, fields, and even other annotations. There are two ways to access these annotations: at runtime via reflection, and at compile time via annotation processors.

Section 215.1: @NonNull Annotation

public class Foo {
    private String name;
    public Foo(@NonNull String name) {...}
    ...
}

Here, @NonNull is an annotation which is processed at compile time by Android Studio to warn you that the particular function needs a non-null parameter.

Section 215.2: Types of Annotations

There are three types of annotations.

1. Marker Annotation - an annotation that has no methods:

@interface CustomAnnotation {}

2. Single-Value Annotation - an annotation that has one method:

@interface CustomAnnotation {
    int value();
}

3. Multi-Value Annotation - an annotation that has more than one method:

@interface CustomAnnotation {
    int value1();
    String value2();
    String value3();
}

Section 215.3: Creating and Using Custom Annotations

For creating custom annotations, we need to decide:

Target - on which elements these annotations will work, like field level, method level, type level, etc.
Retention - to what level the annotation will be available.

For this, we have built-in meta-annotations. The most commonly used ones are @Target and @Retention.

Creating a Custom Annotation

@Retention(RetentionPolicy.SOURCE) // will not be available in the compiled class
@Target(ElementType.METHOD) // can be applied to methods only
@interface CustomAnnotation {
    int value();
}

Using the Custom Annotation

class Foo {
    @CustomAnnotation(value = 1) // will be used by an annotation processor
    public void foo() {..}
}

The value provided inside @CustomAnnotation will be consumed by an annotation processor, e.g. to generate code at compile time.
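For completeness, a minimal processor that reacts to the annotation above could look like the following sketch (using the javax.annotation.processing API; the fully qualified annotation name is an assumption, and the processor would additionally have to be registered, e.g. via a META-INF/services entry, which is omitted here):

import java.util.Set;
import javax.annotation.processing.AbstractProcessor;
import javax.annotation.processing.RoundEnvironment;
import javax.annotation.processing.SupportedAnnotationTypes;
import javax.lang.model.element.Element;
import javax.lang.model.element.TypeElement;
import javax.tools.Diagnostic;

@SupportedAnnotationTypes("com.example.CustomAnnotation") // assumed package
public class CustomAnnotationProcessor extends AbstractProcessor {

    @Override
    public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment roundEnv) {
        for (TypeElement annotation : annotations) {
            for (Element element : roundEnv.getElementsAnnotatedWith(annotation)) {
                // Inspect the annotated element; a real processor might generate
                // code here using processingEnv.getFiler()
                processingEnv.getMessager().printMessage(Diagnostic.Kind.NOTE,
                        "Found @CustomAnnotation on " + element.getSimpleName());
            }
        }
        return true; // the annotation is claimed by this processor
    }
}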
Chapter 216: SyncAdapter with periodic sync of data

The sync adapter component in your app encapsulates the code for the tasks that transfer data between the device and a server. Based on the scheduling and triggers you provide in your app, the sync adapter framework runs the code in the sync adapter component. I recently worked with SyncAdapter and want to share my knowledge here; it may help others.

Section 216.1: Sync adapter that requests a value from the server every minute

<provider
    android:name=".DummyContentProvider"
    android:authorities="sample.map.com.ipsyncadapter"
    android:exported="false" />

<!-- This service implements our SyncAdapter. It needs to be exported, so that the system
     sync framework can access it. -->
<service android:name=".SyncService"
    android:exported="true">
    <!-- This intent filter is required. It allows the system to launch our sync service
         as needed. -->
    <intent-filter>
        <action android:name="android.content.SyncAdapter" />
    </intent-filter>
    <!-- This points to a required XML file which describes our SyncAdapter. -->
    <meta-data android:name="android.content.SyncAdapter"
        android:resource="@xml/syncadapter" />
</service>

<!-- This implements the account we'll use as an attachment point for our SyncAdapter. Since
     our SyncAdapter doesn't need to authenticate the current user (it just fetches a public
     RSS feed), this account's implementation is largely empty. It's also possible to attach a
     SyncAdapter to an existing account provided by another package. In that case, this element
     could be omitted here. -->
<service android:name=".AuthenticatorService">
    <!-- Required filter used by the system to launch our account service. -->
    <intent-filter>
        <action android:name="android.accounts.AccountAuthenticator" />
    </intent-filter>
    <!-- This points to an XML file which describes our account service. -->
    <meta-data android:name="android.accounts.AccountAuthenticator"
        android:resource="@xml/authenticator" />
</service>

This code needs to be added to the manifest file. It declares the sync service, the content provider, and the authenticator service. In the app we also need to create an xml resource folder holding the syncadapter and authenticator XML files.

authenticator.xml

<account-authenticator xmlns:android="http://schemas.android.com/apk/res/android"
    android:accountType="@string/R.String.accountType"
    android:icon="@mipmap/ic_launcher"
    android:smallIcon="@mipmap/ic_launcher"
    android:label="@string/app_name" />

syncadapter.xml

<sync-adapter xmlns:android="http://schemas.android.com/apk/res/android"
    android:contentAuthority="@string/R.String.contentAuthority"
    android:accountType="@string/R.String.accountType"
    android:userVisible="true"
    android:allowParallelSyncs="true"
    android:isAlwaysSyncable="true"
    android:supportsUploading="false"/>

Authenticator

import android.accounts.AbstractAccountAuthenticator;
import android.accounts.Account;
import android.accounts.AccountAuthenticatorResponse;
import android.accounts.NetworkErrorException;
import android.content.Context;
import android.os.Bundle;

public class Authenticator extends AbstractAccountAuthenticator {

    private Context mContext;

    public Authenticator(Context context) {
        super(context);
        this.mContext = context;
    }

    @Override
    public Bundle editProperties(AccountAuthenticatorResponse accountAuthenticatorResponse, String s) {
        return null;
    }

    @Override
    public Bundle addAccount(AccountAuthenticatorResponse accountAuthenticatorResponse, String s,
                             String s1, String[] strings, Bundle bundle) throws NetworkErrorException {
        return null;
    }

    @Override
    public Bundle confirmCredentials(AccountAuthenticatorResponse accountAuthenticatorResponse,
                                     Account account, Bundle bundle) throws NetworkErrorException {
        return null;
    }

    @Override
    public Bundle getAuthToken(AccountAuthenticatorResponse accountAuthenticatorResponse,
                               Account account, String s, Bundle bundle) throws NetworkErrorException {
        return null;
    }

    @Override
    public String getAuthTokenLabel(String s) {
        return null;
    }

    @Override
    public Bundle updateCredentials(AccountAuthenticatorResponse accountAuthenticatorResponse,
                                    Account account, String s, Bundle bundle) throws NetworkErrorException {
        return null;
    }

    @Override
    public Bundle hasFeatures(AccountAuthenticatorResponse accountAuthenticatorResponse,
                              Account account, String[] strings) throws NetworkErrorException {
        return null;
    }
}
AuthenticatorService

public class AuthenticatorService extends Service {

    private Authenticator authenticator;

    public AuthenticatorService() {
        super();
    }

    @Nullable
    @Override
    public IBinder onBind(Intent intent) {
        IBinder ret = null;
        if (intent.getAction().equals(AccountManager.ACTION_AUTHENTICATOR_INTENT)) {
            ret = getAuthenticator().getIBinder();
        }
        return ret;
    }

    public Authenticator getAuthenticator() {
        if (authenticator == null) {
            authenticator = new Authenticator(this);
        }
        return authenticator;
    }
}

IpDataDBHelper

public class IpDataDBHelper extends SQLiteOpenHelper {
    private static final int DATABASE_VERSION = 1;
    private static final String DATABASE_NAME = "ip.db";
    public static final String TABLE_IP_DATA = "ip";
    public static final String COLUMN_ID = "_id";
    public static final String COLUMN_IP = "ip";
    public static final String COLUMN_COUNTRY_CODE = "country_code";
    public static final String COLUMN_COUNTRY_NAME = "country_name";
    public static final String COLUMN_CITY = "city";
    public static final String COLUMN_LATITUDE = "latitude";
    public static final String COLUMN_LONGITUDE = "longitude";

    public IpDataDBHelper(Context context, String name, SQLiteDatabase.CursorFactory factory, int version) {
        super(context, DATABASE_NAME, factory, DATABASE_VERSION);
    }

    @Override
    public void onCreate(SQLiteDatabase sqLiteDatabase) {
        String CREATE_TABLE = "CREATE TABLE " + TABLE_IP_DATA + "( " + COLUMN_ID + " INTEGER PRIMARY KEY ,"
                + COLUMN_IP + " INTEGER ,"
                + COLUMN_COUNTRY_CODE + " INTEGER ,"
                + COLUMN_COUNTRY_NAME + " TEXT ,"
                + COLUMN_CITY + " TEXT ,"
                + COLUMN_LATITUDE + " INTEGER ,"
                + COLUMN_LONGITUDE + " INTEGER)";
        sqLiteDatabase.execSQL(CREATE_TABLE);
        Log.d("SQL", CREATE_TABLE);
    }

    @Override
    public void onUpgrade(SQLiteDatabase sqLiteDatabase, int i, int i1) {
        sqLiteDatabase.execSQL("DROP TABLE IF EXISTS " + TABLE_IP_DATA);
        onCreate(sqLiteDatabase);
    }

    public long AddIPData(ContentValues values) {
        SQLiteDatabase sqLiteDatabase = getWritableDatabase();
        long insertedRow = sqLiteDatabase.insert(TABLE_IP_DATA, null, values);
        return insertedRow;
    }

    public Cursor getAllIpData() {
        String[] projection = {COLUMN_ID, COLUMN_IP, COLUMN_COUNTRY_CODE, COLUMN_COUNTRY_NAME,
                COLUMN_CITY, COLUMN_LATITUDE, COLUMN_LONGITUDE};
        SQLiteDatabase sqLiteDatabase = getReadableDatabase();
        Cursor cursor = sqLiteDatabase.query(TABLE_IP_DATA, projection, null, null, null, null, null);
        return cursor;
    }

    public int deleteAllIpData() {
        SQLiteDatabase sqLiteDatabase = getWritableDatabase();
        int rowDeleted = sqLiteDatabase.delete(TABLE_IP_DATA, null, null);
        return rowDeleted;
    }
}
MainActivity

public class MainActivity extends AppCompatActivity {

    private static final String ACCOUNT_TYPE = "sample.map.com.ipsyncadapter";
    private static final String AUTHORITY = "sample.map.com.ipsyncadapter";
    private static final String ACCOUNT_NAME = "Sync";

    public TextView mIp, mCountryCod, mCountryName, mCity, mLatitude, mLongitude;
    CursorAdapter cursorAdapter;
    Account mAccount;
    private String TAG = this.getClass().getCanonicalName();
    ListView mListView;
    public SharedPreferences mSharedPreferences;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        mListView = (ListView) findViewById(R.id.list);
        mIp = (TextView) findViewById(R.id.txt_ip);
        mCountryCod = (TextView) findViewById(R.id.txt_country_code);
        mCountryName = (TextView) findViewById(R.id.txt_country_name);
        mCity = (TextView) findViewById(R.id.txt_city);
        mLatitude = (TextView) findViewById(R.id.txt_latitude);
        mLongitude = (TextView) findViewById(R.id.txt_longitude);
        mSharedPreferences = getSharedPreferences("MyIp", 0);

        // Using shared preferences, the stored values are displayed in the text views.
        String txtIp = mSharedPreferences.getString("ipAdr", "");
        String txtCC = mSharedPreferences.getString("CCode", "");
        String txtCN = mSharedPreferences.getString("CName", "");
        String txtC = mSharedPreferences.getString("City", "");
        String txtLP = mSharedPreferences.getString("Latitude", "");
        String txtLN = mSharedPreferences.getString("Longitude", "");

        mIp.setText(txtIp);
        mCountryCod.setText(txtCC);
        mCountryName.setText(txtCN);
        mCity.setText(txtC);
        mLatitude.setText(txtLP);
        mLongitude.setText(txtLN);

        mAccount = createSyncAccount(this);

        // In this code a content provider could be used to save the data instead:
        /*
        Cursor cursor = getContentResolver().query(MyIPContentProvider.CONTENT_URI, null, null, null, null);
        cursorAdapter = new SimpleCursorAdapter(this, R.layout.list_item, cursor,
                new String[]{"ip", "country_code", "country_name", "city", "latitude", "longitude"},
                new int[]{R.id.txt_ip, R.id.txt_country_code, R.id.txt_country_name, R.id.txt_city,
                        R.id.txt_latitude, R.id.txt_longitude}, 0);
        mListView.setAdapter(cursorAdapter);
        getContentResolver().registerContentObserver(MyIPContentProvider.CONTENT_URI, true,
                new StockContentObserver(new Handler()));
        */

        Bundle settingBundle = new Bundle();
        settingBundle.putBoolean(ContentResolver.SYNC_EXTRAS_MANUAL, true);
        settingBundle.putBoolean(ContentResolver.SYNC_EXTRAS_EXPEDITED, true);
        ContentResolver.requestSync(mAccount, AUTHORITY, settingBundle);
        ContentResolver.setSyncAutomatically(mAccount, AUTHORITY, true);
        ContentResolver.addPeriodicSync(mAccount, AUTHORITY, Bundle.EMPTY, 60);
    }

    private Account createSyncAccount(MainActivity mainActivity) {
        Account account = new Account(ACCOUNT_NAME, ACCOUNT_TYPE);
        AccountManager accountManager = (AccountManager) mainActivity.getSystemService(ACCOUNT_SERVICE);
        if (accountManager.addAccountExplicitly(account, null, null)) {
            // account was added
        } else {
            // error, or the account already exists
        }
        return account;
    }

    private class StockContentObserver extends ContentObserver {
        @Override
        public void onChange(boolean selfChange, Uri uri) {
            Log.d(TAG, "CHANGE OBSERVED AT URI: " + uri);
            cursorAdapter.swapCursor(getContentResolver().query(
                    MyIPContentProvider.CONTENT_URI, null, null, null, null));
        }

        public StockContentObserver(Handler handler) {
            super(handler);
        }
    }

    @Override
    protected void onResume() {
        super.onResume();
        registerReceiver(syncStaredReceiver, new IntentFilter(SyncAdapter.SYNC_STARTED));
        registerReceiver(syncFinishedReceiver, new IntentFilter(SyncAdapter.SYNC_FINISHED));
    }

    @Override
    protected void onPause() {
        super.onPause();
        unregisterReceiver(syncStaredReceiver);
        unregisterReceiver(syncFinishedReceiver);
    }

    private BroadcastReceiver syncFinishedReceiver = new BroadcastReceiver() {
        @Override
        public void onReceive(Context context, Intent intent) {
            Log.d(TAG, "Sync finished!");
            Toast.makeText(getApplicationContext(), "Sync Finished", Toast.LENGTH_SHORT).show();
        }
    };

    private BroadcastReceiver syncStaredReceiver = new BroadcastReceiver() {
        @Override
        public void onReceive(Context context, Intent intent) {
            Log.d(TAG, "Sync started!");
            Toast.makeText(getApplicationContext(), "Sync started...", Toast.LENGTH_SHORT).show();
        }
    };
}

MyIPContentProvider

public class MyIPContentProvider extends ContentProvider {

    public static final int IP_DATA = 1;
    private static final String AUTHORITY = "sample.map.com.ipsyncadapter";
    private static final String TABLE_IP_DATA = "ip_data";
    public static final Uri CONTENT_URI = Uri.parse("content://" + AUTHORITY + '/' + TABLE_IP_DATA);
    private static final UriMatcher URI_MATCHER = new UriMatcher(UriMatcher.NO_MATCH);

    static {
        URI_MATCHER.addURI(AUTHORITY, TABLE_IP_DATA, IP_DATA);
    }

    private IpDataDBHelper myDB;

    @Override
    public boolean onCreate() {
        myDB = new IpDataDBHelper(getContext(), null, null, 1);
        return false;
    }

    @Nullable
    @Override
    public Cursor query(Uri uri, String[] strings, String s, String[] strings1, String s1) {
        int uriType = URI_MATCHER.match(uri);
        Cursor cursor = null;
        switch (uriType) {
            case IP_DATA:
                cursor = myDB.getAllIpData();
                break;
            default:
                throw new IllegalArgumentException("UNKNOWN URL");
        }
        cursor.setNotificationUri(getContext().getContentResolver(), uri);
        return cursor;
    }

    @Nullable
    @Override
    public String getType(Uri uri) {
        return null;
    }

    @Nullable
    @Override
    public Uri insert(Uri uri, ContentValues contentValues) {
        int uriType = URI_MATCHER.match(uri);
        long id = 0;
        switch (uriType) {
            case IP_DATA:
                id = myDB.AddIPData(contentValues);
                break;
            default:
                throw new IllegalArgumentException("UNKNOWN URI :" + uri);
        }
        getContext().getContentResolver().notifyChange(uri, null);
        return Uri.parse(contentValues + "/" + id);
    }

    @Override
    public int delete(Uri uri, String s, String[] strings) {
        int uriType = URI_MATCHER.match(uri);
        int rowsDeleted = 0;
        switch (uriType) {
            case IP_DATA:
                rowsDeleted = myDB.deleteAllIpData();
                break;
            default:
                throw new IllegalArgumentException("UNKNOWN URI :" + uri);
        }
        getContext().getContentResolver().notifyChange(uri, null);
        return rowsDeleted;
    }

    @Override
    public int update(Uri uri, ContentValues contentValues, String s, String[] strings) {
        return 0;
    }
}
SyncAdapter

public class SyncAdapter extends AbstractThreadedSyncAdapter {

    ContentResolver mContentResolver;
    Context mContext;
    public static final String SYNC_STARTED = "Sync Started";
    public static final String SYNC_FINISHED = "Sync Finished";
    private static final String TAG = SyncAdapter.class.getCanonicalName();
    public SharedPreferences mSharedPreferences;

    public SyncAdapter(Context context, boolean autoInitialize) {
        super(context, autoInitialize);
        this.mContext = context;
        mContentResolver = context.getContentResolver();
        Log.i("SyncAdapter", "SyncAdapter");
    }

    @Override
    public void onPerformSync(Account account, Bundle bundle, String s,
                              ContentProviderClient contentProviderClient, SyncResult syncResult) {
        Intent intent = new Intent(SYNC_STARTED);
        mContext.sendBroadcast(intent);

        Log.i(TAG, "onPerformSync");

        intent = new Intent(SYNC_FINISHED);
        mContext.sendBroadcast(intent);

        mSharedPreferences = mContext.getSharedPreferences("MyIp", 0);
        SharedPreferences.Editor editor = mSharedPreferences.edit();
        mContentResolver.delete(MyIPContentProvider.CONTENT_URI, null, null);

        String data = "";
        try {
            URL url = new URL("https://freegeoip.net/json/");
            Log.d(TAG, "URL :" + url);
            HttpURLConnection connection = (HttpURLConnection) url.openConnection();
            Log.d(TAG, "Connection :" + connection);
            connection.connect();
            Log.d(TAG, "Connection 1:" + connection);

            InputStream inputStream = connection.getInputStream();
            data = getInputData(inputStream);
            Log.d(TAG, "Data :" + data);

            if (data != null && !data.equals("null")) {
                JSONObject jsonObject = new JSONObject(data);
                String ipa = jsonObject.getString("ip");
                String country_code = jsonObject.getString("country_code");
                String country_name = jsonObject.getString("country_name");
                String region_code = jsonObject.getString("region_code");
                String region_name = jsonObject.getString("region_name");
                String zip_code = jsonObject.getString("zip_code");
                String time_zone = jsonObject.getString("time_zone");
                String metro_code = jsonObject.getString("metro_code");
                String city = jsonObject.getString("city");
                String latitude = jsonObject.getString("latitude");
                String longitude = jsonObject.getString("longitude");

                /*
                ContentValues values = new ContentValues();
                values.put("ip", ipa);
                values.put("country_code", country_code);
                values.put("country_name", country_name);
                values.put("city", city);
                values.put("latitude", latitude);
                values.put("longitude", longitude);
                */

                // Using a cursor adapter for results:
                // mContentResolver.insert(MyIPContentProvider.CONTENT_URI, values);

                // Using shared preferences for results:
                editor.putString("ipAdr", ipa);
                editor.putString("CCode", country_code);
                editor.putString("CName", country_name);
                editor.putString("City", city);
                editor.putString("Latitude", latitude);
                editor.putString("Longitude", longitude);
                editor.commit();
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    private String getInputData(InputStream inputStream) throws IOException {
        BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(inputStream));
        String data = bufferedReader.readLine();
        bufferedReader.close();
        return data;
    }
}

SyncService

public class SyncService extends Service {

    private static SyncAdapter syncAdapter = null;
    private static final Object syncAdapterLock = new Object();

    @Override
    public void onCreate() {
        synchronized (syncAdapterLock) {
            if (syncAdapter == null) {
                syncAdapter = new SyncAdapter(getApplicationContext(), true);
            }
        }
    }

    @Nullable
    @Override
    public IBinder onBind(Intent intent) {
        return syncAdapter.getSyncAdapterBinder();
    }
}
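The MainActivity above registers a 60-second periodic sync in onCreate(). As a hedged counterpart that is not part of the original example, the same ContentResolver API can undo that registration, reusing the mAccount and AUTHORITY fields from the activity:

// Stop the periodic sync and disable automatic syncing for this account.
ContentResolver.removePeriodicSync(mAccount, AUTHORITY, Bundle.EMPTY);
ContentResolver.setSyncAutomatically(mAccount, AUTHORITY, false);
// Cancel any sync currently in progress or pending for this account.
ContentResolver.cancelSync(mAccount, AUTHORITY);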
Chapter 217: Fastjson

Fastjson is a Java library that can be used to convert Java objects into their JSON representation. It can also be used to convert a JSON string to an equivalent Java object.

Fastjson features:

Provides the best performance on the server side and in Android clients
Provides simple toJSONString() and parseObject() methods to convert Java objects to JSON and vice versa
Allows pre-existing unmodifiable objects to be converted to and from JSON
Extensive support of Java generics

Section 217.1: Parsing JSON with Fastjson

You can look at the examples in the Fastjson library.

Encode

import com.alibaba.fastjson.JSON;

Group group = new Group();
group.setId(0L);
group.setName("admin");

User guestUser = new User();
guestUser.setId(2L);
guestUser.setName("guest");

User rootUser = new User();
rootUser.setId(3L);
rootUser.setName("root");

group.addUser(guestUser);
group.addUser(rootUser);

String jsonString = JSON.toJSONString(group);
System.out.println(jsonString);

Output

{"id":0,"name":"admin","users":[{"id":2,"name":"guest"},{"id":3,"name":"root"}]}

Decode

String jsonString = ...;
Group group = JSON.parseObject(jsonString, Group.class);

Group.java

public class Group {

    private Long id;
    private String name;
    private List<User> users = new ArrayList<User>();

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public List<User> getUsers() {
        return users;
    }

    public void setUsers(List<User> users) {
        this.users = users;
    }

    public void addUser(User user) {
        users.add(user);
    }
}

User.java

public class User {

    private Long id;
    private String name;

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}
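Fastjson can also parse a JSON array directly into a typed list with JSON.parseArray. A minimal sketch reusing the User class above; the sample string mirrors the users array from the encode output:

import com.alibaba.fastjson.JSON;
import java.util.List;

String usersJson = "[{\"id\":2,\"name\":\"guest\"},{\"id\":3,\"name\":\"root\"}]";
List<User> users = JSON.parseArray(usersJson, User.class);
System.out.println(users.get(0).getName()); // guest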
Section 217.2: Convert the data of type Map to JSON String

Code

Group group = new Group();
group.setId(1);
group.setName("Ke");

User user1 = new User();
user1.setId(2);
user1.setName("Liu");

User user2 = new User();
user2.setId(3);
user2.setName("Yue");

group.getList().add(user1);
group.getList().add(user2);

Map<Integer, Object> map = new HashMap<Integer, Object>();
map.put(1, "No.1");
map.put(2, "No.2");
map.put(3, group.getList());

String jsonString = JSON.toJSONString(map);
System.out.println(jsonString);

Output

{1:"No.1",2:"No.2",3:[{"id":2,"name":"Liu"},{"id":3,"name":"Yue"}]}
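To go the other way, a string like the output above can be parsed back into a typed map using Fastjson's TypeReference. A minimal sketch, assuming jsonString holds the output shown above:

import com.alibaba.fastjson.JSON;
import com.alibaba.fastjson.TypeReference;
import java.util.Map;

Map<Integer, Object> parsed = JSON.parseObject(jsonString,
        new TypeReference<Map<Integer, Object>>() {});
System.out.println(parsed.get(1)); // No.1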
Chapter 218: JSON in Android with org.json

Section 218.1: Creating a simple JSON object

Create the JSONObject using the empty constructor and add fields using the put() method, which is overloaded so that it can be used with different types:

try {
    // Create a new instance of a JSONObject
    final JSONObject object = new JSONObject();

    // With put you can add a name/value pair to the JSONObject
    object.put("name", "test");
    object.put("content", "Hello World!!!1");
    object.put("year", 2016);
    object.put("value", 3.23);
    object.put("member", true);
    object.put("null_value", JSONObject.NULL);

    // Calling toString() on the JSONObject returns the JSON in string format.
    final String json = object.toString();
} catch (JSONException e) {
    Log.e(TAG, "Failed to create JSONObject", e);
}

The resulting JSON string looks like this:

{
    "name":"test",
    "content":"Hello World!!!1",
    "year":2016,
    "value":3.23,
    "member":true,
    "null_value":null
}

Section 218.2: Create a JSON String with null value

If you need to produce a JSON string with a value of null like this:

{
    "name":null
}

Then you have to use the special constant JSONObject.NULL. Functioning example:

jsonObject.put("name", JSONObject.NULL);

Section 218.3: Add JSONArray to JSONObject

// Create a new instance of a JSONArray
JSONArray array = new JSONArray();

// With put() you can add a value to the array.
array.put("ASDF");
array.put("QWERTY");

// Create a new instance of a JSONObject
JSONObject obj = new JSONObject();

try {
    // Add the JSONArray to the JSONObject
    obj.put("the_array", array);
} catch (JSONException e) {
    e.printStackTrace();
}

String json = obj.toString();

The resulting JSON string looks like this:

{
    "the_array":[
        "ASDF",
        "QWERTY"
    ]
}

Section 218.4: Parse simple JSON object

Consider the following JSON string:

{
    "title": "test",
    "content": "Hello World!!!",
    "year": 2016,
    "names" : [
        "Hannah",
        "David",
        "Steve"
    ]
}

This JSON object can be parsed using the following code:

try {
    // create a new instance from a string
    JSONObject jsonObject = new JSONObject(jsonAsString);
    String title = jsonObject.getString("title");
    String content = jsonObject.getString("content");
    int year = jsonObject.getInt("year");
    JSONArray names = jsonObject.getJSONArray("names"); // for an array of String objects
} catch (JSONException e) {
    Log.w(TAG, "Could not parse JSON. Error: " + e.getMessage());
}

Here is another example with a JSONArray nested inside a JSONObject:

{
    "books":[
        {
            "title":"Android JSON Parsing",
            "times_sold":186
        }
    ]
}

This can be parsed with the following code:

JSONObject root = new JSONObject(booksJson);
JSONArray booksArray = root.getJSONArray("books");
JSONObject firstBook = booksArray.getJSONObject(0);
String title = firstBook.getString("title");
int timesSold = firstBook.getInt("times_sold");

Section 218.5: Check for the existence of fields on JSON

Sometimes it's useful to check whether a field is present or absent in your JSON, to avoid a JSONException in your code. To achieve that, use the JSONObject#has(String) method, as in the following example:

Sample JSON

{
    "name":"James"
}

Java code

String jsonStr = " { \"name\":\"James\" }";
JSONObject json = new JSONObject(jsonStr);
// Check if the field "name" is present
String name, surname;

// This will be true, since the field "name" is present in our JSON.
if (json.has("name")) {
    name = json.getString("name");
} else {
    name = "John";
}

// This will be false, since our JSON doesn't have the field "surname".
if (json.has("surname")) {
    surname = json.getString("surname");
} else {
    surname = "Doe";
}

// Here name == "James" and surname == "Doe".

Section 218.6: Create nested JSON object

To produce a nested JSON object, you need to simply add one JSON object to another:

JSONObject mainObject = new JSONObject();    // Host object
JSONObject requestObject = new JSONObject(); // Included object

try {
    requestObject.put("lastname", lastname);
    requestObject.put("phone", phone);
    requestObject.put("latitude", lat);
    requestObject.put("longitude", lon);
    requestObject.put("theme", theme);
    requestObject.put("text", message);

    mainObject.put("claim", requestObject);
} catch (JSONException e) {
    return "JSON Error";
}

Now mainObject contains a key called claim with the whole requestObject as a value.

Section 218.7: Updating the elements in the JSON

Sample JSON to update:

{
    "student":{"name":"Rahul", "lastname":"sharma"},
    "marks":{"maths":"88"}
}

To update an element's value in the JSON we need to assign the value and update:

try {
    // Create a new instance of a JSONObject
    final JSONObject object = new JSONObject(jsonString);
    JSONObject studentJSON = object.getJSONObject("student");
    studentJSON.put("name", "Kumar");
    object.remove("student");
    object.put("student", studentJSON);
    // Calling toString() on the JSONObject returns the JSON in string format.
    final String json = object.toString();
} catch (JSONException e) {
    Log.e(TAG, "Failed to create JSONObject", e);
}

Updated value:

{
    "student":{"name":"Kumar", "lastname":"sharma"},
    "marks":{"maths":"88"}
}

Section 218.8: Using JsonReader to read JSON from a stream

JsonReader reads a JSON encoded value as a stream of tokens.

public List<Message> readJsonStream(InputStream in) throws IOException {
    JsonReader reader = new JsonReader(new InputStreamReader(in, "UTF-8"));
    try {
        return readMessagesArray(reader);
    } finally {
        reader.close();
    }
}

public List<Message> readMessagesArray(JsonReader reader) throws IOException {
    List<Message> messages = new ArrayList<Message>();

    reader.beginArray();
    while (reader.hasNext()) {
        messages.add(readMessage(reader));
    }
    reader.endArray();
    return messages;
}

public Message readMessage(JsonReader reader) throws IOException {
    long id = -1;
    String text = null;
    User user = null;
    List<Double> geo = null;

    reader.beginObject();
    while (reader.hasNext()) {
        String name = reader.nextName();
        if (name.equals("id")) {
            id = reader.nextLong();
        } else if (name.equals("text")) {
            text = reader.nextString();
        } else if (name.equals("geo") && reader.peek() != JsonToken.NULL) {
            geo = readDoublesArray(reader);
        } else if (name.equals("user")) {
            user = readUser(reader);
        } else {
            reader.skipValue();
        }
    }
    reader.endObject();
    return new Message(id, text, user, geo);
}

public List<Double> readDoublesArray(JsonReader reader) throws IOException {
    List<Double> doubles = new ArrayList<Double>();

    reader.beginArray();
    while (reader.hasNext()) {
        doubles.add(reader.nextDouble());
    }
    reader.endArray();
    return doubles;
}

public User readUser(JsonReader reader) throws IOException {
    String username = null;
    int followersCount = -1;

    reader.beginObject();
    while (reader.hasNext()) {
        String name = reader.nextName();
        if (name.equals("name")) {
            username = reader.nextString();
        } else if (name.equals("followers_count")) {
            followersCount = reader.nextInt();
        } else {
            reader.skipValue();
        }
    }
    reader.endObject();
    return new User(username, followersCount);
}
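The streaming counterpart for writing is android.util.JsonWriter. A minimal sketch, not part of the original example, with field names chosen to mirror the Message example above:

import android.util.JsonWriter;
import java.io.IOException;
import java.io.OutputStream;
import java.io.OutputStreamWriter;

public void writeMessage(OutputStream out) throws IOException {
    JsonWriter writer = new JsonWriter(new OutputStreamWriter(out, "UTF-8"));
    writer.setIndent("  "); // pretty-print; omit for compact output
    writer.beginObject();
    writer.name("id").value(912345678901L);
    writer.name("text").value("How do I write JSON on Android?");
    writer.name("geo").nullValue();
    writer.endObject();
    writer.close();
}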
Section 218.9: Working with null-string when parsing json

{
    "some_string": null,
    "ather_string": "something"
}

If we use this approach:

JSONObject json = new JSONObject(jsonStr);
String someString = json.optString("some_string");

We will have the output:

someString = "null";

So we need to provide this workaround:

/**
 * According to http://stackoverflow.com/questions/18226288/json-jsonobject-optstring-returns-string-null
 * we need to provide a workaround to opt string from json that can be null.
 */
public static String optNullableString(JSONObject jsonObject, String key) {
    return optNullableString(jsonObject, key, "");
}

/**
 * According to http://stackoverflow.com/questions/18226288/json-jsonobject-optstring-returns-string-null
 * we need to provide a workaround to opt string from json that can be null.
 */
public static String optNullableString(JSONObject jsonObject, String key, String fallback) {
    if (jsonObject.isNull(key)) {
        return fallback;
    } else {
        return jsonObject.optString(key, fallback);
    }
}

And then call:

JSONObject json = new JSONObject(jsonStr);
String someString = optNullableString(json, "some_string");
String someString2 = optNullableString(json, "some_string", "");

And we will have the output as we expected:

someString = null; // not "null"
someString2 = "";

Section 218.10: Handling dynamic key for JSON response

This is an example of how to handle a dynamic key in a response. Here A and B are dynamic keys; they can be anything.

Response

{
    "response": [
        {
            "A": [
                { "name": "Tango" },
                { "name": "Ping" }
            ],
            "B": [
                { "name": "Jon" },
                { "name": "Mark" }
            ]
        }
    ]
}

Java code

// ResponseData is the raw string of the response
JSONObject responseDataObj = new JSONObject(responseData);
JSONArray responseArray = responseDataObj.getJSONArray("response");
for (int i = 0; i < responseArray.length(); i++) {
    // Nodes ArrayList<ArrayList<String>> declared globally
    nodes = new ArrayList<ArrayList<String>>();
    JSONObject obj = responseArray.getJSONObject(i);
    Iterator keys = obj.keys();
    while (keys.hasNext()) {
        // Loop to get the dynamic key
        String currentDynamicKey = (String) keys.next();
        // Get the value of the dynamic key
        JSONArray currentDynamicValue = obj.getJSONArray(currentDynamicKey);
        int jsonArraySize = currentDynamicValue.length();
        if (jsonArraySize > 0) {
            for (int ii = 0; ii < jsonArraySize; ii++) {
                // NameList ArrayList<String> declared globally
                nameList = new ArrayList<String>();
                if (ii == 0) {
                    JSONObject nameObj = currentDynamicValue.getJSONObject(ii);
                    String name = nameObj.getString("name");
                    System.out.print("Name = " + name);
                    // Store name in an array list
                    nameList.add(name);
                }
            }
        }
        nodes.add(nameList);
    }
}
Chapter 219: Gson

Gson is a Java library that can be used to convert Java objects into their JSON representation. It can also be used to convert a JSON string to an equivalent Java object. Gson considers both of these as very important design goals.

Gson features:

Provides simple toJson() and fromJson() methods to convert Java objects to JSON and vice versa
Allows pre-existing unmodifiable objects to be converted to and from JSON
Extensive support of Java generics
Supports arbitrarily complex objects (with deep inheritance hierarchies and extensive use of generic types)

Section 219.1: Parsing JSON with Gson

The example shows parsing a JSON object using the Gson library from Google.

Parsing objects:

class Robot {

    // OPTIONAL - this annotation allows the key to be different from the field name,
    // and can be omitted if key and field name are the same. It is also good coding
    // practice, as it decouples your variable names from the server's key names.
    @SerializedName("version")
    private String version;

    @SerializedName("age")
    private int age;

    @SerializedName("robotName")
    private String name;

    // Optional: a constructor lets you set default values and retain them even if a
    // key is missing from the JSON response. Not required for primitive data types.
    public Robot() {
        version = "";
        name = "";
    }
}

Then, where parsing needs to occur, use the following:

String robotJson = "{ \"version\": \"JellyBean\", \"age\": 3, \"robotName\": \"Droid\" }";
Gson gson = new Gson();
Robot robot = gson.fromJson(robotJson, Robot.class);

Parsing a list:

When retrieving a list of JSON objects, often you will want to parse them and convert them into Java objects. The JSON string that we will try to convert is the following:

{
    "owned_dogs": [
        {
            "name": "Ron",
            "age": 12,
            "breed": "terrier"
        },
        {
            "name": "Bob",
            "age": 4,
            "breed": "bulldog"
        },
        {
            "name": "Johny",
            "age": 3,
            "breed": "golden retriever"
        }
    ]
}

This particular JSON array contains three objects. In our Java code we'll want to map these objects to Dog objects. A Dog object would look like this:

private class Dog {
    public String name;
    public int age;

    @SerializedName("breed")
    public String breedName;
}

To convert the JSON array to a Dog[]:

Dog[] arrayOfDogs = gson.fromJson(jsonArrayString, Dog[].class);

Converting a Dog[] to a JSON string:

String jsonArray = gson.toJson(arrayOfDogs, Dog[].class);

To convert the JSON array to an ArrayList<Dog> we can do the following:

Type typeListOfDogs = new TypeToken<List<Dog>>(){}.getType();
List<Dog> listOfDogs = gson.fromJson(jsonArrayString, typeListOfDogs);

The Type object typeListOfDogs defines what a list of Dog objects would look like. Gson can use this type object to map the JSON array to the right values.

Alternatively, converting a List<Dog> to a JSON array can be done in a similar manner:

String jsonArray = gson.toJson(listOfDogs, typeListOfDogs);

Section 219.2: Adding a custom Converter to Gson

Sometimes you need to serialize or deserialize some fields in a desired format. For example, your backend may use the format "YYYY-MM-dd HH:mm" for dates, and you want your POJOs to use the DateTime class in Joda Time. In order to automatically convert these strings into DateTime objects, you can use a custom converter.

/**
 * Gson serialiser/deserialiser for converting Joda {@link DateTime} objects.
 */
public class DateTimeConverter implements JsonSerializer<DateTime>, JsonDeserializer<DateTime> {
    private final DateTimeFormatter dateTimeFormatter;

    @Inject
    public DateTimeConverter() {
        this.dateTimeFormatter = DateTimeFormat.forPattern("YYYY-MM-dd HH:mm");
    }

    @Override
    public JsonElement serialize(DateTime src, Type typeOfSrc, JsonSerializationContext context) {
        return new JsonPrimitive(dateTimeFormatter.print(src));
    }

    @Override
    public DateTime deserialize(JsonElement json, Type typeOfT, JsonDeserializationContext context)
            throws JsonParseException {
        if (json.getAsString() == null || json.getAsString().isEmpty()) {
            return null;
        }
        return dateTimeFormatter.parseDateTime(json.getAsString());
    }
}

To make Gson use the newly created converter you need to assign it when creating the Gson object:

DateTimeConverter dateTimeConverter = new DateTimeConverter();
Gson gson = new GsonBuilder().registerTypeAdapter(DateTime.class, dateTimeConverter)
        .create();

String s = gson.toJson(DateTime.now());
// this will show the date in the desired format

In order to deserialize the date in that format you only have to define a field of the DateTime type:

public class SomePojo {
    private DateTime someDate;
}

When Gson encounters a field of type DateTime, it will call your converter in order to deserialize the field.
Section 219.3: Parsing a List<String> with Gson

Method 1

Gson gson = new Gson();
String json = "[ \"Adam\", \"John\", \"Mary\" ]";

Type type = new TypeToken<List<String>>(){}.getType();
List<String> members = gson.fromJson(json, type);
Log.v("Members", members.toString());

This is useful for most generic container classes, since you can't get the class of a parameterized type (i.e., you can't call List<String>.class).

Method 2

public class StringList extends ArrayList<String> { }

...

List<String> members = gson.fromJson(json, StringList.class);

Alternatively, you can always subclass the type you want and then pass in that class. However, this isn't always best practice, since it will return to you an object of type StringList.

Section 219.4: Adding Gson to your project

dependencies {
    compile 'com.google.code.gson:gson:2.8.1'
}

To use the latest version of Gson

The line below will compile the latest version of the Gson library every time you compile; you do not have to change the version.

Pros: You can use the latest features, speed, and fewer bugs.
Cons: It might break compatibility with your code.

compile 'com.google.code.gson:gson:+'

Section 219.5: Parsing JSON to Generic Class Object with Gson

Suppose we have a JSON string:

["first","second","third"]

We can parse this JSON string into a String array:

Gson gson = new Gson();
String jsonArray = "[\"first\",\"second\",\"third\"]";
String[] strings = gson.fromJson(jsonArray, String[].class);

But if we want to parse it into a List<String> object, we must use TypeToken. Here is the sample:

Gson gson = new Gson();
String jsonArray = "[\"first\",\"second\",\"third\"]";
List<String> stringList = gson.fromJson(jsonArray, new TypeToken<List<String>>() {}.getType());

Suppose we have the two classes below:

public class Outer<T> {
    public int index;
    public T data;
}

public class Person {
    public String firstName;
    public String lastName;
}

and we have a JSON string that should be parsed to an Outer<Person> object. This example shows how to parse this JSON string to the related generic class object:

String json = "......";
Type userType = new TypeToken<Outer<Person>>(){}.getType();
Outer<Person> userResult = gson.fromJson(json, userType);

If the JSON string should be parsed to an Outer<List<Person>> object:

Type userListType = new TypeToken<Outer<List<Person>>>(){}.getType();
Outer<List<Person>> userListResult = gson.fromJson(json, userListType);

Section 219.6: Using Gson with inheritance

Gson does not support inheritance out of the box. Let's say we have the following class hierarchy:

public class BaseClass {
    int a;

    public int getInt() {
        return a;
    }
}

public class DerivedClass1 extends BaseClass {
    int b;

    @Override
    public int getInt() {
        return b;
    }
}

public class DerivedClass2 extends BaseClass {
    int c;

    @Override
    public int getInt() {
        return c;
    }
}

And now we want to serialize an instance of DerivedClass1 to a JSON string:

DerivedClass1 derivedClass1 = new DerivedClass1();
derivedClass1.b = 5;
derivedClass1.a = 10;

Gson gson = new Gson();
String derivedClass1Json = gson.toJson(derivedClass1);

Now, in another place, we receive this JSON string and want to deserialize it, but at compile time we only know it is supposed to be an instance of BaseClass:

BaseClass maybeDerivedClass1 = gson.fromJson(derivedClass1Json, BaseClass.class);
System.out.println(maybeDerivedClass1.getInt());

But Gson does not know that derivedClass1Json was originally an instance of DerivedClass1, so this will print out 10.

How to solve this?

You need to build your own JsonDeserializer that handles such cases. The solution is not perfectly clean, but I could not come up with a better one.

First, add the following field to your base class:

@SerializedName("type")
private String typeName;

And initialize it in the base class constructor:

public BaseClass() {
    typeName = getClass().getName();
}

Now add the following class:

public class JsonDeserializerWithInheritance<T> implements JsonDeserializer<T> {

    @Override
    public T deserialize(JsonElement json, Type typeOfT, JsonDeserializationContext context)
            throws JsonParseException {
        JsonObject jsonObject = json.getAsJsonObject();
        JsonPrimitive classNamePrimitive = (JsonPrimitive) jsonObject.get("type");
        String className = classNamePrimitive.getAsString();
        Class<?> clazz;
        try {
            clazz = Class.forName(className);
        } catch (ClassNotFoundException e) {
            throw new JsonParseException(e.getMessage());
        }
        return context.deserialize(jsonObject, clazz);
    }
}

All that is left to do is hook everything up:

GsonBuilder builder = new GsonBuilder();
builder.registerTypeAdapter(BaseClass.class, new JsonDeserializerWithInheritance<BaseClass>());
Gson gson = builder.create();

And now, running the following code:

DerivedClass1 derivedClass1 = new DerivedClass1();
derivedClass1.b = 5;
derivedClass1.a = 10;
String derivedClass1Json = gson.toJson(derivedClass1);

BaseClass maybeDerivedClass1 = gson.fromJson(derivedClass1Json, BaseClass.class);
System.out.println(maybeDerivedClass1.getInt());

will print out 5.
Section 219.7: Parsing JSON property to enum with Gson

If you want to parse a String to an enum with Gson:

{"status" : "open"}

public enum Status {
    @SerializedName("open") OPEN,
    @SerializedName("waiting") WAITING,
    @SerializedName("confirm") CONFIRM,
    @SerializedName("ready") READY
}
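The section only defines the enum; the following minimal usage sketch (the Task wrapper class is a made-up name for illustration) shows how the mapping is applied in both directions:

// Hypothetical wrapper class for the {"status" : "open"} payload above
public class Task {
    @SerializedName("status")
    public Status status;
}

Gson gson = new Gson();
Task task = gson.fromJson("{\"status\" : \"open\"}", Task.class);
// task.status == Status.OPEN

String json = gson.toJson(task); // {"status":"open"}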
Section 219.8: Using Gson to load a JSON file from disk

This will load a JSON file from disk and convert it to the given type:

public static <T> T getFile(String fileName, Class<T> type) throws FileNotFoundException {
    Gson gson = new GsonBuilder()
            .create();
    FileReader json = new FileReader(fileName);
    return gson.fromJson(json, type);
}
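A hypothetical call site for the helper above; the file name "dogs.json" is an assumption, and the Dog class is the one from Section 219.1:

try {
    Dog[] dogs = getFile("dogs.json", Dog[].class);
} catch (FileNotFoundException e) {
    // handle the missing file
}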
Section 219.9: Using Gson as serializer with Retrofit

First of all you need to add the GsonConverterFactory to your build.gradle file:

compile 'com.squareup.retrofit2:converter-gson:2.1.0'

Then, you have to add the converter factory when creating the Retrofit service:

Gson gson = new GsonBuilder().create();
new Retrofit.Builder()
        .baseUrl(someUrl)
        .addConverterFactory(GsonConverterFactory.create(gson))
        .build()
        .create(RetrofitService.class);

You can add custom converters when creating the Gson object that you are passing to the factory, allowing you to create custom type conversions.

Section 219.10: Parsing json array to generic class using Gson

Suppose we have a JSON:

{
    "total_count": 132,
    "page_size": 2,
    "page_index": 1,
    "twitter_posts": [
        {
            "created_on": 1465935152,
            "tweet_id": 210462857140252672,
            "tweet": "Along with our new #Twitterbird, we've also updated our Display Guidelines",
            "url": "https://twitter.com/twitterapi/status/210462857140252672"
        },
        {
            "created_on": 1465995741,
            "tweet_id": 735128881808691200,
            "tweet": "Information on the upcoming changes to Tweets is now on the developer site",
            "url": "https://twitter.com/twitterapi/status/735128881808691200"
        }
    ]
}

We could parse this array into a custom Tweets (tweets list container) object manually, but it is easier to do it with the fromJson method:

Gson gson = new Gson();
String jsonArray = "....";
Tweets tweets = gson.fromJson(jsonArray, Tweets.class);

Suppose we have the two classes below:

class Tweets {
    @SerializedName("total_count")
    int totalCount;
    @SerializedName("page_size")
    int pageSize;
    @SerializedName("page_index")
    int pageIndex;
    // all you need to do is define a List variable with the correct name
    @SerializedName("twitter_posts")
    List<Tweet> tweets;
}

class Tweet {
    @SerializedName("created_on")
    long createdOn;
    @SerializedName("tweet_id")
    String tweetId;
    @SerializedName("tweet")
    String tweetBody;
    @SerializedName("url")
    String url;
}

And if you need to parse just a JSON array, you can use this code in your parsing:

String tweetsJsonArray = "[{.....},{.....}]";
List<Tweet> tweets = gson.fromJson(tweetsJsonArray, new TypeToken<List<Tweet>>() {}.getType());

Section 219.11: Custom JSON Deserializer using Gson

Imagine you have all dates in all responses in some custom format, for instance /Date(1465935152)/, and you want to apply a general rule to deserialize all JSON dates to java Date instances. In this case you need to implement a custom JSON deserializer.

Example of json:

{
    "id": 1,
    "created_on": "/Date(1465935152)/",
    "updated_on": "/Date(1465968945)/",
    "name": "Oleksandr"
}

Suppose we have the class below:

class User {
    @SerializedName("id")
    long id;
    @SerializedName("created_on")
    Date createdOn;
    @SerializedName("updated_on")
    Date updatedOn;
    @SerializedName("name")
    String name;
}

Custom deserializer:

class DateDeSerializer implements JsonDeserializer<Date> {
    private static final String DATE_PREFIX = "/Date(";
    private static final String DATE_SUFFIX = ")/";

    @Override
    public Date deserialize(JsonElement json, Type typeOfT, JsonDeserializationContext context)
            throws JsonParseException {
        String dateString = json.getAsString();
        if (dateString.startsWith(DATE_PREFIX) && dateString.endsWith(DATE_SUFFIX)) {
            dateString = dateString.substring(DATE_PREFIX.length(),
                    dateString.length() - DATE_SUFFIX.length());
        } else {
            throw new JsonParseException("Wrong date format: " + dateString);
        }
        return new Date(Long.parseLong(dateString) - TimeZone.getDefault().getRawOffset());
    }
}

And the usage:

Gson gson = new GsonBuilder()
        .registerTypeAdapter(Date.class, new DateDeSerializer())
        .create();
String json = "....";
User user = gson.fromJson(json, User.class);

Serialize and deserialize Jackson JSON strings with Date types

This also applies to the case where you want to make Gson Date conversion compatible with Jackson, for example. Jackson usually serializes Date to "milliseconds since epoch", whereas Gson uses a readable format like Aug 31, 2016 10:26:17 to represent Date. This leads to JsonSyntaxExceptions in Gson when you try to deserialize a Jackson-format Date. To circumvent this, you can add a custom serializer and a custom deserializer:

JsonSerializer<Date> ser = new JsonSerializer<Date>() {
    @Override
    public JsonElement serialize(Date src, Type typeOfSrc, JsonSerializationContext context) {
        return src == null ? null : new JsonPrimitive(src.getTime());
    }
};

JsonDeserializer<Date> deser = new JsonDeserializer<Date>() {
    @Override
    public Date deserialize(JsonElement json, Type typeOfT, JsonDeserializationContext context)
            throws JsonParseException {
        return json == null ? null : new Date(json.getAsLong());
    }
};

Gson gson = new GsonBuilder()
        .registerTypeAdapter(Date.class, ser)
        .registerTypeAdapter(Date.class, deser)
        .create();
Section 219.12: JSON Serialization/Deserialization with AutoValue and Gson

Import in your root build.gradle file:

classpath 'com.neenbedankt.gradle.plugins:android-apt:1.8'

Import in your app build.gradle file:

apt 'com.google.auto.value:auto-value:1.2'
apt 'com.ryanharter.auto.value:auto-value-gson:0.3.1'
provided 'com.jakewharton.auto.value:auto-value-annotations:1.2-update1'
provided 'org.glassfish:javax.annotation:10.0-b28'

Create your object with AutoValue:

@AutoValue
public abstract class SignIn {

    @SerializedName("signin_token")
    public abstract String signinToken();

    public abstract String username();

    public static TypeAdapter<SignIn> typeAdapter(Gson gson) {
        return new AutoValue_SignIn.GsonTypeAdapter(gson);
    }

    public static SignIn create(String signin, String username) {
        return new AutoValue_SignIn(signin, username);
    }
}

Create your Gson converter with your GsonBuilder:

Gson gson = new GsonBuilder()
        .registerTypeAdapterFactory(new AutoValueGsonTypeAdapterFactory())
        .create();

Deserialize

String myJsonData = "{ \"signin_token\": \"mySigninToken\", \"username\": \"myUsername\" }";
SignIn signInData = gson.fromJson(myJsonData, SignIn.class);

Serialize

SignIn myData = SignIn.create("myTokenData", "myUsername");
String myJsonData = gson.toJson(myData);

Using Gson is a great way to simplify serialization and deserialization code by using POJO objects. The side effect is that reflection is costly performance-wise. That's why using AutoValue-Gson to generate a custom TypeAdapter avoids this reflection cost, while staying very simple to update when an API change happens.

Chapter 220: Android Architecture Components

Android Architecture Components is a new collection of libraries that help you design robust, testable, and maintainable apps. The main parts are: Lifecycles, ViewModel, LiveData, and Room.
Section 220.1: Using Lifecycle in AppCompatActivity

Extend your activity from this activity:

public abstract class BaseCompatLifecycleActivity extends AppCompatActivity implements LifecycleRegistryOwner {
    // We need this class, because LifecycleActivity extends FragmentActivity, not AppCompatActivity
    @NonNull
    private final LifecycleRegistry lifecycleRegistry = new LifecycleRegistry(this);

    @NonNull
    @Override
    public LifecycleRegistry getLifecycle() {
        return lifecycleRegistry;
    }
}

Section 220.2: Add Architecture Components

Project build.gradle:

allprojects {
    repositories {
        jcenter()
        // Add this if you use Gradle 4.0+
        google()
        // Add this if you use Gradle < 4.0
        maven { url 'https://maven.google.com' }
    }
}

ext {
    archVersion = '1.0.0-alpha5'
}

Application build.gradle:

// For Lifecycles, LiveData, and ViewModel
compile "android.arch.lifecycle:runtime:$archVersion"
compile "android.arch.lifecycle:extensions:$archVersion"
annotationProcessor "android.arch.lifecycle:compiler:$archVersion"

// For Room
compile "android.arch.persistence.room:runtime:$archVersion"
annotationProcessor "android.arch.persistence.room:compiler:$archVersion"

// For testing Room migrations
testCompile "android.arch.persistence.room:testing:$archVersion"

// For Room RxJava support
compile "android.arch.persistence.room:rxjava2:$archVersion"

Section 220.3: ViewModel with LiveData transformations

public class BaseViewModel extends ViewModel {
    private static final int TAG_SEGMENT_INDEX = 2;
    private static final int VIDEOS_LIMIT = 100;

    // We save input params here
    private final MutableLiveData<Pair<String, String>> urlWithReferrerLiveData = new MutableLiveData<>();

    // transform specific uri param to "tag"
    private final LiveData<String> currentTagLiveData = Transformations.map(urlWithReferrerLiveData, pair -> {
        Uri uri = Uri.parse(pair.first);
        List<String> segments = uri.getPathSegments();
        if (segments.size() > TAG_SEGMENT_INDEX)
            return segments.get(TAG_SEGMENT_INDEX);
        return null;
    });

    // transform "tag" to videos list
    private final LiveData<List<VideoItem>> videoByTagData =
            Transformations.switchMap(currentTagLiveData,
                    tag -> contentRepository.getVideoByTag(tag, VIDEOS_LIMIT));

    ContentRepository contentRepository;

    public BaseViewModel() {
        // some inits
    }

    public void setUrlWithReferrer(String url, String referrer) {
        // setting a value activates observers and transformations
        urlWithReferrerLiveData.setValue(new Pair<>(url, referrer));
    }

    public LiveData<List<VideoItem>> getVideoByTagData() {
        return videoByTagData;
    }
}

Somewhere in the UI:

public class VideoActivity extends BaseCompatLifecycleActivity {
    private BaseViewModel viewModel;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Get ViewModel
        viewModel = ViewModelProviders.of(this).get(BaseViewModel.class);
        // Add observer
        viewModel.getVideoByTagData().observe(this, data -> {
            // some checks
            adapter.updateData(data);
        });

        ...

        if (savedInstanceState == null) {
            // init loading only at first creation
            // you just set the params
            viewModel.setUrlWithReferrer(url, referrer);
        }
    }
}
Section 220.4: Room persistence

Room requires four parts: a database class, DAO classes, entity classes, and migration classes (where currently you may use only DDL methods):

Entity classes

// Set a custom table name, add indexes
@Entity(tableName = "videos",
        indices = {@Index("title")}
)
public final class VideoItem {
    @PrimaryKey // required
    public long articleId;
    public String title;
    public String url;
}

// Use ForeignKey to set up a table relation
@Entity(tableName = "tags",
        indices = {@Index("score"), @Index("videoId"), @Index("value")},
        foreignKeys = @ForeignKey(entity = VideoItem.class,
                parentColumns = "articleId",
                childColumns = "videoId",
                onDelete = ForeignKey.CASCADE)
)
public final class VideoTag {
    @PrimaryKey
    public long id;
    public long videoId;
    public String displayName;
    public String value;
    public double score;
}

DAO classes

@Dao
public interface VideoDao {
    // Create insert with custom conflict strategy
    @Insert(onConflict = OnConflictStrategy.REPLACE)
    void saveVideos(List<VideoItem> videos);

    // Simple update
    @Update
    void updateVideos(VideoItem... videos);

    @Query("DELETE FROM tags WHERE videoId = :videoId")
    void deleteTagsByVideoId(long videoId);

    // Custom query; you may use select/delete here
    @Query("SELECT v.* FROM tags t LEFT JOIN videos v ON v.articleId = t.videoId WHERE t.value = :tag ORDER BY updatedAt DESC LIMIT :limit")
    LiveData<List<VideoItem>> getVideosByTag(String tag, int limit);
}

Database class

// register your entities and DAOs
@Database(entities = {VideoItem.class, VideoTag.class}, version = 2)
public abstract class ContentDatabase extends RoomDatabase {
    public abstract VideoDao videoDao();
}

Migrations

public final class Migrations {

    private static final Migration MIGRATION_1_2 = new Migration(1, 2) {
        @Override
        public void migrate(SupportSQLiteDatabase database) {
            final String[] sqlQueries = {
                    "CREATE TABLE IF NOT EXISTS `tags` (`id` INTEGER PRIMARY KEY AUTOINCREMENT,"
                            + " `videoId` INTEGER, `displayName` TEXT, `value` TEXT, `score` REAL,"
                            + " FOREIGN KEY(`videoId`) REFERENCES `videos`(`articleId`)"
                            + " ON UPDATE NO ACTION ON DELETE CASCADE )",
                    "CREATE INDEX `index_tags_score` ON `tags` (`score`)",
                    "CREATE INDEX `index_tags_videoId` ON `tags` (`videoId`)"};
            for (String query : sqlQueries) {
                database.execSQL(query);
            }
        }
    };

    public static final Migration[] ALL = {MIGRATION_1_2};

    private Migrations() {
    }
}

Use in your Application class, or provide via Dagger:

ContentDatabase provideContentDatabase() {
    return Room.databaseBuilder(context, ContentDatabase.class, "data.db")
            .addMigrations(Migrations.ALL).build();
}

Write your repository:

public final class ContentRepository {
    private final ContentDatabase db;
    private final VideoDao videoDao;

    public ContentRepository(ContentDatabase contentDatabase, VideoDao videoDao) {
        this.db = contentDatabase;
        this.videoDao = videoDao;
    }

    public LiveData<List<VideoItem>> getVideoByTag(@Nullable String tag, int limit) {
        // you may fetch from network, save to database ....
        return videoDao.getVideosByTag(tag, limit);
    }
}

Use in the ViewModel:

ContentRepository contentRepository = ...;
contentRepository.getVideoByTag(tag, limit);
Section 220.5: Custom LiveData

You may write a custom LiveData if you need custom logic. Don't write a custom class if you only need to transform data (use the Transformations class instead):

public class LocationLiveData extends LiveData<Location> {
    private LocationManager locationManager;

    private LocationListener listener = new LocationListener() {
        @Override
        public void onLocationChanged(Location location) {
            setValue(location);
        }

        @Override
        public void onStatusChanged(String provider, int status, Bundle extras) {
            // Do something
        }

        @Override
        public void onProviderEnabled(String provider) {
            // Do something
        }

        @Override
        public void onProviderDisabled(String provider) {
            // Do something
        }
    };

    public LocationLiveData(Context context) {
        locationManager = (LocationManager) context.getSystemService(Context.LOCATION_SERVICE);
    }

    @Override
    protected void onActive() {
        // We have observers, start working
        locationManager.requestLocationUpdates(LocationManager.GPS_PROVIDER, 0, 0, listener);
    }

    @Override
    protected void onInactive() {
        // We have no observers, stop working
        locationManager.removeUpdates(listener);
    }
}

Section 220.6: Custom Lifecycle-aware component

Each UI component's lifecycle changes as shown in the lifecycle state diagram (figure omitted here). You may create a component that will be notified on lifecycle state changes:

public class MyLocationListener implements LifecycleObserver {
    private boolean enabled = false;
    private Lifecycle lifecycle;

    public MyLocationListener(Context context, Lifecycle lifecycle, Callback callback) {
        ...
    }

    @OnLifecycleEvent(Lifecycle.Event.ON_START)
    void start() {
        if (enabled) {
            // connect
        }
    }

    public void enable() {
        enabled = true;
        if (lifecycle.getState().isAtLeast(STARTED)) {
            // connect if not connected
        }
    }

    @OnLifecycleEvent(Lifecycle.Event.ON_STOP)
    void stop() {
        // disconnect if connected
    }
}
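The component above still has to be registered on a lifecycle. A minimal wiring sketch inside a LifecycleOwner such as the BaseCompatLifecycleActivity from Section 220.1, assuming Callback is a single-method interface receiving location updates (its shape is not defined in the original example):

MyLocationListener myLocationListener =
        new MyLocationListener(this, getLifecycle(), location -> {
            // update UI with the new location
        });
getLifecycle().addObserver(myLocationListener);
myLocationListener.enable();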
Chapter 221: Jackson

Jackson is a multi-purpose Java library for processing JSON. Jackson aims to be the best possible combination of fast, correct, lightweight, and ergonomic for developers.

Jackson features:

Multiple processing modes that work well together
Not only annotations, but also mix-in annotations
Full support for generic types
Support for polymorphic types

Section 221.1: Full Data Binding Example

JSON data

{
    "name" : {
        "first" : "Joe",
        "last" : "Sixpack"
    },
    "gender" : "MALE",
    "verified" : false,
    "userImage" : "keliuyue"
}

It takes two lines of Java to turn it into a User instance:

ObjectMapper mapper = new ObjectMapper(); // can reuse, share globally
User user = mapper.readValue(new File("user.json"), User.class);

User.class

public class User {

    public enum Gender {MALE, FEMALE};

    public static class Name {
        private String _first, _last;

        public String getFirst() {
            return _first;
        }

        public String getLast() {
            return _last;
        }

        public void setFirst(String s) {
            _first = s;
        }

        public void setLast(String s) {
            _last = s;
        }
    }

    private Gender _gender;
    private Name _name;
    private boolean _isVerified;
    private byte[] _userImage;

    public Name getName() {
        return _name;
    }

    public boolean isVerified() {
        return _isVerified;
    }

    public Gender getGender() {
        return _gender;
    }

    public byte[] getUserImage() {
        return _userImage;
    }

    public void setName(Name n) {
        _name = n;
    }

    public void setVerified(boolean b) {
        _isVerified = b;
    }

    public void setGender(Gender g) {
        _gender = g;
    }

    public void setUserImage(byte[] b) {
        _userImage = b;
    }
}

Marshalling back to JSON is similarly straightforward:

mapper.writeValue(new File("user-modified.json"), user);
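If you need the JSON as a String rather than a file, the standard ObjectMapper API offers writeValueAsString. A minimal sketch reusing the mapper and user from above (exception handling omitted):

String jsonString = mapper.writeValueAsString(user);
User parsed = mapper.readValue(jsonString, User.class);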
Chapter 222: Smartcard

Section 222.1: Smart card send and receive

For the connection, here is a snippet to help you understand:

// Allows you to enumerate and communicate with connected USB devices.
UsbManager mUsbManager = (UsbManager) getSystemService(Context.USB_SERVICE);

// Explicitly asking for permission
final String ACTION_USB_PERMISSION = "com.android.example.USB_PERMISSION";
PendingIntent mPermissionIntent = PendingIntent.getBroadcast(this, 0, new Intent(ACTION_USB_PERMISSION), 0);

HashMap<String, UsbDevice> deviceList = mUsbManager.getDeviceList();
UsbDevice device = deviceList.get("//the device you want to work with");
if (device != null) {
    mUsbManager.requestPermission(device, mPermissionIntent);
}

Now you have to understand that in Java the communication usually takes place using the package javax.smartcard, which is not available for Android, so take a look here for an idea of how you can communicate or send/receive an APDU (smartcard command).

Now, as stated in the answer mentioned above:

You cannot simply send an APDU (smartcard command) over the bulk-out endpoint and expect to receive a response APDU over the bulk-in endpoint.

For getting the endpoints, see the code snippet below:

UsbEndpoint epOut = null, epIn = null;
UsbInterface usbInterface;
UsbDeviceConnection connection = mUsbManager.openDevice(device);

for (int i = 0; i < device.getInterfaceCount(); i++) {
    usbInterface = device.getInterface(i);
    connection.claimInterface(usbInterface, true);

    for (int j = 0; j < usbInterface.getEndpointCount(); j++) {
        UsbEndpoint ep = usbInterface.getEndpoint(j);

        if (ep.getType() == UsbConstants.USB_ENDPOINT_XFER_BULK) {
            if (ep.getDirection() == UsbConstants.USB_DIR_OUT) {
                // from host to device
                epOut = ep;
            } else if (ep.getDirection() == UsbConstants.USB_DIR_IN) {
                // from device to host
                epIn = ep;
            }
        }
    }
}

Now you have the bulk-in and bulk-out endpoints to send and receive APDU command and APDU response blocks.

For sending commands, see the code snippet below:

public void write(UsbDeviceConnection connection, UsbEndpoint epOut, byte[] command) {
    result = new StringBuilder();
    connection.bulkTransfer(epOut, command, command.length, TIMEOUT);
    // For printing logs you can use the result variable
    for (byte bb : command) {
        result.append(String.format(" %02X ", bb));
    }
}

And for receiving/reading a response, see the code snippet below:

public int read(UsbDeviceConnection connection, UsbEndpoint epIn) {
    result = new StringBuilder();
    final byte[] buffer = new byte[epIn.getMaxPacketSize()];
    int byteCount = 0;
    byteCount = connection.bulkTransfer(epIn, buffer, buffer.length, TIMEOUT);

    // For printing logs you can use the result variable
    if (byteCount >= 0) {
        for (byte bb : buffer) {
            result.append(String.format(" %02X ", bb));
        }
        // Buffer received was: result.toString()
    } else {
        // Something went wrong, as the count was: byteCount
    }
    return byteCount;
}

Now, if you see this answer here, the first command to be sent is the PC_to_RDR_IccPowerOn command to activate the card, which you can create by reading section 6.1.1 of the USB Device Class Specifications doc here.

Now let's take an example of this command, like the one here: 62000000000000000000. This is how you can send it (the hex string converted to its byte representation, since write() takes a byte array):

byte[] powerOn = new byte[]{
        (byte) 0x62, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00};
write(connection, epOut, powerOn);

Now, after you have successfully sent the APDU command, you can read the response using:

read(connection, epIn);

And receive something like:

80 18000000 00 00 00 00 00 3BBF11008131FE45455041000000000000000000000000F1

The response received will be in the result variable of the read() method shown above.
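The snippet at the start of this section requests USB permission but never shows how the result comes back. As a hedged sketch that is not part of the original example, the grant is delivered via a broadcast matching the custom ACTION_USB_PERMISSION action used above:

private final BroadcastReceiver usbPermissionReceiver = new BroadcastReceiver() {
    @Override
    public void onReceive(Context context, Intent intent) {
        if (ACTION_USB_PERMISSION.equals(intent.getAction())) {
            UsbDevice device = intent.getParcelableExtra(UsbManager.EXTRA_DEVICE);
            boolean granted = intent.getBooleanExtra(UsbManager.EXTRA_PERMISSION_GRANTED, false);
            if (granted && device != null) {
                // safe to open the device and claim interfaces now
            }
        }
    }
};

// Register before calling requestPermission():
registerReceiver(usbPermissionReceiver, new IntentFilter(ACTION_USB_PERMISSION));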
Chapter 223: Security

Section 223.1: Verifying App Signature - Tamper Detection

This technique details how to ensure that your .apk has been signed with your developer certificate, and leverages the fact that the certificate remains consistent and that only you have access to it. We can break this technique into 3 simple steps:

Find your developer certificate signature.
Embed your signature in a String constant in your app.
Check that the signature at runtime matches our embedded developer signature.

Here's the code snippet:

private static final int VALID = 0;
private static final int INVALID = 1;

public static int checkAppSignature(Context context) {
    try {
        PackageInfo packageInfo = context.getPackageManager()
                .getPackageInfo(context.getPackageName(), PackageManager.GET_SIGNATURES);

        for (Signature signature : packageInfo.signatures) {
            byte[] signatureBytes = signature.toByteArray();

            MessageDigest md = MessageDigest.getInstance("SHA");
            md.update(signature.toByteArray());

            final String currentSignature = Base64.encodeToString(md.digest(), Base64.DEFAULT);
            Log.d("REMOVE_ME", "Include this string as a value for SIGNATURE:" + currentSignature);

            // compare signatures
            if (SIGNATURE.equals(currentSignature)) {
                return VALID;
            }
        }
    } catch (Exception e) {
        // assumes an issue in checking the signature, but we let the caller decide what to do
    }
    return INVALID;
}
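A hypothetical call site for the snippet above; SIGNATURE is the embedded developer signature constant the log line tells you to fill in, and the reaction to tampering is up to you:

// e.g. in onCreate() of your launcher activity:
if (checkAppSignature(this) != VALID) {
    // The APK was re-signed with a different certificate; react accordingly,
    // for example refuse to run or notify your server.
    finish();
}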
        SharedPreferences.Editor editor = getSharedPreferences("pswd", MODE_PRIVATE).edit();
        String encryptedPassword = "";
        if (password != null && !password.isEmpty()) {
            SecretKey secretKey = null;
            try {
                secretKey = getSecretKey(password, generateSalt());
                byte[] encoded = secretKey.getEncoded();
                String input = byteArrayToHexString(encoded);
                editor.putString("S_KEY", input);
                encryptedPassword = encrypt(secretKey, password);
            } catch (Exception e) {
                e.printStackTrace();
            }
            editor.putString("token", encryptedPassword);
            editor.commit();
        }
        return encryptedPassword;
    }

    public static String encrypt(SecretKey secret, String cleartext) throws Exception {
        try {
            byte[] iv = generateIv();
            String ivHex = byteArrayToHexString(iv);
            IvParameterSpec ivspec = new IvParameterSpec(iv);
            Cipher encryptionCipher = Cipher.getInstance(CIPHER_ALGORITHM, PROVIDER);
            encryptionCipher.init(Cipher.ENCRYPT_MODE, secret, ivspec);
            byte[] encryptedText = encryptionCipher.doFinal(cleartext.getBytes("UTF-8"));
            String encryptedHex = byteArrayToHexString(encryptedText);
            return ivHex + encryptedHex;
        } catch (Exception e) {
            Log.e("SecurityException", e.getCause().getLocalizedMessage());
            throw new Exception("Unable to encrypt", e);
        }
    }

    public static String decrypt(SecretKey secret, String encrypted) throws Exception {
        try {
            Cipher decryptionCipher = Cipher.getInstance(CIPHER_ALGORITHM, PROVIDER);
            String ivHex = encrypted.substring(0, IV_LENGTH * 2);
            String encryptedHex = encrypted.substring(IV_LENGTH * 2);
            IvParameterSpec ivspec = new IvParameterSpec(hexStringToByteArray(ivHex));
            decryptionCipher.init(Cipher.DECRYPT_MODE, secret, ivspec);
            byte[] decryptedText = decryptionCipher.doFinal(hexStringToByteArray(encryptedHex));
            String decrypted = new String(decryptedText, "UTF-8");
            return decrypted;
        } catch (Exception e) {
            Log.e("SecurityException", e.getCause().getLocalizedMessage());
            throw new Exception("Unable to decrypt", e);
        }
    }

    public static String generateSalt() throws Exception {
        try {
            SecureRandom random = SecureRandom.getInstance(RANDOM_ALGORITHM);
            byte[] salt = new byte[SALT_LENGTH];
            random.nextBytes(salt);
            String saltHex = byteArrayToHexString(salt);
            return saltHex;
        } catch (Exception e) {
            throw new Exception("Unable to generate salt", e);
        }
    }

    public static String byteArrayToHexString(byte[] b) {
        StringBuffer sb = new StringBuffer(b.length * 2);
        for (int i = 0; i < b.length; i++) {
            int v = b[i] & 0xff;
            if (v < 16) {
                sb.append('0');
            }
            sb.append(Integer.toHexString(v));
        }
        return sb.toString().toUpperCase();
    }

    public static byte[] hexStringToByteArray(String s) {
        byte[] b = new byte[s.length() / 2];
        for (int i = 0; i < b.length; i++) {
            int index = i * 2;
            int v = Integer.parseInt(s.substring(index, index + 2), 16);
            b[i] = (byte) v;
        }
        return b;
    }

    public static SecretKey getSecretKey(String password, String salt) throws Exception {
        try {
            PBEKeySpec pbeKeySpec = new PBEKeySpec(password.toCharArray(), hexStringToByteArray(salt), PBE_ITERATION_COUNT, 256);
            SecretKeyFactory factory = SecretKeyFactory.getInstance(PBE_ALGORITHM, PROVIDER);
            SecretKey tmp = factory.generateSecret(pbeKeySpec);
            SecretKey secret = new SecretKeySpec(tmp.getEncoded(), SECRET_KEY_ALGORITHM);
            return secret;
        } catch (Exception e) {
            throw new Exception("Unable to get secret key", e);
        }
    }

    private static byte[] generateIv() throws NoSuchAlgorithmException, NoSuchProviderException {
        SecureRandom random = SecureRandom.getInstance(RANDOM_ALGORITHM);
        byte[] iv = new byte[IV_LENGTH];
        random.nextBytes(iv);
        return iv;
    }
}
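As a quick sanity check of the static helpers above, the following minimal sketch derives a key, encrypts a string, and round-trips it back. This is hypothetical usage, not from the original example; it assumes it runs where the helpers are in scope (e.g. inside MainActivity), and exception handling is omitted:

// Minimal round-trip sketch using the helpers defined above (hypothetical usage).
String salt = generateSalt();                  // random 20-byte salt, hex-encoded
SecretKey key = getSecretKey("s3cret!", salt); // PBE-derived 256-bit AES key
String cipherText = encrypt(key, "hello");     // IV hex is prepended to the ciphertext hex
String plainText = decrypt(key, cipherText);   // recovers "hello" again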
Chapter 225: Secure SharedPreferences

Parameter | Definition
input     | String value to encrypt or decrypt.

Shared Preferences are key-value based XML files, located under /data/data/package_name/shared_prefs/<filename.xml>. So a user with root privileges can navigate to this location and change its values. If you want to protect values in your shared preferences, you can write a simple encryption and decryption mechanism. You should know, though, that Shared Preferences were never built to be secure; they are just a simple way to persist data.

Section 225.1: Securing a Shared Preference

Simple Codec

Here, to illustrate the working principle, we can use simple encryption and decryption as follows.

public static String encrypt(String input) {
    // Simple encryption, not very strong!
    return Base64.encodeToString(input.getBytes(), Base64.DEFAULT);
}

public static String decrypt(String input) {
    return new String(Base64.decode(input, Base64.DEFAULT));
}

Implementation Technique

public static String pref_name = "My_Shared_Pref";

// To write
SharedPreferences preferences = getSharedPreferences(pref_name, MODE_PRIVATE);
SharedPreferences.Editor editor = preferences.edit();
editor.putString(encrypt("password"), encrypt("my_dummy_pass"));
editor.apply(); // Or commit if targeting old devices

// To read
SharedPreferences preferences = getSharedPreferences(pref_name, MODE_PRIVATE);
String passEncrypted = preferences.getString(encrypt("password"), encrypt("default_value"));
String password = decrypt(passEncrypted);

Chapter 227: SQLite

SQLite is a relational database management system written in C. To begin working with SQLite databases within the Android framework, define a class that extends SQLiteOpenHelper, and customize as needed.

Section 227.1: onUpgrade() method

SQLiteOpenHelper is a helper class to manage database creation and version management.
In this class, the onUpgrade() method is responsible for upgrading the database when you make changes to the schema. It is called when the database file already exists, but its version is lower than the one specified in the current version of the app. For each database version, the specific changes you made have to be applied.

@Override
public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
    // Loop through each version when an upgrade occurs.
    for (int version = oldVersion + 1; version <= newVersion; version++) {
        switch (version) {
            case 2:
                // Apply changes made in version 2
                db.execSQL("ALTER TABLE " + TABLE_PRODUCTS + " ADD COLUMN " + COLUMN_DESCRIPTION + " TEXT;");
                break;
            case 3:
                // Apply changes made in version 3
                db.execSQL(CREATE_TABLE_TRANSACTION);
                break;
        }
    }
}

Section 227.2: Reading data from a Cursor

Here is an example of a method that would live inside a SQLiteOpenHelper subclass. It uses the searchTerm String to filter the results, iterates through the Cursor's contents, and returns those contents in a List of Product objects.

First, define the Product POJO class that will be the container for each row retrieved from the database:

public class Product {
    long mId;
    String mName;
    String mDescription;
    float mValue;

    public Product(long id, String name, String description, float value) {
        mId = id;
        mName = name;
        mDescription = description;
        mValue = value;
    }
}

Then, define the method that will query the database and return a List of Product objects:

public List<Product> searchForProducts(String searchTerm) {
    // When reading data one should always just get a readable database.
    final SQLiteDatabase database = this.getReadableDatabase();
    final Cursor cursor = database.query(
            // Name of the table to read from
            TABLE_NAME,
            // String array of the columns which are supposed to be read
            new String[]{COLUMN_NAME, COLUMN_DESCRIPTION, COLUMN_VALUE},
            // The selection argument which specifies which rows are read.
            // ? symbols are parameters.
            COLUMN_NAME + " LIKE ?",
            // The actual parameter values for the selection as a String array.
            // The ? above takes its value from here.
            new String[]{"%" + searchTerm + "%"},
            // GroupBy clause. Specify a column name to group similar values
            // in that column together.
            null,
            // Having clause. When using the GroupBy clause this allows you to
            // specify which groups to include.
            null,
            // OrderBy clause. Specify a column name here to order the results
            // according to that column. Optionally append ASC or DESC to specify
            // an ascending or descending order.
            null
    );

    // To increase performance, first get the index of each column in the cursor
    final int idIndex = cursor.getColumnIndex(COLUMN_ID);
    final int nameIndex = cursor.getColumnIndex(COLUMN_NAME);
    final int descriptionIndex = cursor.getColumnIndex(COLUMN_DESCRIPTION);
    final int valueIndex = cursor.getColumnIndex(COLUMN_VALUE);

    try {
        // If moveToFirst() returns false then the cursor is empty
        if (!cursor.moveToFirst()) {
            return new ArrayList<>();
        }

        final List<Product> products = new ArrayList<>();
        do {
            // Read the values of a row in the table using the indexes acquired above
            final long id = cursor.getLong(idIndex);
            final String name = cursor.getString(nameIndex);
            final String description = cursor.getString(descriptionIndex);
            final float value = cursor.getFloat(valueIndex);
            products.add(new Product(id, name, description, value));
        } while (cursor.moveToNext());

        return products;
    } finally {
        // Don't forget to close the Cursor once you are done, to avoid memory leaks.
        // Using a try/finally like in this example is usually the best way to handle this.
        cursor.close();
        // close the database
        database.close();
    }
}

Section 227.3: Using the SQLiteOpenHelper class

public class DatabaseHelper extends SQLiteOpenHelper {

    private static final String DATABASE_NAME = "Example.db";
    private static final int DATABASE_VERSION = 3;

    // For all primary keys, _id should be used as the column name
    public static final String COLUMN_ID = "_id";

    // Definition of table and column names of the Products table
    public static final String TABLE_PRODUCTS = "Products";
    public static final String COLUMN_NAME = "Name";
    public static final String COLUMN_DESCRIPTION = "Description";
    public static final String COLUMN_VALUE = "Value";

    // Definition of table and column names of the Transactions table
    public static final String TABLE_TRANSACTIONS = "Transactions";
    public static final String COLUMN_PRODUCT_ID = "ProductId";
    public static final String COLUMN_AMOUNT = "Amount";

    // Create statement for the Products table
    private static final String CREATE_TABLE_PRODUCT =
            "CREATE TABLE " + TABLE_PRODUCTS + " (" +
            COLUMN_ID + " INTEGER PRIMARY KEY, " +
            COLUMN_DESCRIPTION + " TEXT, " +
            COLUMN_NAME + " TEXT, " +
            COLUMN_VALUE + " REAL" + ");";

    // Create statement for the Transactions table
    private static final String CREATE_TABLE_TRANSACTION =
            "CREATE TABLE " + TABLE_TRANSACTIONS + " (" +
            COLUMN_ID + " INTEGER PRIMARY KEY," +
            COLUMN_PRODUCT_ID + " INTEGER," +
            COLUMN_AMOUNT + " INTEGER," +
            " FOREIGN KEY (" + COLUMN_PRODUCT_ID + ") REFERENCES " + TABLE_PRODUCTS + "(" + COLUMN_ID + ")" + ");";

    public DatabaseHelper(Context context) {
        super(context, DATABASE_NAME, null, DATABASE_VERSION);
    }

    @Override
    public void onCreate(SQLiteDatabase db) {
        // onCreate should always create your most up to date database.
        // This method is called when the app is newly installed.
        db.execSQL(CREATE_TABLE_PRODUCT);
        db.execSQL(CREATE_TABLE_TRANSACTION);
    }

    @Override
    public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
        // onUpgrade is responsible for upgrading the database when you make
        // changes to the schema. For each version the specific changes you made
        // in that version have to be applied.
        for (int version = oldVersion + 1; version <= newVersion; version++) {
            switch (version) {
                case 2:
                    db.execSQL("ALTER TABLE " + TABLE_PRODUCTS + " ADD COLUMN " + COLUMN_DESCRIPTION + " TEXT;");
                    break;
                case 3:
                    db.execSQL(CREATE_TABLE_TRANSACTION);
                    break;
            }
        }
    }
}

Section 227.4: Insert data into database

// You need a writable database to insert data
final SQLiteDatabase database = openHelper.getWritableDatabase();

// Create a ContentValues instance which contains the data for each column.
// You do not need to specify a value for the PRIMARY KEY column;
// unique values for it are automatically generated.
final ContentValues values = new ContentValues();
values.put(COLUMN_NAME, model.getName());
values.put(COLUMN_DESCRIPTION, model.getDescription());
values.put(COLUMN_VALUE, model.getValue());

// This call performs the insert.
// The return value is the rowId or primary key value for the new row!
// If this method returns -1 then the insert has failed.
final long id = database.insert(
        // The table name in which the data will be inserted
        TABLE_NAME,
        // String: optional; may be null. If your provided values are empty,
        // no column names are known and an empty row can't be inserted.
        // If not set to null, this parameter provides the name of a nullable
        // column, used to explicitly insert a NULL value into that column.
        null,
        // The ContentValues instance which contains the data
        values
);

Section 227.5: Bulk insert

Here is an example of inserting large chunks of data at once. All the data you want to insert is gathered inside of a ContentValues array.

@Override
public int bulkInsert(Uri uri, ContentValues[] values) {
    int count = 0;
    String table = null;
    int uriType = IChatContract.MessageColumns.uriMatcher.match(uri);
    switch (uriType) {
        case IChatContract.MessageColumns.MESSAGES:
            table = IChatContract.MessageColumns.TABLE_NAME;
            break;
    }
    mDatabase.beginTransaction();
    try {
        for (ContentValues cv : values) {
            long rowID = mDatabase.insert(table, " ", cv);
            if (rowID <= 0) {
                throw new SQLException("Failed to insert row into " + uri);
            }
        }
        mDatabase.setTransactionSuccessful();
        getContext().getContentResolver().notifyChange(uri, null);
        count = values.length;
    } finally {
        mDatabase.endTransaction();
    }
    return count;
}

And here is an example of how to use it:

ContentResolver resolver = mContext.getContentResolver();
ContentValues[] valueList = new ContentValues[object.size()];
// add whatever you like to the valueList
resolver.bulkInsert(IChatContract.MessageColumns.CONTENT_URI, valueList);

Section 227.6: Create a Contract, Helper and Provider for SQLite in Android

DBContract.java

// Define the tables and columns of your local database
public final class DBContract {

    /* The content authority is a name for the content provider; it is convenient
       to use the app package name so it is unique on the device */
    public static final String CONTENT_AUTHORITY = "com.yourdomain.yourapp";

    // Use CONTENT_AUTHORITY to create all the database URIs that the app will use to link to the content provider.
    public static final Uri BASE_CONTENT_URI = Uri.parse("content://" + CONTENT_AUTHORITY);

    /* The name of the URI path, which can be the same as the name of your table.
       This will translate to content://com.yourdomain.yourapp/User/ as a valid URI */
    public static final String PATH_USER = "User";

    // To prevent someone from accidentally instantiating the contract class,
    // give it a private empty constructor.
    private DBContract() {}

    // Inner class that defines the user table
    public static final class UserEntry implements BaseColumns {

        public static final Uri CONTENT_URI = BASE_CONTENT_URI.buildUpon().appendPath(PATH_USER).build();
        public static final String CONTENT_TYPE = ContentResolver.CURSOR_DIR_BASE_TYPE + "/" + CONTENT_AUTHORITY + "/" + PATH_USER;

        // Name of the table
        public static final String TABLE_NAME = "User";

        // Columns of the user table
        public static final String COLUMN_Name = "Name";
        public static final String COLUMN_Password = "Password";

        public static Uri buildUri(long id) {
            return ContentUris.withAppendedId(CONTENT_URI, id);
        }
    }
}

DBHelper.java

public class DBHelper extends SQLiteOpenHelper {

    // If you change the schema of the database, you must increment this number
    private static final int DATABASE_VERSION = 1;
    static final String DATABASE_NAME = "mydatabase.db";
    private static DBHelper mInstance = null;

    public static DBHelper getInstance(Context ctx) {
        if (mInstance == null) {
            mInstance = new DBHelper(ctx.getApplicationContext());
        }
        return mInstance;
    }

    public DBHelper(Context context) {
        super(context, DATABASE_NAME, null, DATABASE_VERSION);
    }

    public int GetDatabase_Version() {
        return DATABASE_VERSION;
    }

    @Override
    public void onCreate(SQLiteDatabase sqLiteDatabase) {
        // Create the users table
        final String SQL_CREATE_TABLE_USERS = "CREATE TABLE " + UserEntry.TABLE_NAME + " (" +
                UserEntry._ID + " INTEGER PRIMARY KEY, " +
                UserEntry.COLUMN_Name + " TEXT , " +
                UserEntry.COLUMN_Password + " TEXT " +
                " ); ";
        sqLiteDatabase.execSQL(SQL_CREATE_TABLE_USERS);
    }

    @Override
    public void onUpgrade(SQLiteDatabase sqLiteDatabase, int oldVersion, int newVersion) {
        sqLiteDatabase.execSQL("DROP TABLE IF EXISTS " + UserEntry.TABLE_NAME);
    }
}

DBProvider.java

public class DBProvider extends ContentProvider {

    private static final UriMatcher sUriMatcher = buildUriMatcher();
    private DBHelper mDBHelper;
    private Context mContext;

    static final int USER = 100;

    static UriMatcher buildUriMatcher() {
        final UriMatcher matcher = new UriMatcher(UriMatcher.NO_MATCH);
        final String authority = DBContract.CONTENT_AUTHORITY;
        matcher.addURI(authority, DBContract.PATH_USER, USER);
        return matcher;
    }

    @Override
    public boolean onCreate() {
        mDBHelper = new DBHelper(getContext());
        return false;
    }

    public DBProvider(Context context) {
        mDBHelper = DBHelper.getInstance(context);
        mContext = context;
    }

    @Override
    public String getType(Uri uri) {
        // determine what type of Uri it is
        final int match = sUriMatcher.match(uri);
        switch (match) {
            case USER:
                return DBContract.UserEntry.CONTENT_TYPE;
            default:
                throw new UnsupportedOperationException("Uri unknown: " + uri);
        }
    }

    @Override
    public Cursor query(Uri uri, String[] projection, String selection, String[] selectionArgs, String sortOrder) {
        Cursor retCursor = null;
        try {
            switch (sUriMatcher.match(uri)) {
                case USER: {
                    retCursor = mDBHelper.getReadableDatabase().query(
                            DBContract.UserEntry.TABLE_NAME,
                            projection,
                            selection,
                            selectionArgs,
                            null,
                            null,
                            sortOrder
                    );
                    break;
                }
                default:
                    throw new UnsupportedOperationException("Uri unknown: " + uri);
            }
        } catch (Exception ex) {
            Log.e("Cursor", ex.toString());
        }
        // Do not close the helper here; the returned Cursor still needs the open database.
        return retCursor;
    }

    @Override
    public Uri insert(Uri uri, ContentValues values) {
        final SQLiteDatabase db = mDBHelper.getWritableDatabase();
        final int match = sUriMatcher.match(uri);
        Uri returnUri;
        try {
            switch (match) {
                case USER: {
                    long _id = db.insert(DBContract.UserEntry.TABLE_NAME, null, values);
                    if (_id > 0)
                        returnUri = DBContract.UserEntry.buildUri(_id);
                    else
                        throw new android.database.SQLException("Error at inserting row in " + uri);
                    break;
                }
                default:
                    throw new UnsupportedOperationException("Uri unknown: " + uri);
            }
            mContext.getContentResolver().notifyChange(uri, null);
            return returnUri;
        } catch (Exception ex) {
            Log.e("Insert", ex.toString());
        } finally {
            db.close();
        }
        return null;
    }

    @Override
    public int delete(Uri uri, String selection, String[] selectionArgs) {
        final SQLiteDatabase db = mDBHelper.getWritableDatabase();
        final int match = sUriMatcher.match(uri);
        int deletedRows;
        if (null == selection) selection = "1";
        try {
            switch (match) {
                case USER:
                    deletedRows = db.delete(DBContract.UserEntry.TABLE_NAME, selection, selectionArgs);
                    break;
                default:
                    throw new UnsupportedOperationException("Uri unknown: " + uri);
            }
            if (deletedRows != 0) {
                mContext.getContentResolver().notifyChange(uri, null);
            }
            return deletedRows;
        } catch (Exception ex) {
            Log.e("Delete", ex.toString());
        } finally {
            db.close();
        }
        return 0;
    }

    @Override
    public int update(Uri uri, ContentValues values, String selection, String[] selectionArgs) {
        final SQLiteDatabase db = mDBHelper.getWritableDatabase();
        final int match = sUriMatcher.match(uri);
        int updatedRows;
        try {
            switch (match) {
                case USER:
                    updatedRows = db.update(DBContract.UserEntry.TABLE_NAME, values, selection, selectionArgs);
                    break;
                default:
                    throw new UnsupportedOperationException("Uri unknown: " + uri);
            }
            if (updatedRows != 0) {
                mContext.getContentResolver().notifyChange(uri, null);
            }
            return updatedRows;
        } catch (Exception ex) {
            Log.e("Update", ex.toString());
        } finally {
            db.close();
        }
        return -1;
    }
}

How to use:

public void InsertUser() {
    try {
        ContentValues userValues = getUserData("Jhon", "XXXXX");
        DBProvider dbProvider = new DBProvider(mContext);
        dbProvider.insert(UserEntry.CONTENT_URI, userValues);
    } catch (Exception ex) {
        Log.e("Insert", ex.toString());
    }
}

public ContentValues getUserData(String name, String pass) {
    ContentValues userValues = new ContentValues();
    userValues.put(UserEntry.COLUMN_Name, name);
    userValues.put(UserEntry.COLUMN_Password, pass);
    return userValues;
}

Section 227.7: Delete row(s) from the table

To delete all rows from the table:

// get writable database
SQLiteDatabase db = openHelper.getWritableDatabase();
db.delete(TABLE_NAME, null, null);
db.close();

To delete all rows from the table and get the count of the deleted rows in the return value:

// get writable database
SQLiteDatabase db = openHelper.getWritableDatabase();
int numRowsDeleted = db.delete(TABLE_NAME, String.valueOf(1), null);
db.close();

To delete row(s) with a WHERE condition:

// get writable database
SQLiteDatabase db = openHelper.getWritableDatabase();

String whereClause = KEY_NAME + " = ?";
String[] whereArgs = new String[]{String.valueOf(KEY_VALUE)};
// for multiple conditions, join them with AND:
AND " + KEY_NAME2 + " = ?"; //String[] whereArgs = new String[]{String.valueOf(KEY_VALUE1), String.valueOf(KEY_VALUE2)}; int numRowsDeleted = db.delete(TABLE_NAME, whereClause, whereArgs); db.close(); Section 227.8: Updating a row in a table // You need a writable database to update a row final SQLiteDatabase database = openHelper.getWritableDatabase(); // Create a ContentValues instance which contains the up to date data for each column // Unlike when inserting data you need to specify the value for the PRIMARY KEY column as well final ContentValues values = new ContentValues(); values.put(COLUMN_ID, model.getId()); values.put(COLUMN_NAME, model.getName()); values.put(COLUMN_DESCRIPTION, model.getDescription()); values.put(COLUMN_VALUE, model.getValue()); // This call performs the update // The return value tells you how many rows have been updated. final int count = database.update( TABLE_NAME, // The table name in which the data will be updated values, // The ContentValues instance with the new data COLUMN_ID + " = ?", // The selection which specifies which row is updated. ? symbols are parameters. new String[] { // The actual parameters for the selection as a String[]. String.valueOf(model.getId()) } ); Section 227.9: Performing a Transaction Transactions can be used to make multiple changes to the database atomically. Any normal transaction follows this pattern: // You need a writable database to perform transactions final SQLiteDatabase database = openHelper.getWritableDatabase(); // This call starts a transaction database.beginTransaction(); GoalKicker.com Android Notes for Professionals 1057 // Using try/finally is essential to reliably end transactions even // if exceptions or other problems occur. try { // Here you can make modifications to the database database.insert(TABLE_CARS, null, productValues); database.update(TABLE_BUILDINGS, buildingValues, COLUMN_ID + " = ?", new String[] { String.valueOf(buildingId) }); // This call marks a transaction as successful. // This causes the changes to be written to the database once the transaction ends. database.setTransactionSuccessful(); } finally { // This call ends a transaction. // If setTransactionSuccessful() has not been called then all changes // will be rolled back and the database will not be modified. database.endTransaction(); } Calling beginTransaction() inside of an active transactions has no eect. Section 227.10: Create Database from assets folder Put your dbname.sqlite or dbname.db le in assets folder of your project. public class Databasehelper extends SQLiteOpenHelper { public static final String TAG = Databasehelper.class.getSimpleName(); public static int flag; // Exact Name of you db file that you put in assets folder with extension. static String DB_NAME = "dbname.sqlite"; private final Context myContext; String outFileName = ""; private String DB_PATH; private SQLiteDatabase db; public Databasehelper(Context context) { super(context, DB_NAME, null, 1); this.myContext = context; ContextWrapper cw = new ContextWrapper(context); DB_PATH = cw.getFilesDir().getAbsolutePath() + "/databases/"; Log.e(TAG, "Databasehelper: DB_PATH " + DB_PATH); outFileName = DB_PATH + DB_NAME; File file = new File(DB_PATH); Log.e(TAG, "Databasehelper: " + file.exists()); if (!file.exists()) { file.mkdir(); } } /** * Creates a empty database on the system and rewrites it with your own database. 
     */
    public void createDataBase() throws IOException {
        boolean dbExist = checkDataBase();
        if (dbExist) {
            // do nothing - the database already exists
        } else {
            // By calling this method an empty database will be created in the default system path
            // of your application, so we will be able to overwrite that database with our database.
            this.getReadableDatabase();
            try {
                copyDataBase();
            } catch (IOException e) {
                throw new Error("Error copying database");
            }
        }
    }

    /**
     * Check if the database already exists, to avoid re-copying the file each time you open the application.
     *
     * @return true if it exists, false if it doesn't
     */
    private boolean checkDataBase() {
        SQLiteDatabase checkDB = null;
        try {
            checkDB = SQLiteDatabase.openDatabase(outFileName, null, SQLiteDatabase.OPEN_READWRITE);
        } catch (SQLiteException e) {
            try {
                copyDataBase();
            } catch (IOException e1) {
                e1.printStackTrace();
            }
        }
        if (checkDB != null) {
            checkDB.close();
        }
        return checkDB != null;
    }

    /**
     * Copies your database from your local assets folder to the just created empty database in the
     * system folder, from where it can be accessed and handled.
     * This is done by transferring a byte stream.
     */
    private void copyDataBase() throws IOException {
        Log.i("Database", "New database is being copied to device!");
        byte[] buffer = new byte[1024];
        OutputStream myOutput = null;
        int length;
        // Open your local db as the input stream
        InputStream myInput = null;
        try {
            myInput = myContext.getAssets().open(DB_NAME);
            // transfer bytes from the input file to the output file
            myOutput = new FileOutputStream(DB_PATH + DB_NAME);
            while ((length = myInput.read(buffer)) > 0) {
                myOutput.write(buffer, 0, length);
            }
            myOutput.flush();
            myOutput.close();
            myInput.close();
            Log.i("Database", "New database has been copied to device!");
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public void openDataBase() throws SQLException {
        // Open the database
        String myPath = DB_PATH + DB_NAME;
        db = SQLiteDatabase.openDatabase(myPath, null, SQLiteDatabase.OPEN_READWRITE);
        Log.e(TAG, "openDataBase: Open " + db.isOpen());
    }

    @Override
    public synchronized void close() {
        if (db != null)
            db.close();
        super.close();
    }

    public void onCreate(SQLiteDatabase arg0) {
    }

    @Override
    public void onUpgrade(SQLiteDatabase arg0, int arg1, int arg2) {
    }
}

Here is how you can access the database object in your activity.

// Create a Databasehelper class object in your activity.
private Databasehelper db;

Then, in the onCreate method, initialize it and call the createDataBase() method as shown below:

db = new Databasehelper(MainActivity.this);
try {
    db.createDataBase();
} catch (Exception e) {
    e.printStackTrace();
}

Perform all of your insert, update, delete and select operations as shown below:
String query = "select Max(Id) as Id from " + TABLE_NAME;
db.openDataBase();
int count = db.getId(query);
db.close();

Section 227.11: Store image into SQLite

Setting up the database:

public class DatabaseHelper extends SQLiteOpenHelper {

    // Database Version
    private static final int DATABASE_VERSION = 1;

    // Database Name
    private static final String DATABASE_NAME = "database_name";

    // Table Names
    private static final String DB_TABLE = "table_image";

    // column names
    private static final String KEY_NAME = "image_name";
    private static final String KEY_IMAGE = "image_data";

    // Table create statement
    private static final String CREATE_TABLE_IMAGE = "CREATE TABLE " + DB_TABLE + "(" +
            KEY_NAME + " TEXT," +
            KEY_IMAGE + " BLOB);";

    public DatabaseHelper(Context context) {
        super(context, DATABASE_NAME, null, DATABASE_VERSION);
    }

    @Override
    public void onCreate(SQLiteDatabase db) {
        // creating table
        db.execSQL(CREATE_TABLE_IMAGE);
    }

    @Override
    public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
        // on upgrade drop older tables
        db.execSQL("DROP TABLE IF EXISTS " + DB_TABLE);
        // create new table
        onCreate(db);
    }
}

Insert in the database:

public void addEntry(String name, byte[] image) throws SQLiteException {
    SQLiteDatabase database = this.getWritableDatabase();
    ContentValues cv = new ContentValues();
    cv.put(KEY_NAME, name);
    cv.put(KEY_IMAGE, image);
    database.insert(DB_TABLE, null, cv);
}

Retrieving data:

byte[] image = cursor.getBlob(1);

Note:

1. Before inserting into the database, you need to convert your Bitmap image into a byte array first, then insert it using a database query.
2. When retrieving from the database, you will have a byte array of image data; you then need to convert the byte array back to the original image, using BitmapFactory to decode it.

Below is a utility class which I hope could help you:

public class DbBitmapUtility {

    // convert from bitmap to byte array
    public static byte[] getBytes(Bitmap bitmap) {
        ByteArrayOutputStream stream = new ByteArrayOutputStream();
        bitmap.compress(CompressFormat.PNG, 0, stream);
        return stream.toByteArray();
    }

    // convert from byte array to bitmap
    public static Bitmap getImage(byte[] image) {
        return BitmapFactory.decodeByteArray(image, 0, image.length);
    }
}

Section 227.12: Exporting and importing a database

You might want to export and import your database, for backups for example. Don't forget about the permissions: reading and writing external storage requires the READ_EXTERNAL_STORAGE and WRITE_EXTERNAL_STORAGE permissions in the manifest.
public void exportDatabase() {
    try {
        File sd = Environment.getExternalStorageDirectory();
        File data = Environment.getDataDirectory();

        String currentDBPath = "//data//MY.PACKAGE.NAME//databases//MY_DATABASE_NAME";
        String backupDBPath = "MY_DATABASE_FILE.db";
        File currentDB = new File(data, currentDBPath);
        File backupDB = new File(sd, backupDBPath);

        FileChannel src = new FileInputStream(currentDB).getChannel();
        FileChannel dst = new FileOutputStream(backupDB).getChannel();
        dst.transferFrom(src, 0, src.size());
        src.close();
        dst.close();
        Toast.makeText(c, c.getResources().getString(R.string.exporterenToast), Toast.LENGTH_SHORT).show();
    } catch (Exception e) {
        Toast.makeText(c, c.getResources().getString(R.string.portError), Toast.LENGTH_SHORT).show();
        Log.d("Main", e.toString());
    }
}

public void importDatabase() {
    try {
        File sd = Environment.getExternalStorageDirectory();
        File data = Environment.getDataDirectory();

        String currentDBPath = "//data//" + "MY.PACKAGE.NAME" + "//databases//" + "MY_DATABASE_NAME";
        String backupDBPath = "MY_DATABASE_FILE.db";
        File backupDB = new File(data, currentDBPath);
        File currentDB = new File(sd, backupDBPath);

        FileChannel src = new FileInputStream(currentDB).getChannel();
        FileChannel dst = new FileOutputStream(backupDB).getChannel();
        dst.transferFrom(src, 0, src.size());
        src.close();
        dst.close();
        Toast.makeText(c, c.getResources().getString(R.string.importerenToast), Toast.LENGTH_LONG).show();
    } catch (Exception e) {
        Toast.makeText(c, c.getResources().getString(R.string.portError), Toast.LENGTH_SHORT).show();
    }
}

Chapter 228: Accessing SQLite databases using the ContentValues class

Section 228.1: Inserting and updating rows in a SQLite database

First, you need to open your SQLite database, which can be done as follows:

SQLiteDatabase myDataBase;
String mPath = dbhelper.DATABASE_PATH + dbhelper.DATABASE_NAME;
myDataBase = SQLiteDatabase.openDatabase(mPath, null, SQLiteDatabase.OPEN_READWRITE);

After opening the database, you can easily insert or update rows by using the ContentValues class. The following examples assume that a first name is given by str_edtfname and a last name by str_edtlname. You also need to replace table_name by the name of the table that you want to modify.

Inserting data:

ContentValues values = new ContentValues();
values.put("First_Name", str_edtfname);
values.put("Last_Name", str_edtlname);
myDataBase.insert("table_name", null, values);

Updating data:

ContentValues values = new ContentValues();
values.put("First_Name", str_edtfname);
values.put("Last_Name", str_edtlname);
myDataBase.update("table_name", values, "id" + " = ?", new String[] {id});

Chapter 229: Firebase

Firebase is a mobile and web application platform with tools and infrastructure designed to help developers build high-quality apps. Features: Firebase Cloud Messaging, Firebase Auth, Realtime Database, Firebase Storage, Firebase Hosting, Firebase Test Lab for Android, Firebase Crash Reporting.

Section 229.1: Add Firebase to Your Android Project

Here are simplified steps (based on the official documentation) required to create a Firebase project and connect it with an Android app.

Add Firebase to your app

1. Create a Firebase project in the Firebase console and click Create New Project.
2. Click Add Firebase to your Android app and follow the setup steps.
3. When prompted, enter your app's package name.
It's important to enter the fully qualified package name your app is using; this can only be set when you add an app to your Firebase project.
4. At the end, you'll download a google-services.json file. You can download this file again at any time.
5. If you haven't done so already, copy the google-services.json file into your project's module folder, typically app/.

The next step is to add the SDK to integrate the Firebase libraries in the project.

Add the SDK

To integrate the Firebase libraries into one of your own projects, you need to perform a few basic tasks to prepare your Android Studio project. You may have already done this as part of adding Firebase to your app.

1. Add rules to your root-level build.gradle file, to include the google-services plugin:

buildscript {
    // ...
    dependencies {
        // ...
        classpath 'com.google.gms:google-services:3.1.0'
    }
}

Then, in your module Gradle file (usually the app/build.gradle), add the apply plugin line at the bottom of the file to enable the Gradle plugin:

apply plugin: 'com.android.application'

android {
    // ...
}

dependencies {
    // ...
    compile 'com.google.firebase:firebase-core:11.0.4'
}

// ADD THIS AT THE BOTTOM
apply plugin: 'com.google.gms.google-services'

The final step is to add the dependencies for the Firebase SDK, using one or more libraries available for the different Firebase features:

Gradle Dependency Line                                    | Service
com.google.firebase:firebase-core:11.0.4                  | Analytics
com.google.firebase:firebase-database:11.0.4              | Realtime Database
com.google.firebase:firebase-storage:11.0.4               | Storage
com.google.firebase:firebase-crash:11.0.4                 | Crash Reporting
com.google.firebase:firebase-auth:11.0.4                  | Authentication
com.google.firebase:firebase-messaging:11.0.4             | Cloud Messaging / Notifications
com.google.firebase:firebase-config:11.0.4                | Remote Config
com.google.firebase:firebase-invites:11.0.4               | Invites / Dynamic Links
com.google.firebase:firebase-ads:11.0.4                   | AdMob
com.google.android.gms:play-services-appindexing:11.0.4   | App Indexing

Section 229.2: Updating a Firebase user's email

public class ChangeEmailActivity extends BaseAppCompatActivity implements ReAuthenticateDialogFragment.OnReauthenticateSuccessListener {

    @BindView(R.id.et_change_email)
    EditText mEditText;
    private FirebaseUser mFirebaseUser;

    @OnClick(R.id.btn_change_email)
    void onChangeEmailClick() {
        FormValidationUtils.clearErrors(mEditText);
        if (FormValidationUtils.isBlank(mEditText)) {
            FormValidationUtils.setError(null, mEditText, "Please enter email");
            return;
        }
        if (!FormValidationUtils.isEmailValid(mEditText)) {
            FormValidationUtils.setError(null, mEditText, "Please enter valid email");
            return;
        }
        changeEmail(mEditText.getText().toString());
    }

    @Override
    protected void onCreate(@Nullable Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        getSupportActionBar().setDisplayHomeAsUpEnabled(true);
        mFirebaseUser = mFirebaseAuth.getCurrentUser();
    }

    private void changeEmail(String email) {
        DialogUtils.showProgressDialog(this, "Changing Email", "Please wait...", false);
        mFirebaseUser.updateEmail(email)
                .addOnCompleteListener(new OnCompleteListener<Void>() {
                    @Override
                    public void onComplete(@NonNull Task<Void> task) {
                        DialogUtils.dismissProgressDialog();
                        if (task.isSuccessful()) {
                            showToast("Email updated successfully.");
                            return;
                        }
                        if (task.getException() instanceof FirebaseAuthRecentLoginRequiredException) {
                            FragmentManager fm = getSupportFragmentManager();
                            ReAuthenticateDialogFragment reAuthenticateDialogFragment = new ReAuthenticateDialogFragment();
                            reAuthenticateDialogFragment.show(fm, reAuthenticateDialogFragment.getClass().getSimpleName());
                        }
                    }
                });
    }

    @Override
    protected int getLayoutResourceId() {
        return R.layout.activity_change_email;
    }

    @Override
    public void onReauthenticateSuccess() {
        changeEmail(mEditText.getText().toString());
    }
}

Section 229.3: Create a Firebase user

public class SignUpActivity extends BaseAppCompatActivity {

    @BindView(R.id.tIETSignUpEmail)
    EditText mEditEmail;
    @BindView(R.id.tIETSignUpPassword)
    EditText mEditPassword;

    @Override
    protected void onCreate(@Nullable Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        getSupportActionBar().setDisplayHomeAsUpEnabled(true);
    }

    @OnClick(R.id.btnSignUpSignUp)
    void signUp() {
        FormValidationUtils.clearErrors(mEditEmail, mEditPassword);
        if (FormValidationUtils.isBlank(mEditEmail)) {
            mEditEmail.setError("Please enter email");
            return;
        }
        if (!FormValidationUtils.isEmailValid(mEditEmail)) {
            mEditEmail.setError("Please enter valid email");
            return;
        }
        if (TextUtils.isEmpty(mEditPassword.getText())) {
            mEditPassword.setError("Please enter password");
            return;
        }
        createUserWithEmailAndPassword(mEditEmail.getText().toString(), mEditPassword.getText().toString());
    }

    private void createUserWithEmailAndPassword(String email, String password) {
        DialogUtils.showProgressDialog(this, "", getString(R.string.str_creating_account), false);
        mFirebaseAuth
                .createUserWithEmailAndPassword(email, password)
                .addOnCompleteListener(this, new OnCompleteListener<AuthResult>() {
                    @Override
                    public void onComplete(@NonNull Task<AuthResult> task) {
                        if (!task.isSuccessful()) {
                            Toast.makeText(SignUpActivity.this, task.getException().getMessage(), Toast.LENGTH_SHORT).show();
                            DialogUtils.dismissProgressDialog();
                        } else {
                            Toast.makeText(SignUpActivity.this, R.string.str_registration_successful, Toast.LENGTH_SHORT).show();
                            DialogUtils.dismissProgressDialog();
                            startActivity(new Intent(SignUpActivity.this, HomeActivity.class));
                        }
                    }
                });
    }

    @Override
    protected int getLayoutResourceId() {
        return R.layout.activity_sign_up;
    }
}

Section 229.4: Change Password

public class ChangePasswordActivity extends BaseAppCompatActivity implements ReAuthenticateDialogFragment.OnReauthenticateSuccessListener {

    @BindView(R.id.et_change_password)
    EditText mEditText;
    private FirebaseUser mFirebaseUser;

    @OnClick(R.id.btn_change_password)
    void onChangePasswordClick() {
        FormValidationUtils.clearErrors(mEditText);
        if (FormValidationUtils.isBlank(mEditText)) {
            FormValidationUtils.setError(null, mEditText, "Please enter password");
            return;
        }
        changePassword(mEditText.getText().toString());
    }

    private void changePassword(String password) {
        DialogUtils.showProgressDialog(this, "Changing Password", "Please wait...", false);
        mFirebaseUser.updatePassword(password)
                .addOnCompleteListener(new OnCompleteListener<Void>() {
                    @Override
                    public void onComplete(@NonNull Task<Void> task) {
                        DialogUtils.dismissProgressDialog();
                        if (task.isSuccessful()) {
                            showToast("Password updated successfully.");
                            return;
                        }
                        if (task.getException() instanceof FirebaseAuthRecentLoginRequiredException) {
                            FragmentManager fm = getSupportFragmentManager();
                            ReAuthenticateDialogFragment reAuthenticateDialogFragment = new ReAuthenticateDialogFragment();
                            reAuthenticateDialogFragment.show(fm, reAuthenticateDialogFragment.getClass().getSimpleName());
                        }
                    }
                });
    }

    @Override
    protected void onCreate(@Nullable Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        getSupportActionBar().setDisplayHomeAsUpEnabled(true);
        mFirebaseUser = mFirebaseAuth.getCurrentUser();
    }

    @Override
    protected int getLayoutResourceId() {
        return R.layout.activity_change_password;
    }

    @Override
    public void onReauthenticateSuccess() {
        changePassword(mEditText.getText().toString());
    }
}

Section 229.5: Firebase Cloud Messaging

First of all, you need to set up your project by adding Firebase to your Android project, following the steps described in this topic.

Set up Firebase and the FCM SDK

Add the FCM dependency to your app-level build.gradle file:

dependencies {
    compile 'com.google.firebase:firebase-messaging:11.0.4'
}

And at the very bottom (this is important) add:

// ADD THIS AT THE BOTTOM
apply plugin: 'com.google.gms.google-services'

Edit your app manifest

Add the following to your app's manifest:

- A service that extends FirebaseMessagingService. This is required if you want to do any message handling beyond receiving notifications on apps in the background.
- A service that extends FirebaseInstanceIdService to handle the creation, rotation, and updating of registration tokens.

For example:

<service android:name=".MyInstanceIdListenerService">
    <intent-filter>
        <action android:name="com.google.firebase.INSTANCE_ID_EVENT"/>
    </intent-filter>
</service>
<service android:name=".MyFcmListenerService">
    <intent-filter>
        <action android:name="com.google.firebase.MESSAGING_EVENT" />
    </intent-filter>
</service>

Here are simple implementations of the two services.

To retrieve the current registration token, extend the FirebaseInstanceIdService class and override the onTokenRefresh() method:

public class MyInstanceIdListenerService extends FirebaseInstanceIdService {

    // Called if the InstanceID token is updated. Occurs if the security of the previous token had been
    // compromised. This call is initiated by the InstanceID provider.
    @Override
    public void onTokenRefresh() {
        // Get updated InstanceID token.
        String refreshedToken = FirebaseInstanceId.getInstance().getToken();
        // Send this token to your server or store it locally
    }
}

To receive messages, use a service that extends FirebaseMessagingService and override the onMessageReceived method:

public class MyFcmListenerService extends FirebaseMessagingService {

    private static final String TAG = "MyFcmListenerService";

    /**
     * Called when a message is received.
     *
     * @param remoteMessage Object representing the message received from Firebase Cloud Messaging.
     */
    @Override
    public void onMessageReceived(RemoteMessage remoteMessage) {
        String from = remoteMessage.getFrom();

        // Check if the message contains a data payload.
        if (remoteMessage.getData().size() > 0) {
            Log.d(TAG, "Message data payload: " + remoteMessage.getData());
            Map<String, String> data = remoteMessage.getData();
        }

        // Check if the message contains a notification payload.
        if (remoteMessage.getNotification() != null) {
            Log.d(TAG, "Message Notification Body: " + remoteMessage.getNotification().getBody());
        }

        // do whatever you want with this, post your own notification, or update local state
    }
}

In Firebase, you can group users by their behavior, like "AppVersion", free users, paying users, or any specific rules, and then send a notification to a specific group by using the Topics feature in Firebase. To register a user in a topic, use:

FirebaseMessaging.getInstance().subscribeToTopic("Free");

Then, in the Firebase console, send the notification by topic name. More info in the dedicated topic Firebase Cloud Messaging.
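The "post your own notification" step in onMessageReceived() above is left open. Below is a minimal sketch of what it could look like; it assumes the support-library NotificationCompat classes, and the channel id "fcm_default", the method name and the launcher icon are placeholders, not part of the original example:

// Hypothetical helper inside MyFcmListenerService: turn an FCM notification
// payload into a local notification. "fcm_default" and the icon are placeholders.
private void postNotification(RemoteMessage remoteMessage) {
    NotificationManager manager = (NotificationManager) getSystemService(Context.NOTIFICATION_SERVICE);
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
        // Android O and above require a notification channel before notifying
        manager.createNotificationChannel(
                new NotificationChannel("fcm_default", "FCM", NotificationManager.IMPORTANCE_DEFAULT));
    }
    RemoteMessage.Notification payload = remoteMessage.getNotification();
    Notification notification = new NotificationCompat.Builder(this, "fcm_default")
            .setSmallIcon(R.mipmap.ic_launcher)
            .setContentTitle(payload != null ? payload.getTitle() : "Message")
            .setContentText(payload != null ? payload.getBody() : "")
            .setAutoCancel(true)
            .build();
    manager.notify(0, notification);
}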
Section 229.6: Firebase Storage Operations

With this example, you will be able to perform the following operations:

1. Connect to Firebase Storage
2. Create a directory named images
3. Upload a file to the images directory
4. Download a file from the images directory
5. Delete a file from the images directory

public class MainActivity extends AppCompatActivity {

    private static final int REQUEST_CODE_PICK_IMAGE = 1;
    private static final int PERMISSION_READ_WRITE_EXTERNAL_STORAGE = 2;
    private FirebaseStorage mFirebaseStorage;
    private StorageReference mStorageReference;
    private StorageReference mStorageReferenceImages;
    private Uri mUri;
    private ImageView mImageView;
    private ProgressDialog mProgressDialog;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        Toolbar toolbar = (Toolbar) findViewById(R.id.toolbar);
        mImageView = (ImageView) findViewById(R.id.imageView);
        setSupportActionBar(toolbar);

        // Create an instance of Firebase Storage
        mFirebaseStorage = FirebaseStorage.getInstance();
    }

    private void pickImage() {
        Intent intent = new Intent(Intent.ACTION_PICK, android.provider.MediaStore.Images.Media.EXTERNAL_CONTENT_URI);
        intent.addFlags(Intent.FLAG_GRANT_READ_URI_PERMISSION);
        intent.addFlags(Intent.FLAG_GRANT_WRITE_URI_PERMISSION);
        startActivityForResult(intent, REQUEST_CODE_PICK_IMAGE);
    }

    @Override
    public void onActivityResult(int requestCode, int resultCode, Intent data) {
        if (resultCode == RESULT_OK) {
            if (requestCode == REQUEST_CODE_PICK_IMAGE) {
                String filePath = FileUtil.getPath(this, data.getData());
                mUri = Uri.fromFile(new File(filePath));
                uploadFile(mUri);
            }
        }
    }

    @Override
    public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions, @NonNull int[] grantResults) {
        super.onRequestPermissionsResult(requestCode, permissions, grantResults);
        if (requestCode == PERMISSION_READ_WRITE_EXTERNAL_STORAGE) {
            if (grantResults[0] == PackageManager.PERMISSION_GRANTED) {
                pickImage();
            }
        }
    }

    private void showProgressDialog(String title, String message) {
        if (mProgressDialog != null && mProgressDialog.isShowing())
            mProgressDialog.setMessage(message);
        else
            mProgressDialog = ProgressDialog.show(this, title, message, true, false);
    }

    private void hideProgressDialog() {
        if (mProgressDialog != null && mProgressDialog.isShowing()) {
            mProgressDialog.dismiss();
        }
    }

    private void showToast(String message) {
        Toast.makeText(this, message, Toast.LENGTH_SHORT).show();
    }

    public void showHorizontalProgressDialog(String title, String body) {
        if (mProgressDialog != null && mProgressDialog.isShowing()) {
            mProgressDialog.setTitle(title);
            mProgressDialog.setMessage(body);
        } else {
            mProgressDialog = new ProgressDialog(this);
            mProgressDialog.setTitle(title);
            mProgressDialog.setMessage(body);
            mProgressDialog.setIndeterminate(false);
            mProgressDialog.setProgressStyle(ProgressDialog.STYLE_HORIZONTAL);
            mProgressDialog.setProgress(0);
            mProgressDialog.setMax(100);
            mProgressDialog.setCancelable(false);
            mProgressDialog.show();
        }
    }

    public void updateProgress(int progress) {
        if (mProgressDialog != null && mProgressDialog.isShowing()) {
            mProgressDialog.setProgress(progress);
        }
    }

    /**
     * Step 1: Create a Storage reference
     *
     * @param view
     */
    public void onCreateReferenceClick(View view) {
        mStorageReference = mFirebaseStorage.getReferenceFromUrl("gs://**something**.appspot.com");
        showToast("Reference Created Successfully.");
        findViewById(R.id.button_step_2).setEnabled(true);
    }

    /**
     * Step 2: Create a directory named "images"
     *
     * @param view
     */
    public void onCreateDirectoryClick(View view) {
        mStorageReferenceImages = mStorageReference.child("images");
        showToast("Directory 'images' created Successfully.");
        findViewById(R.id.button_step_3).setEnabled(true);
    }

    /**
     * Step 3: Upload an image file and display it on the ImageView
     *
     * @param view
     */
    public void onUploadFileClick(View view) {
        if (ContextCompat.checkSelfPermission(MainActivity.this, Manifest.permission.READ_EXTERNAL_STORAGE) != PackageManager.PERMISSION_GRANTED
                || ActivityCompat.checkSelfPermission(MainActivity.this, Manifest.permission.WRITE_EXTERNAL_STORAGE) != PackageManager.PERMISSION_GRANTED)
            ActivityCompat.requestPermissions(MainActivity.this,
                    new String[]{Manifest.permission.READ_EXTERNAL_STORAGE, Manifest.permission.WRITE_EXTERNAL_STORAGE},
                    PERMISSION_READ_WRITE_EXTERNAL_STORAGE);
        else {
            pickImage();
        }
    }

    /**
     * Step 4: Download an image file and display it on the ImageView
     *
     * @param view
     */
    public void onDownloadFileClick(View view) {
        downloadFile(mUri);
    }

    /**
     * Step 5: Delete an image file and remove the image from the ImageView
     *
     * @param view
     */
    public void onDeleteFileClick(View view) {
        deleteFile(mUri);
    }

    private void showAlertDialog(Context ctx, String title, String body, DialogInterface.OnClickListener okListener) {
        if (okListener == null) {
            okListener = new DialogInterface.OnClickListener() {
                public void onClick(DialogInterface dialog, int which) {
                    dialog.cancel();
                }
            };
        }
        AlertDialog.Builder builder = new AlertDialog.Builder(ctx).setMessage(body).setPositiveButton("OK", okListener).setCancelable(false);
        if (!TextUtils.isEmpty(title)) {
            builder.setTitle(title);
        }
        builder.show();
    }

    private void uploadFile(Uri uri) {
        mImageView.setImageResource(R.drawable.placeholder_image);
        StorageReference uploadStorageReference = mStorageReferenceImages.child(uri.getLastPathSegment());
        final UploadTask uploadTask = uploadStorageReference.putFile(uri);
        showHorizontalProgressDialog("Uploading", "Please wait...");
        uploadTask
                .addOnSuccessListener(new OnSuccessListener<UploadTask.TaskSnapshot>() {
                    @Override
                    public void onSuccess(UploadTask.TaskSnapshot taskSnapshot) {
                        hideProgressDialog();
                        Uri downloadUrl = taskSnapshot.getDownloadUrl();
                        Log.d("MainActivity", downloadUrl.toString());
                        showAlertDialog(MainActivity.this, "Upload Complete", downloadUrl.toString(), new DialogInterface.OnClickListener() {
                            @Override
                            public void onClick(DialogInterface dialogInterface, int i) {
                                findViewById(R.id.button_step_3).setEnabled(false);
                                findViewById(R.id.button_step_4).setEnabled(true);
                            }
                        });
                        Glide.with(MainActivity.this)
                                .load(downloadUrl)
                                .into(mImageView);
                    }
                })
                .addOnFailureListener(new OnFailureListener() {
                    @Override
                    public void onFailure(@NonNull Exception exception) {
                        exception.printStackTrace();
                        // Handle unsuccessful uploads
                        hideProgressDialog();
                    }
                })
                .addOnProgressListener(MainActivity.this, new OnProgressListener<UploadTask.TaskSnapshot>() {
                    @Override
                    public void onProgress(UploadTask.TaskSnapshot taskSnapshot) {
                        int progress = (int) (100 * (float) taskSnapshot.getBytesTransferred() / taskSnapshot.getTotalByteCount());
                        Log.i("Progress", progress + "");
                        updateProgress(progress);
                    }
                });
    }

    private void downloadFile(Uri uri) {
        mImageView.setImageResource(R.drawable.placeholder_image);
        final StorageReference storageReferenceImage = mStorageReferenceImages.child(uri.getLastPathSegment());
        File mediaStorageDir = new File(Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_PICTURES), "Firebase Storage");
        if (!mediaStorageDir.exists()) {
            if (!mediaStorageDir.mkdirs()) {
                Log.d("MainActivity", "failed to create Firebase Storage directory");
            }
        }
        final File localFile = new File(mediaStorageDir, uri.getLastPathSegment());
        try {
            localFile.createNewFile();
        } catch (IOException e) {
            e.printStackTrace();
        }
        showHorizontalProgressDialog("Downloading", "Please wait...");
        storageReferenceImage.getFile(localFile).addOnSuccessListener(new OnSuccessListener<FileDownloadTask.TaskSnapshot>() {
            @Override
            public void onSuccess(FileDownloadTask.TaskSnapshot taskSnapshot) {
                hideProgressDialog();
                showAlertDialog(MainActivity.this, "Download Complete", localFile.getAbsolutePath(), new DialogInterface.OnClickListener() {
                    @Override
                    public void onClick(DialogInterface dialogInterface, int i) {
                        findViewById(R.id.button_step_4).setEnabled(false);
                        findViewById(R.id.button_step_5).setEnabled(true);
                    }
                });
                Glide.with(MainActivity.this)
                        .load(localFile)
                        .into(mImageView);
            }
        }).addOnFailureListener(new OnFailureListener() {
            @Override
            public void onFailure(@NonNull Exception exception) {
                // Handle any errors
                hideProgressDialog();
                exception.printStackTrace();
            }
        }).addOnProgressListener(new OnProgressListener<FileDownloadTask.TaskSnapshot>() {
            @Override
            public void onProgress(FileDownloadTask.TaskSnapshot taskSnapshot) {
                int progress = (int) (100 * (float) taskSnapshot.getBytesTransferred() / taskSnapshot.getTotalByteCount());
                Log.i("Progress", progress + "");
                updateProgress(progress);
            }
        });
    }

    private void deleteFile(Uri uri) {
        showProgressDialog("Deleting", "Please wait...");
        StorageReference storageReferenceImage = mStorageReferenceImages.child(uri.getLastPathSegment());
        storageReferenceImage.delete().addOnSuccessListener(new OnSuccessListener<Void>() {
            @Override
            public void onSuccess(Void aVoid) {
                hideProgressDialog();
                showAlertDialog(MainActivity.this, "Success", "File deleted successfully.", new DialogInterface.OnClickListener() {
                    @Override
                    public void onClick(DialogInterface dialogInterface, int i) {
                        mImageView.setImageResource(R.drawable.placeholder_image);
                        findViewById(R.id.button_step_3).setEnabled(true);
                        findViewById(R.id.button_step_4).setEnabled(false);
                        findViewById(R.id.button_step_5).setEnabled(false);
                    }
                });
                File mediaStorageDir = new File(Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_PICTURES), "Firebase Storage");
                if (!mediaStorageDir.exists()) {
                    if (!mediaStorageDir.mkdirs()) {
                        Log.d("MainActivity", "failed to create Firebase Storage directory");
                    }
                }
                deleteFiles(mediaStorageDir);
            }
        }).addOnFailureListener(new OnFailureListener() {
            @Override
            public void onFailure(@NonNull Exception exception) {
                hideProgressDialog();
                exception.printStackTrace();
            }
        });
    }

    private void deleteFiles(File directory) {
        if (directory.isDirectory())
            for (File child : directory.listFiles())
                child.delete();
    }
}

By default, Firebase Storage rules apply an authentication restriction: only an authenticated user can perform operations on Firebase Storage. I have disabled the authentication part in this demo by updating the Storage rules.
Previously, the rules looked like:

service firebase.storage {
    match /b/**something**.appspot.com/o {
        match /{allPaths=**} {
            allow read, write: if request.auth != null;
        }
    }
}

But I changed them to skip the authentication:

service firebase.storage {
    match /b/**something**.appspot.com/o {
        match /{allPaths=**} {
            allow read, write;
        }
    }
}

Section 229.7: Firebase Realtime Database: how to set/get data

Note: Let's set up some anonymous authentication for the example:

{
    "rules": {
        ".read": "auth != null",
        ".write": "auth != null"
    }
}

Once it is done, create a child by editing your database address. For example, change:

https://your-project.firebaseio.com/ to https://your-project.firebaseio.com/chat

We will put data at this location from our Android device. You don't have to create the database structure (tabs, fields, etc.); it will be automatically created when you send a Java object to Firebase!

Create a Java object that contains all the attributes you want to send to the database:

public class ChatMessage {
    private String username;
    private String message;

    public ChatMessage(String username, String message) {
        this.username = username;
        this.message = message;
    }

    public ChatMessage() {} // you MUST have an empty constructor

    public String getUsername() {
        return username;
    }

    public String getMessage() {
        return message;
    }
}

Then in your activity:

if (FirebaseAuth.getInstance().getCurrentUser() == null) {
    FirebaseAuth.getInstance().signInAnonymously().addOnCompleteListener(new OnCompleteListener<AuthResult>() {
        @Override
        public void onComplete(@NonNull Task<AuthResult> task) {
            if (task.isComplete() && task.isSuccessful()) {
                FirebaseDatabase database = FirebaseDatabase.getInstance();
                DatabaseReference reference = database.getReference("chat");
                // reference is 'chat' because we created the database at /chat
            }
        }
    });
}

To send a value:

ChatMessage msg = new ChatMessage("user1", "Hello World!");
reference.push().setValue(msg);

To receive changes that occur in the database:

reference.addChildEventListener(new ChildEventListener() {
    @Override
    public void onChildAdded(DataSnapshot dataSnapshot, String s) {
        ChatMessage msg = dataSnapshot.getValue(ChatMessage.class);
        Log.d(TAG, msg.getUsername() + " " + msg.getMessage());
    }

    public void onChildChanged(DataSnapshot dataSnapshot, String s) {}
    public void onChildRemoved(DataSnapshot dataSnapshot) {}
    public void onChildMoved(DataSnapshot dataSnapshot, String s) {}
    public void onCancelled(DatabaseError databaseError) {}
});

Section 229.8: Demo of FCM based notifications

This example shows how to use the Firebase Cloud Messaging (FCM) platform. FCM is a successor of Google Cloud Messaging (GCM). It does not require C2D_MESSAGE permissions from the app users.

Steps to integrate FCM are as follows.

1. Create a sample hello world project in Android Studio.

2. The next step is to set up a Firebase project. Visit https://console.firebase.google.com and create a project with an identical name, so that you can track it easily.

3. Now it is time to add Firebase to the sample Android project you have just created. You will need the package name of your project and, optionally, the debug signing certificate SHA-1.

a. Package name: it can be found in the Android manifest XML file.
Debug signing SHA-1 certificate - it can be found by running the following command in the terminal:

keytool -list -v -keystore ~/.android/debug.keystore -alias androiddebugkey -storepass android -keypass android

Enter this information in the Firebase console and add the app to the Firebase project. Once you click the "add app" button, your browser will automatically download a JSON file named "google-services.json".

4. Now copy the google-services.json file you have just downloaded into your Android app module root directory.

5. Follow the instructions given in the Firebase console as you proceed ahead.

a. Add the following line to your project-level build.gradle:

dependencies {
    classpath 'com.google.gms:google-services:3.1.0'
    ...
}

b. Add the following lines at the end of your app-level build.gradle:

// following are the dependencies to be added
dependencies {
    ...
    compile 'com.google.firebase:firebase-messaging:11.0.4'
    compile 'com.android.support:multidex:1.0.1'
}

// this line goes to the end of the file
apply plugin: 'com.google.gms.google-services'

c. Android Studio will ask you to sync the project. Click on "Sync now".

6. The next task is to add two services.

a. One extending FirebaseMessagingService with the following intent filter:

<intent-filter>
    <action android:name="com.google.firebase.MESSAGING_EVENT"/>
</intent-filter>

b. One extending FirebaseInstanceIDService:

<intent-filter>
    <action android:name="com.google.firebase.INSTANCE_ID_EVENT"/>
</intent-filter>

7. The FirebaseMessagingService code should look like this:

import android.app.Service;
import android.content.Intent;
import android.os.IBinder;
import com.google.firebase.messaging.FirebaseMessagingService;

public class MyFirebaseMessagingService extends FirebaseMessagingService {
    public MyFirebaseMessagingService() {
    }
}

8. The FirebaseInstanceIdService should look like this:

import android.app.Service;
import android.content.Intent;
import android.os.IBinder;
import com.google.firebase.iid.FirebaseInstanceIdService;

public class MyFirebaseInstanceIDService extends FirebaseInstanceIdService {
    public MyFirebaseInstanceIDService() {
    }
}

9. Now it is time to capture the device registration token. Add the following lines of code to MainActivity's onCreate method:

String token = FirebaseInstanceId.getInstance().getToken();
Log.d("FCMAPP", "Token is " + token);

10. Once we have the access token, we can use the Firebase console to send out the notification. Run the app on your Android handset. Click on Notification in the Firebase console and the UI will help you to send out your first message. Firebase offers functionality to send messages to a single device (by using the device token id we captured), to all the users of our app, or to a specific group of users. Once you send your first message, the notification should show up on your mobile screen.
Section 229.9: Sign In Firebase user with email and password

public class LoginActivity extends BaseAppCompatActivity {
    // mFirebaseAuth is assumed to be initialized in BaseAppCompatActivity
    @BindView(R.id.tIETLoginEmail)
    EditText mEditEmail;
    @BindView(R.id.tIETLoginPassword)
    EditText mEditPassword;

    @Override
    protected void onResume() {
        super.onResume();
        FirebaseUser firebaseUser = mFirebaseAuth.getCurrentUser();
        if (firebaseUser != null)
            startActivity(new Intent(this, HomeActivity.class));
    }

    @Override
    protected int getLayoutResourceId() {
        return R.layout.activity_login;
    }

    @OnClick(R.id.btnLoginLogin)
    void onSignInClick() {
        FormValidationUtils.clearErrors(mEditEmail, mEditPassword);

        if (FormValidationUtils.isBlank(mEditEmail)) {
            FormValidationUtils.setError(null, mEditEmail, "Please enter email");
            return;
        }

        if (!FormValidationUtils.isEmailValid(mEditEmail)) {
            FormValidationUtils.setError(null, mEditEmail, "Please enter valid email");
            return;
        }

        if (TextUtils.isEmpty(mEditPassword.getText())) {
            FormValidationUtils.setError(null, mEditPassword, "Please enter password");
            return;
        }

        signInWithEmailAndPassword(mEditEmail.getText().toString(), mEditPassword.getText().toString());
    }

    private void signInWithEmailAndPassword(String email, String password) {
        DialogUtils.showProgressDialog(this, "", getString(R.string.sign_in), false);
        mFirebaseAuth
                .signInWithEmailAndPassword(email, password)
                .addOnCompleteListener(this, new OnCompleteListener<AuthResult>() {
                    @Override
                    public void onComplete(@NonNull Task<AuthResult> task) {
                        DialogUtils.dismissProgressDialog();
                        if (task.isSuccessful()) {
                            Toast.makeText(LoginActivity.this, "Login Successful", Toast.LENGTH_SHORT).show();
                            startActivity(new Intent(LoginActivity.this, HomeActivity.class));
                            finish();
                        } else {
                            Toast.makeText(LoginActivity.this, task.getException().getMessage(), Toast.LENGTH_SHORT).show();
                        }
                    }
                });
    }

    @OnClick(R.id.btnLoginSignUp)
    void onSignUpClick() {
        startActivity(new Intent(this, SignUpActivity.class));
    }

    @OnClick(R.id.btnLoginForgotPassword)
    void forgotPassword() {
        startActivity(new Intent(this, ForgotPasswordActivity.class));
    }
}

Section 229.10: Send Firebase password reset email

public class ForgotPasswordActivity extends AppCompatActivity {
    @BindView(R.id.tIETForgotPasswordEmail)
    EditText mEditEmail;
    private FirebaseAuth mFirebaseAuth;
    private FirebaseAuth.AuthStateListener mAuthStateListener;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_forgot_password);
        ButterKnife.bind(this);
        mFirebaseAuth = FirebaseAuth.getInstance();
        mAuthStateListener = new FirebaseAuth.AuthStateListener() {
            @Override
            public void onAuthStateChanged(@NonNull FirebaseAuth firebaseAuth) {
                FirebaseUser firebaseUser = firebaseAuth.getCurrentUser();
                if (firebaseUser != null) {
                    // Do whatever you want with the UserId by firebaseUser.getUid()
                } else {
                }
            }
        };
    }

    @Override
    protected void onStart() {
        super.onStart();
        mFirebaseAuth.addAuthStateListener(mAuthStateListener);
    }

    @Override
    protected void onStop() {
        super.onStop();
        if (mAuthStateListener != null) {
            mFirebaseAuth.removeAuthStateListener(mAuthStateListener);
        }
    }

    @OnClick(R.id.btnForgotPasswordSubmit)
    void onSubmitClick() {
        if (FormValidationUtils.isBlank(mEditEmail)) {
            FormValidationUtils.setError(null, mEditEmail, "Please enter email");
            return;
        }

        if (!FormValidationUtils.isEmailValid(mEditEmail)) {
            FormValidationUtils.setError(null, mEditEmail, "Please enter valid email");
            return;
        }
        DialogUtils.showProgressDialog(this, "", "Please wait...", false);
        mFirebaseAuth.sendPasswordResetEmail(mEditEmail.getText().toString())
                .addOnCompleteListener(new OnCompleteListener<Void>() {
                    @Override
                    public void onComplete(@NonNull Task<Void> task) {
                        DialogUtils.dismissProgressDialog();
                        if (task.isSuccessful()) {
                            Toast.makeText(ForgotPasswordActivity.this, "An email has been sent to you.", Toast.LENGTH_SHORT).show();
                            finish();
                        } else {
                            Toast.makeText(ForgotPasswordActivity.this, task.getException().getMessage(), Toast.LENGTH_SHORT).show();
                        }
                    }
                });
    }
}

Section 229.11: Re-Authenticate Firebase user

public class ReAuthenticateDialogFragment extends DialogFragment {
    @BindView(R.id.et_dialog_reauthenticate_email)
    EditText mEditTextEmail;
    @BindView(R.id.et_dialog_reauthenticate_password)
    EditText mEditTextPassword;
    private OnReauthenticateSuccessListener mOnReauthenticateSuccessListener;

    @OnClick(R.id.btn_dialog_reauthenticate)
    void onReauthenticateClick() {
        FormValidationUtils.clearErrors(mEditTextEmail, mEditTextPassword);

        if (FormValidationUtils.isBlank(mEditTextEmail)) {
            FormValidationUtils.setError(null, mEditTextEmail, "Please enter email");
            return;
        }

        if (!FormValidationUtils.isEmailValid(mEditTextEmail)) {
            FormValidationUtils.setError(null, mEditTextEmail, "Please enter valid email");
            return;
        }

        if (TextUtils.isEmpty(mEditTextPassword.getText())) {
            FormValidationUtils.setError(null, mEditTextPassword, "Please enter password");
            return;
        }

        reauthenticateUser(mEditTextEmail.getText().toString(), mEditTextPassword.getText().toString());
    }

    private void reauthenticateUser(String email, String password) {
        DialogUtils.showProgressDialog(getActivity(), "Re-Authenticating", "Please wait...", false);
        FirebaseUser firebaseUser = FirebaseAuth.getInstance().getCurrentUser();
        AuthCredential authCredential = EmailAuthProvider.getCredential(email, password);
        firebaseUser.reauthenticate(authCredential)
                .addOnCompleteListener(new OnCompleteListener<Void>() {
                    @Override
                    public void onComplete(@NonNull Task<Void> task) {
                        DialogUtils.dismissProgressDialog();
                        if (task.isSuccessful()) {
                            mOnReauthenticateSuccessListener.onReauthenticateSuccess();
                            dismiss();
                        } else {
                            ((BaseAppCompatActivity) getActivity()).showToast(task.getException().getMessage());
                        }
                    }
                });
    }

    @Override
    public void onAttach(Context context) {
        super.onAttach(context);
        mOnReauthenticateSuccessListener = (OnReauthenticateSuccessListener) context;
    }

    @OnClick(R.id.btn_dialog_reauthenticate_cancel)
    void onCancelClick() {
        dismiss();
    }

    @Override
    public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) {
        View view = inflater.inflate(R.layout.dialog_reauthenticate, container);
        ButterKnife.bind(this, view);
        return view;
    }

    @Override
    public void onResume() {
        super.onResume();
        Window window = getDialog().getWindow();
        window.setLayout(WindowManager.LayoutParams.MATCH_PARENT, WindowManager.LayoutParams.WRAP_CONTENT);
    }

    interface OnReauthenticateSuccessListener {
        void onReauthenticateSuccess();
    }
}

Section 229.12: Firebase Sign Out

Initialization of the variable:

private GoogleApiClient mGoogleApiClient;

You must write this code in the onCreate() method of every activity that has a sign-out button:

mGoogleApiClient = new GoogleApiClient.Builder(this)
        .enableAutoManage(this /* FragmentActivity */, this /* OnConnectionFailedListener */)
        .addApi(Auth.GOOGLE_SIGN_IN_API)
        .build();

Put the code below in the sign-out button's click handler:
Auth.GoogleSignInApi.signOut(mGoogleApiClient).setResultCallback(
        new ResultCallback<Status>() {
            @Override
            public void onResult(Status status) {
                FirebaseAuth.getInstance().signOut();
                Intent i1 = new Intent(MainActivity.this, GoogleSignInActivity.class);
                startActivity(i1);
                Toast.makeText(MainActivity.this, "Logout Successfully!", Toast.LENGTH_SHORT).show();
            }
        });

Chapter 230: Firebase Cloud Messaging

Firebase Cloud Messaging (FCM) is a cross-platform messaging solution that lets you reliably deliver messages at no cost. Using FCM, you can notify a client app that new email or other data is available to sync. You can send notification messages to drive user re-engagement and retention. For use cases such as instant messaging, a message can transfer a payload of up to 4KB to a client app.

Section 230.1: Set Up a Firebase Cloud Messaging Client App on Android

1. Complete the Installation and setup part to connect your app to Firebase. This will create the project in Firebase.

2. Add the dependency for Firebase Cloud Messaging to your module-level build.gradle file:

dependencies {
    compile 'com.google.firebase:firebase-messaging:10.2.1'
}

Now you are ready to work with FCM in Android. FCM clients require devices running Android 2.3 or higher that also have the Google Play Store app installed, or an emulator running Android 2.3 with Google APIs.

Edit your AndroidManifest.xml file:

<service android:name=".MyFirebaseMessagingService">
    <intent-filter>
        <action android:name="com.google.firebase.MESSAGING_EVENT"/>
    </intent-filter>
</service>

<service android:name=".MyFirebaseInstanceIDService">
    <intent-filter>
        <action android:name="com.google.firebase.INSTANCE_ID_EVENT"/>
    </intent-filter>
</service>

Section 230.2: Receive Messages

To receive messages, use a service that extends FirebaseMessagingService and override the onMessageReceived method.

public class MyFcmListenerService extends FirebaseMessagingService {

    /**
     * Called when a message is received.
     *
     * @param remoteMessage Object representing the message received from Firebase Cloud Messaging.
     */
    @Override
    public void onMessageReceived(RemoteMessage remoteMessage) {
        String from = remoteMessage.getFrom();

        // Check if the message contains a data payload.
        if (remoteMessage.getData().size() > 0) {
            Log.d(TAG, "Message data payload: " + remoteMessage.getData());
            Map<String, String> data = remoteMessage.getData();
        }

        // Check if the message contains a notification payload.
        if (remoteMessage.getNotification() != null) {
            Log.d(TAG, "Message Notification Body: " + remoteMessage.getNotification().getBody());
        }

        //.....
    }
}

When the app is in the background, Android directs notification messages to the system tray. A user tap on the notification opens the app launcher by default. This includes messages that contain both a notification and a data payload (and all messages sent from the Notifications console). In these cases, the notification is delivered to the device's system tray, and the data payload is delivered in the extras of the intent of your launcher Activity.

Here is a short recap:

App state  | Notification      | Data              | Both
Foreground | onMessageReceived | onMessageReceived | onMessageReceived
Background | System tray       | onMessageReceived | Notification: system tray; Data: in extras of the intent
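To illustrate the last cell of the table: if a message with both payloads arrives while the app is in the background, tapping the tray notification starts the launcher Activity with the data payload in its intent extras. A minimal sketch, assuming a hypothetical LauncherActivity and a data key named "message" (the key names depend entirely on what your server sends):

import android.os.Bundle;
import android.support.v7.app.AppCompatActivity;
import android.util.Log;

public class LauncherActivity extends AppCompatActivity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // Each key of the data payload is delivered as an extra of the launcher
        // intent when the user taps the tray notification.
        Bundle extras = getIntent().getExtras();
        if (extras != null && extras.containsKey("message")) { // hypothetical key
            String message = extras.getString("message");
            Log.d("FCM", "Data payload delivered via tray notification: " + message);
        }
    }
}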
Section 230.3: Code I have implemented in my app for pushing an image, a message, and a link to open in your WebView

This is my FirebaseMessagingService:

public class MyFirebaseMessagingService extends FirebaseMessagingService {

    Bitmap bitmap;

    @Override
    public void onMessageReceived(RemoteMessage remoteMessage) {
        String message = remoteMessage.getData().get("message");
        // imageUri will contain the URL of the image to be displayed with the notification
        String imageUri = remoteMessage.getData().get("image");
        String link = remoteMessage.getData().get("link");

        // To get a Bitmap image from the URL received
        bitmap = getBitmapfromUrl(imageUri);
        sendNotification(message, bitmap, link);
    }

    /**
     * Create and show a simple notification containing the received FCM message.
     */
    private void sendNotification(String messageBody, Bitmap image, String link) {
        Intent intent = new Intent(this, NewsListActivity.class);
        intent.addFlags(Intent.FLAG_ACTIVITY_CLEAR_TOP);
        intent.putExtra("LINK", link);
        PendingIntent pendingIntent = PendingIntent.getActivity(this, 0 /* Request code */, intent,
                PendingIntent.FLAG_ONE_SHOT);

        Uri defaultSoundUri = RingtoneManager.getDefaultUri(RingtoneManager.TYPE_NOTIFICATION);
        NotificationCompat.Builder notificationBuilder = new NotificationCompat.Builder(this)
                .setLargeIcon(image) /* Notification icon image */
                .setSmallIcon(R.drawable.hindi)
                .setContentTitle(messageBody)
                .setStyle(new NotificationCompat.BigPictureStyle()
                        .bigPicture(image)) /* Notification with image */
                .setAutoCancel(true)
                .setSound(defaultSoundUri)
                .setContentIntent(pendingIntent);

        NotificationManager notificationManager =
                (NotificationManager) getSystemService(Context.NOTIFICATION_SERVICE);
        notificationManager.notify(0 /* ID of notification */, notificationBuilder.build());
    }

    public Bitmap getBitmapfromUrl(String imageUrl) {
        try {
            URL url = new URL(imageUrl);
            HttpURLConnection connection = (HttpURLConnection) url.openConnection();
            connection.setDoInput(true);
            connection.connect();
            InputStream input = connection.getInputStream();
            Bitmap bitmap = BitmapFactory.decodeStream(input);
            return bitmap;
        } catch (Exception e) {
            e.printStackTrace();
            return null;
        }
    }
}

And this is the MainActivity that opens the link in my WebView or in another browser, depending on your requirement, through intents:

if (getIntent().getExtras() != null) {
    if (getIntent().getStringExtra("LINK") != null) {
        Intent i = new Intent(this, BrowserActivity.class);
        i.putExtra("link", getIntent().getStringExtra("LINK"));
        i.putExtra("PUSH", "yes");
        NewsListActivity.this.startActivity(i);
        finish();
    }
}

Section 230.4: Registration token

On initial startup of your app, the FCM SDK generates a registration token for the client app instance. If you want to target single devices or create device groups, you'll need to access this token by extending FirebaseInstanceIdService.

The onTokenRefresh callback fires whenever a new token is generated, and you can use the method FirebaseInstanceId.getToken() to retrieve the current token.

Example:

public class MyFirebaseInstanceIDService extends FirebaseInstanceIdService {

    /**
     * Called if the InstanceID token is updated. This may occur if the security of
     * the previous token had been compromised. Note that this is also called when the InstanceID
     * token is initially generated, so this is where you would retrieve the token.
     */
    @Override
    public void onTokenRefresh() {
        // Get updated InstanceID token.
        String refreshedToken = FirebaseInstanceId.getInstance().getToken();
        Log.d(TAG, "Refreshed token: " + refreshedToken);
    }
}

Section 230.5: Subscribe to a topic

Client apps can subscribe to any existing topic, or they can create a new topic. When a client app subscribes to a new topic name, a new topic of that name is created in FCM and any client can subsequently subscribe to it.

To subscribe to a topic, use the subscribeToTopic() method, specifying the topic name:

FirebaseMessaging.getInstance().subscribeToTopic("myTopic");

Chapter 231: Firebase Realtime DataBase

Section 231.1: Quick setup

1. Complete the Installation and setup part to connect your app to Firebase. This will create the project in Firebase.

2. Add the dependency for Firebase Realtime Database to your module-level build.gradle file:

compile 'com.google.firebase:firebase-database:10.2.1'

3. Configure the Firebase Database rules.

Now you are ready to work with the Realtime Database in Android. For example, you can write a Hello World message to the database under the message key:

// Write a message to the database
FirebaseDatabase database = FirebaseDatabase.getInstance();
DatabaseReference myRef = database.getReference("message");
myRef.setValue("Hello, World!");

Section 231.2: Firebase Realtime DataBase event handler

First initialize FirebaseDatabase:

FirebaseDatabase database = FirebaseDatabase.getInstance();

Write to your database:

// Write a message to the database
FirebaseDatabase database = FirebaseDatabase.getInstance();
DatabaseReference myRef = database.getReference("message");
myRef.setValue("Hello, World!");

Read from your database:

// Read from the database
myRef.addValueEventListener(new ValueEventListener() {
    @Override
    public void onDataChange(DataSnapshot dataSnapshot) {
        // This method is called once with the initial value and again
        // whenever data at this location is updated.
        String value = dataSnapshot.getValue(String.class);
        Log.d(TAG, "Value is: " + value);
    }

    @Override
    public void onCancelled(DatabaseError error) {
        // Failed to read value
        Log.w(TAG, "Failed to read value.", error.toException());
    }
});

Retrieve data on Android events:

ChildEventListener childEventListener = new ChildEventListener() {
    @Override
    public void onChildAdded(DataSnapshot dataSnapshot, String previousChildName) {
        Log.d(TAG, "onChildAdded:" + dataSnapshot.getKey());
    }

    @Override
    public void onChildChanged(DataSnapshot dataSnapshot, String previousChildName) {
        Log.d(TAG, "onChildChanged:" + dataSnapshot.getKey());
    }

    @Override
    public void onChildRemoved(DataSnapshot dataSnapshot) {
        Log.d(TAG, "onChildRemoved:" + dataSnapshot.getKey());
    }

    @Override
    public void onChildMoved(DataSnapshot dataSnapshot, String previousChildName) {
        Log.d(TAG, "onChildMoved:" + dataSnapshot.getKey());
    }

    @Override
    public void onCancelled(DatabaseError databaseError) {
        Log.w(TAG, "postComments:onCancelled", databaseError.toException());
        Toast.makeText(mContext, "Failed to load comments.", Toast.LENGTH_SHORT).show();
    }
};
ref.addChildEventListener(childEventListener);

Section 231.3: Understanding firebase JSON database

Before we get our hands dirty with code, I feel it is necessary to understand how data is stored in firebase. Unlike relational databases, firebase stores data in JSON format. Think of each row in a relational database as a JSON object (which is basically an unordered key-value pair).
So the column name becomes the key, and the value stored in that column for one particular row is the value. This way the entire row is represented as a JSON object, and a list of these represents an entire database table. The immediate benefit that I see in this is that schema modification becomes a much cheaper operation compared to old RDBMS. It is easier to add a couple more attributes to a JSON than to alter a table structure.

Here is a sample JSON to show how data is stored in firebase:

{
  "user_base" : {
    "342343" : {
      "email" : "<EMAIL>",
      "authToken" : "some string",
      "name" : "Kaushal",
      "phone" : "+919916xxxxxx",
      "serviceProviderId" : "firebase",
      "signInServiceType" : "google"
    },
    "354895" : {
      "email" : "<EMAIL>",
      "authToken" : "some string",
      "name" : "devil",
      "phone" : "+919685xxxxxx",
      "serviceProviderId" : "firebase",
      "signInServiceType" : "github"
    },
    "371298" : {
      "email" : "<EMAIL>",
      "authToken" : "I am batman",
      "name" : "<NAME>",
      "phone" : "+14085xxxxxx",
      "serviceProviderId" : "firebase",
      "signInServiceType" : "shield"
    }
  },
  "user_prefs": {
    "key1": {
      "data": "for key one"
    },
    "key2": {
      "data": "for key two"
    },
    "key3": {
      "data": "for key three"
    }
  },
  //other structures
}

This clearly shows how data that we used to store in relational databases can be stored in JSON format. Next, let's see how to read this data on Android devices.

Section 231.4: Retrieving data from firebase

I am going to assume you already know about adding gradle dependencies for firebase in Android Studio. If you don't, just follow the guide from here. Add your app in the firebase console and sync Android Studio after adding the dependencies. Not all dependencies are needed, just firebase database and firebase auth.

Now that we know how data is stored and how to add gradle dependencies, let's see how to use the imported firebase Android SDK to retrieve data.

Create a firebase database reference:

DatabaseReference userDBRef = FirebaseDatabase.getInstance().getReference();
// above statement points to the base tree
userDBRef = FirebaseDatabase.getInstance().getReference().child("user_base");
// points to the user_base table JSON (see previous section)

From here you can chain multiple child() method calls to point to the data you are interested in. For example, if data is stored as depicted in the previous section and you want to point to the Bruce Wayne user, you can use:

DatabaseReference bruceWayneRef = userDBRef.child("371298");
// 371298 is the key of the Bruce Wayne user in the JSON structure (previous section)

Or simply pass the whole reference to the JSON object:

DatabaseReference bruceWayneRef = FirebaseDatabase.getInstance().getReference()
        .child("user_base/371298");
// deeply nested data can also be referenced this way, just put the fully
// qualified path in the pattern shown in the above code: "blah/blah1/blah1-2/blah1-2-3..."

Now that we have the reference to the data we want to fetch, we can use listeners to fetch data in Android apps. Unlike traditional calls where you fire REST API calls using Retrofit or Volley, here a simple callback listener is required to get the data. The firebase SDK calls the callback methods and you are done.

There are basically two types of listeners you can attach: one is ValueEventListener and the other one is ChildEventListener (described in the next section).
For any change in data under the node we have referenced and added listeners to, value event listeners return the entire JSON structure, while child event listeners return the specific child where the change has happened. Both of these are useful in their own way.

To fetch the data from firebase, we can add one or more listeners to a firebase database reference (like the userDBRef we created earlier). Here is some sample code (code explanation after the code):

userDBRef.addValueEventListener(new ValueEventListener() {
    @Override
    public void onDataChange(DataSnapshot dataSnapshot) {
        User bruceWayne = dataSnapshot.child("371298").getValue(User.class);
        // Do something with the retrieved data or Bruce Wayne
    }

    @Override
    public void onCancelled(DatabaseError databaseError) {
        Log.e("UserListActivity", "Error occurred");
        // Do something about the error
    }
});

Did you notice the class type passed? DataSnapshot can convert JSON data into our defined POJOs; simply pass the right class type.

If your use case does not require the entire data (in our case the user_base table) every time some little change occurs, or say you want to fetch the data only once, you can use the addListenerForSingleValueEvent() method of the database reference. This fires the callback only once.

userDBRef.addListenerForSingleValueEvent(new ValueEventListener() {
    @Override
    public void onDataChange(DataSnapshot dataSnapshot) {
        // Do something
    }

    @Override
    public void onCancelled(DatabaseError databaseError) {
        // Do something about the error
    }
});

The above samples will give you the value of the JSON node. To get the key, simply call:

String myKey = dataSnapshot.getKey();

Section 231.5: Listening for child updates

Take a use case like a chat app or a collaborative grocery list app (that basically requires a list of objects to be synced across users). If you use the firebase database and add a value event listener to the chat parent node or grocery list parent node, you will end up with the entire chat structure from the beginning of time (I mean the beginning of your chat) every time a chat node is added (i.e. anyone says hi). That we don't want to do; what we are interested in is only the new node, or only the old node that got deleted or modified. The unchanged ones should not be returned.

In this case we can use ChildEventListener. Without further ado, here is the code sample (see previous sections for sample JSON data):

userDBRef.addChildEventListener(new ChildEventListener() {
    @Override
    public void onChildAdded(DataSnapshot dataSnapshot, String s) {
    }

    @Override
    public void onChildChanged(DataSnapshot dataSnapshot, String s) {
    }

    @Override
    public void onChildRemoved(DataSnapshot dataSnapshot) {
    }

    @Override
    public void onChildMoved(DataSnapshot dataSnapshot, String s) {
        // If not dealing with ordered data, forget about this
    }

    @Override
    public void onCancelled(DatabaseError databaseError) {
    }
});

Method names are self-explanatory. As you can see, whenever a new user is added, or some property of an existing user is modified, or a user is deleted or removed, the appropriate callback method of the child event listener is called with the relevant data. So if you are keeping the UI refreshed for, say, a chat app, get the JSON from onChildAdded(), parse it into a POJO, and fit it in your UI. Just remember to remove your listener when the user leaves the screen.

onChildChanged() gives the entire child value with the changed properties (new ones). onChildRemoved() returns the removed child node.
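As a concrete example of such a POJO, here is a minimal User class matching the user_base JSON from Section 231.3. The field names are taken from that sample; Firebase only requires a public no-argument constructor and public fields (or getters) whose names match the JSON keys:

public class User {

    // Field names match the keys of the sample "user_base" JSON
    public String email;
    public String authToken;
    public String name;
    public String phone;
    public String serviceProviderId;
    public String signInServiceType;

    public User() {
        // Required empty constructor for dataSnapshot.getValue(User.class)
    }
}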
Section 231.6: Retrieving data with pagination

When you have a huge JSON database, adding a value event listener doesn't make sense. It will return the huge JSON, and parsing it would be time consuming. In such cases we can use pagination and fetch part of the data and display or process it. Kind of like lazy loading, or like fetching old chats when the user clicks on "show older chats". In this case Query can be used.

Let's take our old example from the previous sections. The user base contains 3 users; if it grows to, say, 300,000 users and you want to fetch the user list in batches of 50:

// class level
final int limit = 50;
int start = 0;

// event level
Query userListQuery = userDBRef.orderByChild("email").limitToFirst(limit)
        .startAt(start);
userListQuery.addValueEventListener(new ValueEventListener() {
    @Override
    public void onDataChange(DataSnapshot dataSnapshot) {
        // Do something
        start += (limit + 1);
    }

    @Override
    public void onCancelled(DatabaseError databaseError) {
        // Do something about the error
    }
});

Here value or child events can be added and listened to. Call the query again to fetch the next 50. Make sure to add the orderByChild() method; this will not work without that. Firebase needs to know the order by which you are paginating. Note that startAt() expects a value of the same type as the child you order by, so when ordering by a string child such as email you would pass the last email retrieved rather than a numeric offset.

Section 231.7: Denormalization: Flat Database Structure

Denormalization and a flat database structure are necessary to efficiently download data in separate calls. With the following structure, it is also possible to maintain two-way relationships. The disadvantage of this approach is that you always need to update the data in multiple places.

For example, imagine an app which allows the user to store messages to himself (memos).

Desired flat database structure:

|--database
   |-- memos
      |-- memokey1
         |-- title: "Title"
         |-- content: "Message"
      |-- memokey2
         |-- title: "Important Title"
         |-- content: "Important Message"
   |-- users
      |-- userKey1
         |-- name: "<NAME>"
         |-- memos
            |-- memokey1 : true //The values here don't matter, we only need the keys.
            |-- memokey2 : true
      |-- userKey2
         |-- name: "<NAME>"

The used memo class:

public class Memo {
    private String title, content;

    // constructor, getters and setters
    ...

    // toMap() is necessary for the push process; it must be accessible from outside the class
    public Map<String, Object> toMap() {
        HashMap<String, Object> result = new HashMap<>();
        result.put("title", title);
        result.put("content", content);
        return result;
    }
}

Retrieving the memos of a user:

// We need to store the keys and the memos separately
private ArrayList<String> mKeys = new ArrayList<>();
private ArrayList<Memo> mMemos = new ArrayList<>();

// The user needs to be logged in to retrieve the uid
String currentUserId = FirebaseAuth.getInstance().getCurrentUser().getUid();

// This is the reference to the list of memos a user has
DatabaseReference currentUserMemoReference = FirebaseDatabase.getInstance().getReference()
        .child("users").child(currentUserId).child("memos");

// This is a reference to the list of all memos
DatabaseReference memoReference = FirebaseDatabase.getInstance().getReference()
        .child("memos");

// We start to listen to the user's memos,
// this will also retrieve the memos initially
currentUserMemoReference.addChildEventListener(new ChildEventListener() {
    @Override
    public void onChildAdded(DataSnapshot dataSnapshot, String s) {
        // Here we retrieve the key of the memo the user has.
        String key = dataSnapshot.getKey(); // for example memokey1
        // For later manipulations of the lists, we need to store the key in a list
        mKeys.add(key);

        // Now that we know which message belongs to the user,
        // we request it from our memos:
        memoReference.child(key).addValueEventListener(new ValueEventListener() {
            @Override
            public void onDataChange(DataSnapshot dataSnapshot) {
                // Here we retrieve our memo:
                Memo memo = dataSnapshot.getValue(Memo.class);
                mMemos.add(memo);
            }

            @Override
            public void onCancelled(DatabaseError databaseError) {
            }
        });
    }

    @Override
    public void onChildChanged(DataSnapshot dataSnapshot, String s) {
    }

    @Override
    public void onChildRemoved(DataSnapshot dataSnapshot) {
    }

    @Override
    public void onChildMoved(DataSnapshot dataSnapshot, String s) {
    }

    @Override
    public void onCancelled(DatabaseError databaseError) {
    }
});

Creating a memo:

// The user needs to be logged in to retrieve the uid
String currentUserUid = FirebaseAuth.getInstance().getCurrentUser().getUid();

// This is the path to the list of memos a user has
String userMemoPath = "users/" + currentUserUid + "/memos/";

// This is the path to the list of all memos
String memoPath = "memos/";

// We need to retrieve an unused key from the memos reference
DatabaseReference memoReference = FirebaseDatabase.getInstance().getReference().child("memos");
String key = memoReference.push().getKey();
Memo newMemo = new Memo("Important numbers", "1337, 42, 3.14159265359");

Map<String, Object> childUpdates = new HashMap<>();
// The second parameter here (the value) does not matter, it's just that the key exists
childUpdates.put(userMemoPath + key, true);
childUpdates.put(memoPath + key, newMemo.toMap());
FirebaseDatabase.getInstance().getReference().updateChildren(childUpdates);

After the push, our database looks like this:

|--database
   |-- memos
      |-- memokey1
         |-- title: "Title"
         |-- content: "Message"
      |-- memokey2
         |-- title: "Important Title"
         |-- content: "Important Message"
      |-- generatedMemokey3
         |-- title: "Important numbers"
         |-- content: "1337, 42, 3.14159265359"
   |-- users
      |-- userKey1
         |-- name: "<NAME>"
         |-- memos
            |-- memokey1 : true //The values here don't matter, we only need the keys.
            |-- memokey2 : true
            |-- generatedMemokey3 : true
      |-- userKey2
         |-- name: "<NAME>"

Section 231.8: Designing and understanding how to retrieve realtime data from the Firebase Database

This example assumes that you have already set up a Firebase Realtime Database. If you are a starter, then please inform yourself here on how to add Firebase to your Android project.

First, add the dependency of the Firebase Database to the app-level build.gradle file:

compile 'com.google.firebase:firebase-database:9.4.0'

Now, let us create a chat app which stores data in the Firebase Database.

Step 1: Create a class named Chat

Just create a class with some basic variables required for the chat:

public class Chat {
    public String name, message;
}

Step 2: Create some JSON data

For sending/retrieving data to/from the Firebase Database, you need to use JSON. Let us assume that some chats are already stored at the root level in the database. The data of these chats could look as follows:

[
In the following example we are going to use the childEventListener: DatabaseReference chatDb = FirebaseDatabase.getInstance().getReference() // Referencing the root of the database. .child("chats"); // Referencing the "chats" node under the root. chatDb.addChildEventListener(new ChildEventListener() { @Override public void onChildAdded(DataSnapshot dataSnapshot, String s) { // This function is called for every child id chat in this case, so using the above // example, this function is going to be called 3 times. // Retrieving the Chat object from this function is simple. Chat chat; // Create a null chat object. // Use the getValue function in the dataSnapshot and pass the object's class name to // which you want to convert and get data. In this case it is Chat.class. chat = dataSnapshot.getValue(Chat.class); // Now you can use this chat object and add it into an ArrayList or something like // that and show it in the recycler view. } @Override public void onChildChanged(DataSnapshot dataSnapshot, String s) { // This function is called when any of the node value is changed, dataSnapshot will // get the data with the key of the child, so you can swap the new value with the // old one in the ArrayList or something like that. // To get the key, use the .getKey() function. // To get the value, use code similar to the above one. } @Override public void onChildRemoved(DataSnapshot dataSnapshot) { // This function is called when any of the child node is removed. dataSnapshot will // get the data with the key of the child. // To get the key, use the s String parameter . } @Override public void onChildMoved(DataSnapshot dataSnapshot, String s) { // This function is called when any of the child nodes is moved to a different position. // To get the key, use the s String parameter. GoalKicker.com Android Notes for Professionals 1105 } @Override public void onCancelled(DatabaseError databaseError) { // If anything goes wrong, this function is going to be called. // You can get the exception by using databaseError.toException(); } }); Step 4: Add data to the database Just create a Chat class object and add the values as follows: Chat chat=new Chat(); chat.name="<NAME>"; chat.message="First message from android"; Now get a reference to the chats node as done in the retrieving session: DatabaseReference chatDb = FirebaseDatabase.getInstance().getReference().child("chats"); Before you start adding data, keep in mind that you need one more deep reference since a chat node has several more nodes and adding a new chat means adding a new node containing the chat details. We can generate a new and unique name of the node using the push() function on the DatabaseReference object, which will return another DatabaseReference, which in turn points to a newly formed node to insert the chat data. Example // The parameter is the chat object that was newly created a few lines above. chatDb.push().setValue(chat); The setValue() function will make sure that all of the application's onDataChanged functions are getting called (including the same device), which happens to be the attached listener of the "chats" node. GoalKicker.com Android Notes for Professionals 1106 Chapter 232: Firebase App Indexing Section 232.1: Supporting Http URLs Step 1: Allow Google to Crawl to your content.Edit servers robot.txt le.You can control google crawling for your content by editing this le,you can refer to this link for more details. 
Step 2: Associate your app with your website. Include assetlinks.json and upload it to your web server's .well-known directory. The content of your assetlinks.json is as follows:

[{
    "relation": ["delegate_permission/common.handle_all_urls"],
    "target": {
        "namespace": "android_app",
        "package_name": "<your_package_name>",
        "sha256_cert_fingerprints": ["<hash_of_app_certificate>"]
    }
}]

Step 3: Include app links in your manifest file to redirect URLs into your application, like below:

<activity
    android:name=".activity.SampleActivity"
    android:label="@string/app_name"
    android:windowSoftInputMode="adjustResize|stateAlwaysHidden">
    <intent-filter>
        <action android:name="android.intent.action.VIEW" />
        <category android:name="android.intent.category.DEFAULT" />
        <category android:name="android.intent.category.BROWSABLE" />
        <data
            android:host="example.live"
            android:pathPrefix="/vod"
            android:scheme="https"/>
        <data
            android:host="example.live"
            android:pathPrefix="/vod"
            android:scheme="http"/>
    </intent-filter>
</activity>

Refer to this if you want to learn about each and every tag here.

<action> Specify the ACTION_VIEW intent action so that the intent filter can be reached from Google Search.

<data> Add one or more <data> tags, where each tag represents a URI format that resolves to the activity. At minimum, the <data> tag must include the android:scheme attribute. You can add additional attributes to further refine the type of URI that the activity accepts. For example, you might have multiple activities that accept similar URIs, which differ simply based on the path name. In this case, use the android:path attribute or its variants (pathPattern or pathPrefix) to differentiate which activity the system should open for different URI paths.

<category> Include the BROWSABLE category. The BROWSABLE category is required in order for the intent filter to be accessible from a web browser. Without it, clicking a link in a browser cannot resolve to your app. The DEFAULT category is optional, but recommended. Without this category, the activity can be started only with an explicit intent, using your app component name.

Step 4: Handle incoming URLs

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_schedule);
    onNewIntent(getIntent());
}

protected void onNewIntent(Intent intent) {
    String action = intent.getAction();
    Uri data = intent.getData();
    if (Intent.ACTION_VIEW.equals(action) && data != null) {
        articleId = data.getLastPathSegment();
        TextView linkText = (TextView) findViewById(R.id.link);
        linkText.setText(data.toString());
    }
}

Step 5: You can test this by using an Android Debug Bridge command or Android Studio run configurations.

adb command: launch your application and then run this command:

adb shell am start -a android.intent.action.VIEW -d "{URL}" <package name>

Android Studio configuration: Android Studio > Build > Edit Configuration > Launch options > select URL > then type in your URL > Apply and test. Run your application; if the Run window shows an error, you need to check your URL format against the app links mentioned in the manifest. Otherwise it will run successfully and redirect to the page specified by your URL.

Section 232.2: Add AppIndexing API

You can easily find the official doc for adding this to a project, but in this example I'm going to highlight some of the key areas to be taken care of.

Step 1: Add the Google service:

dependencies {
    ...
    compile 'com.google.android.gms:play-services-appindexing:9.4.0'
    ...
}

Step 2: Import classes:

import com.google.android.gms.appindexing.Action;
import com.google.android.gms.appindexing.AppIndex;
import com.google.android.gms.appindexing.Thing; // needed by getAction() below
import com.google.android.gms.common.api.GoogleApiClient;

Step 3: Add App Indexing API calls:

private GoogleApiClient mClient;
private Uri mUrl;
private String mTitle;
private String mDescription;

// If you know the values to be indexed, then you can initialize these variables in onCreate()
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    mClient = new GoogleApiClient.Builder(this).addApi(AppIndex.API).build();
    mUrl = Uri.parse("http://examplepetstore.com/dogs/standard-poodle");
    mTitle = "Standard Poodle";
    mDescription = "The Standard Poodle stands at least 18 inches at the withers";
}

// If your data is coming from a network request, then initialize these values in onResponse()
// and add null checks so that your code won't fall apart.

// setting title and description for App Indexing
mUrl = Uri.parse("android-app://com.famelive/https/m.fame.live/vod/" + model.getId());
mTitle = model.getTitle();
mDescription = model.getDescription();

mClient.connect();
AppIndex.AppIndexApi.start(mClient, getAction());

@Override
protected void onStop() {
    // if your response fails, check whether these are initialized or not
    if (mTitle != null && mDescription != null && mUrl != null)
        if (getAction() != null) {
            AppIndex.AppIndexApi.end(mClient, getAction());
            mClient.disconnect();
        }
    super.onStop();
}

public Action getAction() {
    Thing object = new Thing.Builder()
            .setName(mTitle)
            .setDescription(mDescription)
            .setUrl(mUrl)
            .build();

    return new Action.Builder(Action.TYPE_WATCH)
            .setObject(object)
            .setActionStatus(Action.STATUS_TYPE_COMPLETED)
            .build();
}

To test this, just follow step 4 in the Remarks given below.

Chapter 233: Firebase Crash Reporting

Section 233.1: How to report an error

Firebase Crash Reporting automatically generates reports for fatal errors (or uncaught exceptions). You can create your custom report using:

FirebaseCrash.report(new Exception("My first Android non-fatal error"));

You can check in the log when FirebaseCrash has initialized the module:

07-20 08:57:24.442 D/FirebaseCrashApiImpl: FirebaseCrash reporting API initialized
07-20 08:57:24.442 I/FirebaseCrash: FirebaseCrash reporting initialized com.google.firebase.crash.internal.zzg@3333d325
07-20 08:57:24.442 D/FirebaseApp: Initialized class com.google.firebase.crash.FirebaseCrash.

And then when it sent the exception:

07-20 08:57:47.052 D/FirebaseCrashApiImpl: throwable java.lang.Exception: My first Android non-fatal error
07-20 08:58:18.822 D/FirebaseCrashSenderServiceImpl: Response code: 200
07-20 08:58:18.822 D/FirebaseCrashSenderServiceImpl: Report sent

You can add custom logs to your report with:

FirebaseCrash.log("Activity created");

Section 233.2: How to add Firebase Crash Reporting to your app

In order to add Firebase Crash Reporting to your app, perform the following steps:

Create an app on the Firebase Console here.

Copy the google-services.json file of your project into your app/ directory.

Add the following rules to your root-level build.gradle file in order to include the google-services plugin:

buildscript {
    // ...
    dependencies {
        // ...
        classpath 'com.google.gms:google-services:3.0.0'
    }
}

In your module Gradle file, add the apply plugin line at the bottom of the file to enable the Gradle plugin:

apply plugin: 'com.google.gms.google-services'

Add the dependency for Crash Reporting to your app-level build.gradle file:

compile 'com.google.firebase:firebase-crash:10.2.1'

You can then fire a custom exception from your application by using the following line:

FirebaseCrash.report(new Exception("Non Fatal Error logging"));

All your fatal exceptions will be reported to your Firebase Console.

If you want to add custom logs to a console, you can use the following code:

FirebaseCrash.log("Level 2 completed.");

For more information, please visit:

Official documentation
Stack Overflow dedicated topic

Chapter 234: Twitter APIs

Section 234.1: Creating login with twitter button and attach a callback to it

1. Inside your layout, add a Login button with the following code:

<com.twitter.sdk.android.core.identity.TwitterLoginButton
    android:id="@+id/twitter_login_button"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:layout_centerInParent="true"/>

2. In the Activity or Fragment that displays the button, you need to create and attach a Callback to the Login Button as follows:

import com.twitter.sdk.android.core.Callback;
import com.twitter.sdk.android.core.Result;
import com.twitter.sdk.android.core.TwitterException;
import com.twitter.sdk.android.core.TwitterSession;
import com.twitter.sdk.android.core.identity.TwitterLoginButton;
...

loginButton = (TwitterLoginButton) findViewById(R.id.twitter_login_button);
loginButton.setCallback(new Callback<TwitterSession>() {
    @Override
    public void success(Result<TwitterSession> result) {
        TwitterSession session = result.data; // the session of the logged-in user
        Log.d(TAG, "userName: " + session.getUserName());
        Log.d(TAG, "userId: " + session.getUserId());
        Log.d(TAG, "authToken: " + session.getAuthToken());
        Log.d(TAG, "id: " + session.getId());
        Log.d(TAG, "authToken: " + session.getAuthToken().token);
        Log.d(TAG, "authSecret: " + session.getAuthToken().secret);
    }

    @Override
    public void failure(TwitterException exception) {
        // Do something on failure
    }
});

3. Pass the result of the authentication Activity back to the button:

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
Add the following lines to your build.gradle dependencies: apply plugin: 'io.fabric' repositories { maven { url 'https://maven.fabric.io/public' } } compile('com.twitter.sdk.android:twitter:1.14.1@aar') { transitive = true; } GoalKicker.com Android Notes for Professionals 1113 Chapter 235: Youtube-API Section 235.1: Activity extending YouTubeBaseActivity public class CustomYouTubeActivity extends YouTubeBaseActivity implements YouTubePlayer.OnInitializedListener, YouTubePlayer.PlayerStateChangeListener { private YouTubePlayerView mPlayerView; private YouTubePlayer mYouTubePlayer; private String mVideoId = "B08iLAtS3AQ"; private String mApiKey; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); mApiKey = Config.YOUTUBE_API_KEY; mPlayerView = new YouTubePlayerView(this); mPlayerView.initialize(mApiKey, this); // setting up OnInitializedListener addContentView(mPlayerView, new LayoutParams(LayoutParams.MATCH_PARENT, LayoutParams.MATCH_PARENT)); //show it in full screen } //Called when initialization of the player succeeds. @Override public void onInitializationSuccess(YouTubePlayer.Provider provider, YouTubePlayer player, boolean wasRestored) { player.setPlayerStateChangeListener(this); // setting up the player state change listener this.mYouTubePlayer = player; if (!wasRestored) player.cueVideo(mVideoId); } @Override public void onInitializationFailure(YouTubePlayer.Provider provider, YouTubeInitializationResult errorReason) { Toast.makeText(this, "Error While initializing", Toast.LENGTH_LONG).show(); } @Override public void onAdStarted() { } @Override public void onLoaded(String videoId) { //video has been loaded if(!TextUtils.isEmpty(mVideoId) && !this.isFinishing() && mYouTubePlayer != null) mYouTubePlayer.play(); // if we don't call play then video will not auto play, but user still has the option to play via play button } @Override public void onLoading() { } @Override public void onVideoEnded() { } GoalKicker.com Android Notes for Professionals 1114 @Override public void onVideoStarted() { } @Override public void onError(ErrorReason reason) { Log.e("onError", "onError : " + reason.name()); } } Section 235.2: Consuming YouTube Data API on Android This example will guide you how to get playlist data using the YouTube Data API on Android. SHA-1 ngerprint First you need to get an SHA-1 ngerprint for your machine. There are various methods for retrieving it. You can choose any method provided in this Q&A. Google API console and YouTube key for Android Now that you have an SHA-1 ngerprint, open the Google API console and create a project. Go to this page and create a project using that SHA-1 key and enable the YouTube Data API. Now you will get a key. This key will be used to send requests from Android and fetch data. Gradle part You will have to add the following lines to your Gradle le for the YouTube Data API: compile 'com.google.apis:google-api-services-youtube:v3-rev183-1.22.0' In order to use YouTube's native client to send requests, we have to add the following lines in Gradle: compile 'com.google.http-client:google-http-client-android:+' compile 'com.google.api-client:google-api-client-android:+' compile 'com.google.api-client:google-api-client-gson:+' The following conguration also needs to be added in Gradle in order to avoid conicts: configurations.all { resolutionStrategy.force 'com.google.code.findbugs:jsr305:3.0.2' } Below it is shown how the gradle.build would nally look like. 
build.gradle apply plugin: 'com.android.application' android { compileSdkVersion 25 buildToolsVersion "25.0.2" defaultConfig { applicationId "com.aam.skillschool" minSdkVersion 19 targetSdkVersion 25 GoalKicker.com Android Notes for Professionals 1115 versionCode 1 versionName "1.0" testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner" } buildTypes { release { minifyEnabled false proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro' } } configurations.all { resolutionStrategy.force 'com.google.code.findbugs:jsr305:3.0.2' } } dependencies { compile fileTree(include: ['*.jar'], dir: 'libs') androidTestCompile('com.android.support.test.espresso:espresso-core:2.2.2', { exclude group: 'com.android.support', module: 'support-annotations' }) compile 'com.google.apis:google-api-services-youtube:v3-rev183-1.22.0' compile 'com.android.support:appcompat-v7:25.3.1' compile 'com.android.support:support-v4:25.3.1' compile 'com.google.http-client:google-http-client-android:+' compile 'com.google.api-client:google-api-client-android:+' compile 'com.google.api-client:google-api-client-gson:+' } Now comes the Java part. Since we will be using HttpTransport for networking and GsonFactory for converting JSON into POJO, we don't need any other library to send any requests. Now I want to show how to get playlists via the YouTube API by providing the playlist IDs. For this task I will use AsyncTask. To understand how we request parameters and to understand the ow, please take a look at the YouTube Data API. public class GetPlaylistDataAsyncTask extends AsyncTask<String[], Void, PlaylistListResponse> { private static final String YOUTUBE_PLAYLIST_PART = "snippet"; private static final String YOUTUBE_PLAYLIST_FIELDS = "items(id,snippet(title))"; private YouTube mYouTubeDataApi; public GetPlaylistDataAsyncTask(YouTube api) { mYouTubeDataApi = api; } @Override protected PlaylistListResponse doInBackground(String[]... params) { final String[] playlistIds = params[0]; PlaylistListResponse playlistListResponse; try { playlistListResponse = mYouTubeDataApi.playlists() .list(YOUTUBE_PLAYLIST_PART) .setId(TextUtils.join(",", playlistIds)) .setFields(YOUTUBE_PLAYLIST_FIELDS) .setKey(AppConstants.YOUTUBE_KEY) //Here you will have to provide the keys .execute(); } catch (IOException e) { e.printStackTrace(); return null; GoalKicker.com Android Notes for Professionals 1116 } return playlistListResponse; } } The above asynchronous task will return an instance of PlaylistListResponse which is a build-in class of the YouTube SDK. It has all the required elds, so we don't have to create POJOs ourself. 
Finally, in our MainActivity we will have to do the following: public class MainActivity extends AppCompatActivity { private YouTube mYoutubeDataApi; private final GsonFactory mJsonFactory = new GsonFactory(); private final HttpTransport mTransport = AndroidHttp.newCompatibleTransport(); protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_review); mYoutubeDataApi = new YouTube.Builder(mTransport, mJsonFactory, null) .setApplicationName(getResources().getString(R.string.app_name)) .build(); String[] ids = {"some playlists ids here separated by "," }; new GetPlaylistDataAsyncTask(mYoutubeDataApi) { ProgressDialog progressDialog = new ProgressDialog(getActivity()); @Override protected void onPreExecute() { progressDialog.setTitle("Please wait....."); progressDialog.show(); super.onPreExecute(); } @Override protected void onPostExecute(PlaylistListResponse playlistListResponse) { super.onPostExecute(playlistListResponse); //Here we get the playlist data progressDialog.dismiss(); Log.d(TAG, playlistListResponse.toString()); } }.execute(ids); } } Section 235.3: Launching StandAlonePlayerActivity 1. Launch standalone player activity Intent standAlonePlayerIntent = YouTubeStandalonePlayer.createVideoIntent((Activity) context, Config.YOUTUBE_API_KEY, // which you have created in step 3 videoId, // video which is to be played 100, //The time, in milliseconds, where playback should start in the video true, //autoplay or not false); //lightbox mode or not; false will show in fullscreen context.startActivity(standAlonePlayerIntent); GoalKicker.com Android Notes for Professionals 1117 Section 235.4: YoutubePlayerFragment in portrait Activty The following code implements a simple YoutubePlayerFragment. The activity's layout is locked in portrait mode and when orientation changes or the user clicks full screen at the YoutubePlayer it turns to lansscape with the YoutubePlayer lling the screen. The YoutubePlayerFragment does not need to extend an activity provided by the Youtube library. It needs to implement YouTubePlayer.OnInitializedListener in order to get the YoutubePlayer initialized. So our Activity's class is the following import android.os.Bundle; import android.support.v7.app.AppCompatActivity; import android.util.Log; import android.widget.Toast; import com.google.android.youtube.player.YouTubeInitializationResult; import com.google.android.youtube.player.YouTubePlayer; import com.google.android.youtube.player.YouTubePlayerFragment; public class MainActivity extends AppCompatActivity implements YouTubePlayer.OnInitializedListener { public static final String API_KEY ; public static final String VIDEO_ID = "B08iLAtS3AQ"; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); YouTubePlayerFragment youTubePlayerFragment = (YouTubePlayerFragment) getFragmentManager() .findFragmentById(R.id.youtubeplayerfragment); youTubePlayerFragment.initialize(API_KEY, this); } /** * * @param provider The provider which was used to initialize the YouTubePlayer * @param youTubePlayer A YouTubePlayer which can be used to control video playback in the provider. * @param wasRestored Whether the player was restored from a previously saved state, as part of the YouTubePlayerView * or YouTubePlayerFragment restoring its state. 
     *                    true usually means playback is resuming from where the user expects it to,
     *                    and that a new video should not be loaded.
     */
    @Override
    public void onInitializationSuccess(YouTubePlayer.Provider provider, YouTubePlayer youTubePlayer,
                                        boolean wasRestored) {
        youTubePlayer.setFullscreenControlFlags(YouTubePlayer.FULLSCREEN_FLAG_CONTROL_ORIENTATION |
                YouTubePlayer.FULLSCREEN_FLAG_ALWAYS_FULLSCREEN_IN_LANDSCAPE);
        if (!wasRestored) {
            youTubePlayer.cueVideo(VIDEO_ID);
        }
    }

    /**
     * @param provider The provider which failed to initialize a YouTubePlayer.
     * @param error The reason for this failure, along with potential resolutions to this failure.
     */
    @Override
    public void onInitializationFailure(YouTubePlayer.Provider provider,
                                        YouTubeInitializationResult error) {
        final int REQUEST_CODE = 1;

        if (error.isUserRecoverableError()) {
            error.getErrorDialog(this, REQUEST_CODE).show();
        } else {
            String errorMessage = String.format(
                    "There was an error initializing the YoutubePlayer (%1$s)", error.toString());
            Toast.makeText(this, errorMessage, Toast.LENGTH_LONG).show();
        }
    }
}

A YoutubePlayerFragment can be added to the activity's layout XML as follows:

<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical"
    android:paddingBottom="@dimen/activity_vertical_margin"
    android:paddingLeft="@dimen/activity_horizontal_margin"
    android:paddingRight="@dimen/activity_horizontal_margin"
    android:paddingTop="@dimen/activity_vertical_margin"
    tools:context=".MainActivity">

    <fragment
        android:id="@+id/youtubeplayerfragment"
        android:name="com.google.android.youtube.player.YouTubePlayerFragment"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"/>

    <ScrollView
        android:layout_width="match_parent"
        android:layout_height="match_parent">

        <LinearLayout
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:orientation="vertical">

            <TextView
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:layout_gravity="center_horizontal"
                android:layout_marginTop="20dp"
                android:text="This is a YoutubePlayerFragment example"
                android:textStyle="bold"/>

            <!-- The original layout repeats this identical TextView several more times,
                 purely so that the ScrollView has enough content to scroll. -->
android:textStyle="bold"/> <TextView android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_gravity="center_horizontal" android:layout_marginTop="20dp" android:text="This is a YoutubePlayerFragment example" android:textStyle="bold"/> </LinearLayout> </ScrollView> </LinearLayout> Lastly you need to add the following attributes in your Manifest le inside the activity's tag android:configChanges="keyboardHidden|orientation|screenSize" android:screenOrientation="portrait" Section 235.5: YouTube Player API Obtaining the Android API Key : First you'll need to get the SHA-1 ngerprint on your machine using java keytool. Execute the below command in cmd/terminal to get the SHA-1 ngerprint. GoalKicker.com Android Notes for Professionals 1120 keytool -list -v -keystore ~/.android/debug.keystore -alias androiddebugkey -storepass android keypass android MainActivity.java public class Activity extends YouTubeBaseActivity implements YouTubePlayer.OnInitializedListener { private static final int RECOVERY_DIALOG_REQUEST = 1; // YouTube player view private YouTubePlayerView youTubeView; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); requestWindowFeature(Window.FEATURE_NO_TITLE); getWindow().setFlags(WindowManager.LayoutParams.FLAG_FULLSCREEN, WindowManager.LayoutParams.FLAG_FULLSCREEN); setContentView(R.layout.activity_main); youTubeView = (YouTubePlayerView) findViewById(R.id.youtube_view); // Initializing video player with developer key youTubeView.initialize(Config.DEVELOPER_KEY, this); } @Override public void onInitializationFailure(YouTubePlayer.Provider provider, YouTubeInitializationResult errorReason) { if (errorReason.isUserRecoverableError()) { errorReason.getErrorDialog(this, RECOVERY_DIALOG_REQUEST).show(); } else { String errorMessage = String.format( getString(R.string.error_player), errorReason.toString()); Toast.makeText(this, errorMessage, Toast.LENGTH_LONG).show(); } } @Override public void onInitializationSuccess(YouTubePlayer.Provider provider, YouTubePlayer player, boolean wasRestored) { if (!wasRestored) { // loadVideo() will auto play video // Use cueVideo() method, if you don't want to play it automatically player.loadVideo(Config.YOUTUBE_VIDEO_CODE); // Hiding player controls player.setPlayerStyle(YouTubePlayer.PlayerStyle.CHROMELESS); } } @Override protected void onActivityResult(int requestCode, int resultCode, Intent data) { if (requestCode == RECOVERY_DIALOG_REQUEST) { // Retry initialization if user performed a recovery action getYouTubePlayerProvider().initialize(Config.DEVELOPER_KEY, this); } } GoalKicker.com Android Notes for Professionals 1121 private YouTubePlayer.Provider getYouTubePlayerProvider() { return (YouTubePlayerView) findViewById(R.id.youtube_view); } } Now create Config.java le. 
This file holds the Google Console API developer key and the YouTube video ID.

Config.java:

public class Config {
    // Developer key
    public static final String DEVELOPER_KEY = "<KEY>";

    // YouTube video id
    public static final String YOUTUBE_VIDEO_CODE = "_oEA18Y8gM0";
}

XML file:

<com.google.android.youtube.player.YouTubePlayerView
    android:id="@+id/youtube_view"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:layout_marginBottom="30dp" />

Chapter 236: Integrate Google Sign In

Parameter | Detail
TAG | A String used while logging
GoogleSignInHelper | A static reference for the helper
AppCompatActivity | An Activity reference
GoogleApiClient | A reference of GoogleApiClient
RC_SIGN_IN | An integer representing an activity result constant
isLoggingOut | A boolean indicating whether the log-out task is running or not

Section 236.1: Google Sign In with Helper class

Add the following to your build.gradle, outside of the android tag:

// Apply plug-in to app.
apply plugin: 'com.google.gms.google-services'

Add the helper class below to your util package:

/**
 * Created by Andy
 */
public class GoogleSignInHelper implements GoogleApiClient.OnConnectionFailedListener,
        GoogleApiClient.ConnectionCallbacks {

    private static final String TAG = GoogleSignInHelper.class.getSimpleName();
    private static GoogleSignInHelper googleSignInHelper;
    private AppCompatActivity mActivity;
    private GoogleApiClient mGoogleApiClient;
    public static final int RC_SIGN_IN = 9001;
    private boolean isLoggingOut = false;

    public static GoogleSignInHelper newInstance(AppCompatActivity mActivity) {
        if (googleSignInHelper == null) {
            googleSignInHelper = new GoogleSignInHelper(mActivity);
        }
        return googleSignInHelper;
    }

    public GoogleSignInHelper(AppCompatActivity mActivity) {
        this.mActivity = mActivity;
        initGoogleSignIn();
    }

    private void initGoogleSignIn() {
        // [START config_sign_in]
        // Configure Google Sign In
        GoogleSignInOptions gso = new GoogleSignInOptions.Builder(GoogleSignInOptions.DEFAULT_SIGN_IN)
                .requestIdToken(mActivity.getString(R.string.default_web_client_id))
                .requestEmail()
                .build();
        // [END config_sign_in]

        mGoogleApiClient = new GoogleApiClient.Builder(mActivity)
                .enableAutoManage(mActivity /* FragmentActivity */, this /* OnConnectionFailedListener */)
                .addApi(Auth.GOOGLE_SIGN_IN_API, gso)
                .addConnectionCallbacks(this)
                .build();
    }

    @Override
    public void onConnectionFailed(@NonNull ConnectionResult connectionResult) {
        // An unresolvable error has occurred and Google APIs (including Sign-In) will not
        // be available.
Log.d(TAG, "onConnectionFailed:" + connectionResult); Toast.makeText(mActivity, "Google Play Services error.", Toast.LENGTH_SHORT).show(); } public void getGoogleAccountDetails(GoogleSignInResult result) { // Google Sign In was successful, authenticate with FireBase GoogleSignInAccount account = result.getSignInAccount(); // You are now logged into Google } public void signOut() { if (mGoogleApiClient.isConnected()) { // Google sign out Auth.GoogleSignInApi.signOut(mGoogleApiClient).setResultCallback( new ResultCallback<Status>() { @Override public void onResult(@NonNull Status status) { isLoggingOut = false; } }); } else { isLoggingOut = true; } } public GoogleApiClient getGoogleClient() { return mGoogleApiClient; } @Override public void onConnected(@Nullable Bundle bundle) { Log.w(TAG, "onConnected"); if (isLoggingOut) { signOut(); } } @Override public void onConnectionSuspended(int i) { Log.w(TAG, "onConnectionSuspended"); } } Add below code to your OnActivityResult in Activity le: // [START onactivityresult] @Override public void onActivityResult(int requestCode, int resultCode, Intent data) { super.onActivityResult(requestCode, resultCode, data); GoalKicker.com Android Notes for Professionals 1124 // Result returned from launching the Intent from GoogleSignInApi.getSignInIntent(...); if (requestCode == GoogleSignInHelper.RC_SIGN_IN) { GoogleSignInResult result = Auth.GoogleSignInApi.getSignInResultFromIntent(data); if (result.isSuccess()) { googleSignInHelper.getGoogleAccountDetails(result); } else { // Google Sign In failed, update UI appropriately // [START_EXCLUDE] Log.d(TAG, "signInWith Google failed"); // [END_EXCLUDE] } } } // [END onactivityresult] // [START signin] public void signIn() { Intent signInIntent = Auth.GoogleSignInApi.getSignInIntent(googleSignInHelper.getGoogleClient()); startActivityForResult(signInIntent, GoogleSignInHelper.RC_SIGN_IN); } // [END signin] GoalKicker.com Android Notes for Professionals 1125 Chapter 237: Google signin integration on android This topic is based on How to integrate google sign-in, On android apps Section 237.1: Integration of google Auth in your project. (Get a conguration le) First get the Conguration File for Sign-in from Open link below [https://developers.google.com/identity/sign-in/android/start-integrating][1] click on get A conguration le Enter App name And package name and click on choose and congure services provide SHA1 Enable google SIGNIN and generate conguration les Download the conguration le and place the le in app/ folder of your project 1. Add the dependency to your project-level build.gradle: classpath 'com.google.gms:google-services:3.0.0' 2. Add the plugin to your app-level build.gradle:(bottom) apply plugin: 'com.google.gms.google-services' 3. add this dependency to your app gradle le dependencies { compile 'com.google.android.gms:play-services-auth:9.8.0' } Section 237.2: Code Implementation Google SignIn In your sign-in activity's onCreate method, congure Google Sign-In to request the user data required by your app. GoogleSignInOptions gso = new GoogleSignInOptions.Builder(GoogleSignInOptions.DEFAULT_SIGN_IN) .requestEmail() .build(); create a GoogleApiClient object with access to the Google Sign-In API and the options you specied. mGoogleApiClient = new GoogleApiClient.Builder(this) .enableAutoManage(this /* FragmentActivity */, this /* OnConnectionFailedListener */) .addApi(Auth.GOOGLE_SIGN_IN_API, gso) .build(); Now When User click on Google signin button call this Function. 
private void signIn() {
    Intent signInIntent = Auth.GoogleSignInApi.getSignInIntent(mGoogleApiClient);
    startActivityForResult(signInIntent, RC_SIGN_IN);
}

Implement onActivityResult to get the response:

@Override
public void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);

    // Result returned from launching the Intent from GoogleSignInApi.getSignInIntent(...);
    if (requestCode == RC_SIGN_IN) {
        GoogleSignInResult result = Auth.GoogleSignInApi.getSignInResultFromIntent(data);
        handleSignInResult(result);
    }
}

Last step: handle the result and get the user data:

private void handleSignInResult(GoogleSignInResult result) {
    Log.d(TAG, "handleSignInResult:" + result.isSuccess());
    if (result.isSuccess()) {
        // Signed in successfully, show authenticated UI.
        GoogleSignInAccount acct = result.getSignInAccount();
        mStatusTextView.setText(getString(R.string.signed_in_fmt, acct.getDisplayName()));
        updateUI(true);
    } else {
        // Signed out, show unauthenticated UI.
        updateUI(false);
    }
}

Chapter 238: Google Awareness APIs

Section 238.1: Get changes for location within a certain range using Fence API

If you want to detect when your user enters a specific location, you can create a fence for that location with the radius you want and be notified when the user enters or leaves it.

// Your own action filter, like the ones used in the Manifest
private static final String FENCE_RECEIVER_ACTION = BuildConfig.APPLICATION_ID + "FENCE_RECEIVER_ACTION";
private static final String FENCE_KEY = "locationFenceKey";

private FenceReceiver mFenceReceiver;
private PendingIntent mPendingIntent;

// Make sure to initialize your client as described in the Remarks section
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    // etc

    // The 0 is a standard Activity request code that can be changed for your needs
    mPendingIntent = PendingIntent.getBroadcast(this, 0, new Intent(FENCE_RECEIVER_ACTION), 0);
    mFenceReceiver = new FenceReceiver(); // instantiate the receiver before registering it
    registerReceiver(mFenceReceiver, new IntentFilter(FENCE_RECEIVER_ACTION));

    // Create the fence
    AwarenessFence fence = LocationFence.entering(48.136334, 11.581660, 25);

    // Register the fence to receive callbacks.
    Awareness.FenceApi.updateFences(client, new FenceUpdateRequest.Builder()
            .addFence(FENCE_KEY, fence, mPendingIntent)
            .build())
            .setResultCallback(new ResultCallback<Status>() {
                @Override
                public void onResult(@NonNull Status status) {
                    if (status.isSuccess()) {
                        Log.i(FENCE_KEY, "Successfully registered.");
                    } else {
                        Log.e(FENCE_KEY, "Could not be registered: " + status);
                    }
                }
            });
}

Now create a BroadcastReceiver to receive updates on the user's state:

public class FenceReceiver extends BroadcastReceiver {

    private static final String TAG = "FenceReceiver";

    @Override
    public void onReceive(Context context, Intent intent) {
        // Get the fence state
        FenceState fenceState = FenceState.extract(intent);

        switch (fenceState.getCurrentState()) {
            case FenceState.TRUE:
                Log.i(TAG, "User is in location");
                break;
            case FenceState.FALSE:
                Log.i(TAG, "User is not in location");
                break;
            case FenceState.UNKNOWN:
                Log.i(TAG, "User is doing something unknown");
                break;
        }
    }
}

Section 238.2: Get current location using Snapshot API

// Remember to initialize your client as described in the Remarks section
Awareness.SnapshotApi.getLocation(client)
        .setResultCallback(new ResultCallback<LocationResult>() {
            @Override
            public void onResult(@NonNull LocationResult locationResult) {
                Location location = locationResult.getLocation();
                Log.i(getClass().getSimpleName(), "Coordinates: " + location.getLatitude() + ","
                        + location.getLongitude() + ", radius: " + location.getAccuracy());
            }
        });

Section 238.3: Get changes in user activity with Fence API

If you want to detect when your user starts or finishes an activity such as walking, running, or any other activity of the DetectedActivityFence class, you can create a fence for the activity that you want to detect, and get notified when your user starts/finishes this activity. By using a BroadcastReceiver, you will get an Intent with data that contains the activity:

// Your own action filter, like the ones used in the Manifest.
private static final String FENCE_RECEIVER_ACTION = BuildConfig.APPLICATION_ID + "FENCE_RECEIVER_ACTION";
private static final String FENCE_KEY = "walkingFenceKey";

private FenceReceiver mFenceReceiver;
private PendingIntent mPendingIntent;

// Make sure to initialize your client as described in the Remarks section.
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    // etc.

    // The 0 is a standard Activity request code that can be changed to your needs.
    mPendingIntent = PendingIntent.getBroadcast(this, 0, new Intent(FENCE_RECEIVER_ACTION), 0);
    mFenceReceiver = new FenceReceiver(); // instantiate the receiver before registering it
    registerReceiver(mFenceReceiver, new IntentFilter(FENCE_RECEIVER_ACTION));

    // Create the fence.
    AwarenessFence fence = DetectedActivityFence.during(DetectedActivityFence.WALKING);

    // Register the fence to receive callbacks.
    Awareness.FenceApi.updateFences(client, new FenceUpdateRequest.Builder()
            .addFence(FENCE_KEY, fence, mPendingIntent)
            .build())
            .setResultCallback(new ResultCallback<Status>() {
                @Override
                public void onResult(@NonNull Status status) {
                    if (status.isSuccess()) {
                        Log.i(FENCE_KEY, "Successfully registered.");
                    } else {
                        Log.e(FENCE_KEY, "Could not be registered: " + status);
                    }
                }
            });
}

Now you can receive the intent with a BroadcastReceiver to get callbacks when the user changes activity:

public class FenceReceiver extends BroadcastReceiver {

    private static final String TAG = "FenceReceiver";

    @Override
    public void onReceive(Context context, Intent intent) {
        // Get the fence state
        FenceState fenceState = FenceState.extract(intent);

        switch (fenceState.getCurrentState()) {
            case FenceState.TRUE:
                Log.i(TAG, "User is walking");
                break;
            case FenceState.FALSE:
                Log.i(TAG, "User is not walking");
                break;
            case FenceState.UNKNOWN:
                Log.i(TAG, "User is doing something unknown");
                break;
        }
    }
}

Section 238.4: Get current user activity using Snapshot API

For one-time, non-constant requests for a user's physical activity, use the Snapshot API:

// Remember to initialize your client as described in the Remarks section
Awareness.SnapshotApi.getDetectedActivity(client)
        .setResultCallback(new ResultCallback<DetectedActivityResult>() {
            @Override
            public void onResult(@NonNull DetectedActivityResult detectedActivityResult) {
                if (!detectedActivityResult.getStatus().isSuccess()) {
                    Log.e(getClass().getSimpleName(), "Could not get the current activity.");
                    return;
                }
                ActivityRecognitionResult result = detectedActivityResult
                        .getActivityRecognitionResult();
                DetectedActivity probableActivity = result.getMostProbableActivity();
                Log.i(getClass().getSimpleName(), "Activity received : " + probableActivity.toString());
            }
        });

Section 238.5: Get headphone state with Snapshot API

// Remember to initialize your client as described in the Remarks section
Awareness.SnapshotApi.getHeadphoneState(client)
        .setResultCallback(new ResultCallback<HeadphoneStateResult>() {
            @Override
            public void onResult(@NonNull HeadphoneStateResult headphoneStateResult) {
                Log.i(TAG, "Headphones plugged in: "
                        + (headphoneStateResult.getHeadphoneState().getState()
                                == HeadphoneState.PLUGGED_IN));
            }
        });

Section 238.6: Get nearby places using Snapshot API

// Remember to initialize your client as described in the Remarks section
Awareness.SnapshotApi.getPlaces(client)
        .setResultCallback(new ResultCallback<PlacesResult>() {
            @Override
            public void onResult(@NonNull PlacesResult placesResult) {
                List<PlaceLikelihood> likelihoodList = placesResult.getPlaceLikelihoods();
                if (likelihoodList == null || likelihoodList.isEmpty()) {
                    Log.e(getClass().getSimpleName(), "No likely places");
                }
            }
        });

As for getting the data in those places, here are some options:

Place place = placeLikelihood.getPlace();
float likelihood = placeLikelihood.getLikelihood();
CharSequence placeName = place.getName();
CharSequence placeAddress = place.getAddress();
LatLng placeCoords = place.getLatLng();
Locale locale = place.getLocale();

Section 238.7: Get current weather using Snapshot API

// Remember to initialize your client as described in the Remarks section
Awareness.SnapshotApi.getWeather(client)
        .setResultCallback(new ResultCallback<WeatherResult>() {
            @Override
            public void onResult(@NonNull WeatherResult weatherResult) {
                Weather weather =
                        weatherResult.getWeather();
                if (weather == null) {
                    Log.e(getClass().getSimpleName(), "No weather received");
                } else {
                    Log.i(getClass().getSimpleName(), "Temperature is "
                            + weather.getTemperature(Weather.CELSIUS)
                            + ", feels like " + weather.getFeelsLikeTemperature(Weather.CELSIUS)
                            + ", humidity is " + weather.getHumidity());
                }
            }
        });

Chapter 239: Google Maps API v2 for Android

Parameter | Details
GoogleMap | the GoogleMap is an object that is received on an onMapReady() event
MarkerOptions | MarkerOptions is the builder class of a Marker, and is used to add one marker to a map

Section 239.1: Custom Google Map Styles

Map Style

Google Maps come with a set of different styles to be applied, using this code:

// Sets the map type to be "hybrid"
map.setMapType(GoogleMap.MAP_TYPE_HYBRID);

The different map styles are:

Normal

map.setMapType(GoogleMap.MAP_TYPE_NORMAL);

Typical road map. Roads, some man-made features, and important natural features such as rivers are shown. Road and feature labels are also visible.

Hybrid

map.setMapType(GoogleMap.MAP_TYPE_HYBRID);

Satellite photograph data with road maps added. Road and feature labels are also visible.

Satellite

map.setMapType(GoogleMap.MAP_TYPE_SATELLITE);

Satellite photograph data. Road and feature labels are not visible.

Terrain

map.setMapType(GoogleMap.MAP_TYPE_TERRAIN);

Topographic data. The map includes colors, contour lines and labels, and perspective shading. Some roads and labels are also visible.

None

map.setMapType(GoogleMap.MAP_TYPE_NONE);

No tiles. The map will be rendered as an empty grid with no tiles loaded.

OTHER STYLE OPTIONS

Indoor Maps

At high zoom levels, the map will show floor plans for indoor spaces. These are called indoor maps, and they are displayed only for the 'normal' and 'satellite' map types. Indoor maps are enabled or disabled like this:

googleMap.setIndoorEnabled(true);
googleMap.setIndoorEnabled(false);

We can also add custom styles to maps. In the onMapReady method, add the following code snippet:

mMap = googleMap;
try {
    // Customise the styling of the base map using a JSON object defined
    // in a raw resource file.
    boolean success = mMap.setMapStyle(
            MapStyleOptions.loadRawResourceStyle(
                    MapsActivity.this, R.raw.style_json));

    if (!success) {
        Log.e(TAG, "Style parsing failed.");
    }
} catch (Resources.NotFoundException e) {
    Log.e(TAG, "Can't find style.", e);
}

Under the res folder, create a folder named raw and add the style's JSON file.
Sample style.json file:

[
  { "featureType": "all", "elementType": "geometry", "stylers": [ { "color": "#242f3e" } ] },
  { "featureType": "all", "elementType": "labels.text.stroke", "stylers": [ { "lightness": -80 } ] },
  { "featureType": "administrative", "elementType": "labels.text.fill", "stylers": [ { "color": "#746855" } ] },
  { "featureType": "administrative.locality", "elementType": "labels.text.fill", "stylers": [ { "color": "#d59563" } ] },
  { "featureType": "poi", "elementType": "labels.text.fill", "stylers": [ { "color": "#d59563" } ] },
  { "featureType": "poi.park", "elementType": "geometry", "stylers": [ { "color": "#263c3f" } ] },
  { "featureType": "poi.park", "elementType": "labels.text.fill", "stylers": [ { "color": "#6b9a76" } ] },
  { "featureType": "road", "elementType": "geometry.fill", "stylers": [ { "color": "#2b3544" } ] },
  { "featureType": "road", "elementType": "labels.text.fill", "stylers": [ { "color": "#9ca5b3" } ] },
  { "featureType": "road.arterial", "elementType": "geometry.fill", "stylers": [ { "color": "#38414e" } ] },
  { "featureType": "road.arterial", "elementType": "geometry.stroke", "stylers": [ { "color": "#212a37" } ] },
  { "featureType": "road.highway", "elementType": "geometry.fill", "stylers": [ { "color": "#746855" } ] },
  { "featureType": "road.highway", "elementType": "geometry.stroke", "stylers": [ { "color": "#1f2835" } ] },
  { "featureType": "road.highway", "elementType": "labels.text.fill", "stylers": [ { "color": "#f3d19c" } ] },
  { "featureType": "road.local", "elementType": "geometry.fill", "stylers": [ { "color": "#38414e" } ] },
  { "featureType": "road.local", "elementType": "geometry.stroke", "stylers": [ { "color": "#212a37" } ] },
  { "featureType": "transit", "elementType": "geometry", "stylers": [ { "color": "#2f3948" } ] },
  { "featureType": "transit.station", "elementType": "labels.text.fill", "stylers": [ { "color": "#d59563" } ] },
  { "featureType": "water", "elementType": "geometry", "stylers": [ { "color": "#17263c" } ] },
  { "featureType": "water", "elementType": "labels.text.fill", "stylers": [ { "color": "#515c6d" } ] },
  { "featureType": "water", "elementType": "labels.text.stroke", "stylers": [ { "lightness": -20 } ] }
]

Such style JSON files can be generated with Google's map style wizard.

Section 239.2: Default Google Map Activity

This Activity code will provide basic functionality for including a Google Map using a SupportMapFragment. The Google Maps V2 API includes an all-new way to load maps. Activities now have to implement the OnMapReadyCallback interface, which comes with an onMapReady() method override that is executed every time we run SupportMapFragment.getMapAsync(OnMapReadyCallback) and the call completes successfully. Maps use Markers, Polygons and Polylines to show interactive information to the user.
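For instance, once the map reference has been delivered to onMapReady(), a simple route overlay could be drawn with a PolylineOptions builder. This is only a minimal sketch, not part of the original example; the two coordinates are arbitrary placeholders:

mMap.addPolyline(new PolylineOptions()
        .add(new LatLng(-34, 151), new LatLng(-33.5, 151.2)) // placeholder coordinates
        .width(8f)
        .color(Color.RED)); // android.graphics.Color

The basic activity that obtains the map reference is shown below.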
MapsActivity.java:

public class MapsActivity extends AppCompatActivity implements OnMapReadyCallback {

    private GoogleMap mMap;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_maps);

        SupportMapFragment mapFragment = (SupportMapFragment) getSupportFragmentManager()
                .findFragmentById(R.id.map);
        mapFragment.getMapAsync(this);
    }

    @Override
    public void onMapReady(GoogleMap googleMap) {
        mMap = googleMap;

        // Add a marker in Sydney, Australia, and move the camera.
        LatLng sydney = new LatLng(-34, 151);
        mMap.addMarker(new MarkerOptions().position(sydney).title("Marker in Sydney"));
        mMap.moveCamera(CameraUpdateFactory.newLatLng(sydney));
    }
}

Notice that the code above inflates a layout, which has a SupportMapFragment nested inside the container layout, defined with an ID of R.id.map. The layout file is shown below:

activity_maps.xml:

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="vertical"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <fragment xmlns:android="http://schemas.android.com/apk/res/android"
        xmlns:tools="http://schemas.android.com/tools"
        xmlns:map="http://schemas.android.com/apk/res-auto"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:id="@+id/map"
        tools:context="com.example.app.MapsActivity"
        android:name="com.google.android.gms.maps.SupportMapFragment"/>

</LinearLayout>

Section 239.3: Show Current Location in a Google Map

Here is a full Activity class that places a Marker at the current location, and also moves the camera to the current position. There are a few things going on in sequence here:

Check Location permission
Once Location permission is granted, call setMyLocationEnabled(), build the GoogleApiClient, and connect it
Once the GoogleApiClient is connected, request location updates

public class MapLocationActivity extends AppCompatActivity
        implements OnMapReadyCallback,
        GoogleApiClient.ConnectionCallbacks,
        GoogleApiClient.OnConnectionFailedListener,
        LocationListener {

    GoogleMap mGoogleMap;
    SupportMapFragment mapFrag;
    LocationRequest mLocationRequest;
    GoogleApiClient mGoogleApiClient;
    Location mLastLocation;
    Marker mCurrLocationMarker;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        getSupportActionBar().setTitle("Map Location Activity");

        mapFrag = (SupportMapFragment) getSupportFragmentManager().findFragmentById(R.id.map);
        mapFrag.getMapAsync(this);
    }

    @Override
    public void onPause() {
        super.onPause();

        //stop location updates when Activity is no longer active
        if (mGoogleApiClient != null) {
            LocationServices.FusedLocationApi.removeLocationUpdates(mGoogleApiClient, this);
        }
    }

    @Override
    public void onMapReady(GoogleMap googleMap) {
        mGoogleMap = googleMap;
        mGoogleMap.setMapType(GoogleMap.MAP_TYPE_HYBRID);

        //Initialize Google Play Services
        if (android.os.Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) {
            if (ContextCompat.checkSelfPermission(this,
                    Manifest.permission.ACCESS_FINE_LOCATION)
                    == PackageManager.PERMISSION_GRANTED) {
                //Location Permission already granted
                buildGoogleApiClient();
                mGoogleMap.setMyLocationEnabled(true);
            } else {
                //Request Location Permission
                checkLocationPermission();
            }
        } else {
            buildGoogleApiClient();
            mGoogleMap.setMyLocationEnabled(true);
        }
    }
    protected synchronized void buildGoogleApiClient() {
        mGoogleApiClient = new GoogleApiClient.Builder(this)
                .addConnectionCallbacks(this)
                .addOnConnectionFailedListener(this)
                .addApi(LocationServices.API)
                .build();
        mGoogleApiClient.connect();
    }

    @Override
    public void onConnected(Bundle bundle) {
        mLocationRequest = new LocationRequest();
        mLocationRequest.setInterval(1000);
        mLocationRequest.setFastestInterval(1000);
        mLocationRequest.setPriority(LocationRequest.PRIORITY_BALANCED_POWER_ACCURACY);
        if (ContextCompat.checkSelfPermission(this,
                Manifest.permission.ACCESS_FINE_LOCATION)
                == PackageManager.PERMISSION_GRANTED) {
            LocationServices.FusedLocationApi.requestLocationUpdates(mGoogleApiClient,
                    mLocationRequest, this);
        }
    }

    @Override
    public void onConnectionSuspended(int i) {}

    @Override
    public void onConnectionFailed(ConnectionResult connectionResult) {}

    @Override
    public void onLocationChanged(Location location) {
        mLastLocation = location;
        if (mCurrLocationMarker != null) {
            mCurrLocationMarker.remove();
        }

        //Place current location marker
        LatLng latLng = new LatLng(location.getLatitude(), location.getLongitude());
        MarkerOptions markerOptions = new MarkerOptions();
        markerOptions.position(latLng);
        markerOptions.title("Current Position");
        markerOptions.icon(BitmapDescriptorFactory.defaultMarker(BitmapDescriptorFactory.HUE_MAGENTA));
        mCurrLocationMarker = mGoogleMap.addMarker(markerOptions);

        //move map camera
        mGoogleMap.moveCamera(CameraUpdateFactory.newLatLng(latLng));
        mGoogleMap.animateCamera(CameraUpdateFactory.zoomTo(11));

        //stop location updates
        if (mGoogleApiClient != null) {
            LocationServices.FusedLocationApi.removeLocationUpdates(mGoogleApiClient, this);
        }
    }

    public static final int MY_PERMISSIONS_REQUEST_LOCATION = 99;

    private void checkLocationPermission() {
        if (ContextCompat.checkSelfPermission(this,
                Manifest.permission.ACCESS_FINE_LOCATION)
                != PackageManager.PERMISSION_GRANTED) {

            // Should we show an explanation?
            if (ActivityCompat.shouldShowRequestPermissionRationale(this,
                    Manifest.permission.ACCESS_FINE_LOCATION)) {

                // Show an explanation to the user *asynchronously* -- don't block
                // this thread waiting for the user's response! After the user
                // sees the explanation, try again to request the permission.
                new AlertDialog.Builder(this)
                        .setTitle("Location Permission Needed")
                        .setMessage("This app needs the Location permission, please accept to use location functionality")
                        .setPositiveButton("OK", new DialogInterface.OnClickListener() {
                            @Override
                            public void onClick(DialogInterface dialogInterface, int i) {
                                //Prompt the user once explanation has been shown
                                ActivityCompat.requestPermissions(MapLocationActivity.this,
                                        new String[]{Manifest.permission.ACCESS_FINE_LOCATION},
                                        MY_PERMISSIONS_REQUEST_LOCATION);
                            }
                        })
                        .create()
                        .show();
            } else {
                // No explanation needed, we can request the permission.
                ActivityCompat.requestPermissions(this,
                        new String[]{Manifest.permission.ACCESS_FINE_LOCATION},
                        MY_PERMISSIONS_REQUEST_LOCATION);
            }
        }
    }

    @Override
    public void onRequestPermissionsResult(int requestCode,
                                           String permissions[], int[] grantResults) {
        switch (requestCode) {
            case MY_PERMISSIONS_REQUEST_LOCATION: {
                // If request is cancelled, the result arrays are empty.
                if (grantResults.length > 0
                        && grantResults[0] == PackageManager.PERMISSION_GRANTED) {

                    // permission was granted, yay! Do the
                    // location-related task you need to do.
                    if (ContextCompat.checkSelfPermission(this,
                            Manifest.permission.ACCESS_FINE_LOCATION)
                            == PackageManager.PERMISSION_GRANTED) {

                        if (mGoogleApiClient == null) {
                            buildGoogleApiClient();
                        }
                        mGoogleMap.setMyLocationEnabled(true);
                    }

                } else {
                    // permission denied, boo! Disable the
                    // functionality that depends on this permission.
                    Toast.makeText(this, "permission denied", Toast.LENGTH_LONG).show();
                }
                return;
            }

            // other 'case' lines to check for other
            // permissions this app might request
        }
    }
}

activity_main.xml:

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="vertical"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <fragment xmlns:android="http://schemas.android.com/apk/res/android"
        xmlns:tools="http://schemas.android.com/tools"
        xmlns:map="http://schemas.android.com/apk/res-auto"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:id="@+id/map"
        tools:context="com.example.app.MapLocationActivity"
        android:name="com.google.android.gms.maps.SupportMapFragment"/>

</LinearLayout>

Result:

An explanation is shown, if needed, on Marshmallow and Nougat using an AlertDialog (this case happens when the user had previously denied a permission request, or had granted the permission and then later revoked it in the settings).

The user is prompted for the Location permission on Marshmallow and Nougat by calling ActivityCompat.requestPermissions().

The camera moves to the current location, and a Marker is placed when the Location permission is granted.

Section 239.4: Change Offset

By changing the mappoint x and y values as you need, you can change the offset position of the Google Map; by default it will be in the center of the map view. Call the method below wherever you want to change it. It is best used inside your onLocationChanged, like changeOffsetCenter(location.getLatitude(), location.getLongitude());

public void changeOffsetCenter(double latitude, double longitude) {
    Point mappoint = mGoogleMap.getProjection().toScreenLocation(new LatLng(latitude, longitude));
    // Change these values as you need. A value is hard-coded here; if you want,
    // you can derive it from a ratio, e.g. using DisplayMetrics.
    mappoint.set(mappoint.x, mappoint.y - 100);
    mGoogleMap.animateCamera(CameraUpdateFactory.newLatLng(
            mGoogleMap.getProjection().fromScreenLocation(mappoint)));
}

Section 239.5: MapView: embedding a GoogleMap in an existing layout

It is possible to treat a GoogleMap as an Android view if we make use of the provided MapView class. Its usage is very similar to MapFragment.
In your layout use MapView as follows:

<com.google.android.gms.maps.MapView
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:map="http://schemas.android.com/apk/res-auto"
    android:id="@+id/map"
    android:layout_width="match_parent"
    android:layout_height="match_parent" />

<!-- Optional attributes that can be added to the MapView element above:
    map:mapType="0"                    Specifies a change to the initial map type
    map:zOrderOnTop="true"             Control whether the map view's surface is placed on top of its window
    map:useViewLifecycle="true"        When using a MapFragment, this flag specifies whether the lifecycle of the map should be tied to the fragment's view or the fragment itself
    map:uiCompass="true"               Enables or disables the compass
    map:uiRotateGestures="true"        Sets the preference for whether rotate gestures should be enabled or disabled
    map:uiScrollGestures="true"        Sets the preference for whether scroll gestures should be enabled or disabled
    map:uiTiltGestures="true"          Sets the preference for whether tilt gestures should be enabled or disabled
    map:uiZoomGestures="true"          Sets the preference for whether zoom gestures should be enabled or disabled
    map:uiZoomControls="true"          Enables or disables the zoom controls
    map:liteMode="true"                Specifies whether the map should be created in lite mode
    map:uiMapToolbar="true"            Specifies whether the mapToolbar should be enabled
    map:ambientEnabled="true"          Specifies whether ambient-mode styling should be enabled
    map:cameraMinZoomPreference="0.0"  Specifies a preferred lower bound for camera zoom
    map:cameraMaxZoomPreference="1.0"  Specifies a preferred upper bound for camera zoom
-->

Your activity needs to implement the OnMapReadyCallback interface in order to work:

/**
 * This shows how to create a simple activity with a raw MapView and add a marker to it. This
 * requires forwarding all the important lifecycle methods onto MapView.
 */
public class RawMapViewDemoActivity extends AppCompatActivity implements OnMapReadyCallback {

    private MapView mMapView;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.raw_mapview_demo);

        mMapView = (MapView) findViewById(R.id.map);
        mMapView.onCreate(savedInstanceState);
        mMapView.getMapAsync(this);
    }

    @Override
    protected void onResume() {
        super.onResume();
        mMapView.onResume();
    }

    @Override
    public void onMapReady(GoogleMap map) {
        map.addMarker(new MarkerOptions().position(new LatLng(0, 0)).title("Marker"));
    }

    @Override
    protected void onPause() {
        mMapView.onPause();
        super.onPause();
    }

    @Override
    protected void onDestroy() {
        mMapView.onDestroy();
        super.onDestroy();
    }

    @Override
    public void onLowMemory() {
        super.onLowMemory();
        mMapView.onLowMemory();
    }

    @Override
    public void onSaveInstanceState(Bundle outState) {
        super.onSaveInstanceState(outState);
        mMapView.onSaveInstanceState(outState);
    }
}

Section 239.6: Get debug SHA1 fingerprint

1. Open Android Studio
2. Open your project
3. Click on Gradle (from the right-side panel, you will see the Gradle bar)
4. Click on Refresh (click on Refresh from the Gradle bar; you will see the list of Gradle scripts of your project)
5. Click on your project (your project name from the list (root))
6. Click on Tasks
7. Click on android
8. Double-click on signingReport (you will get SHA1 and MD5 in the Run bar)

Section 239.7: Adding markers to a map

To add markers to a Google Map, for example from an ArrayList of MyLocation objects, we can do it this way.
The MyLocation holder class:

public class MyLocation {
    LatLng latLng;
    String title;
    String snippet;
}

Here is a method that would take a list of MyLocation objects and place a Marker for each one:

private void LocationsLoaded(List<MyLocation> locations) {
    for (MyLocation myLoc : locations) {
        mMap.addMarker(new MarkerOptions()
                .position(myLoc.latLng)
                .title(myLoc.title)
                .snippet(myLoc.snippet)
                .icon(BitmapDescriptorFactory.defaultMarker(BitmapDescriptorFactory.HUE_MAGENTA)));
    }
}

Note: For the purpose of this example, mMap is a class member variable of the Activity, where we've assigned it to the map reference received in the onMapReady() override.

Section 239.8: UISettings

Using UiSettings, the appearance of the Google Map can be modified. Here is an example of some common settings:

mGoogleMap.setMapType(GoogleMap.MAP_TYPE_HYBRID);
mGoogleMap.getUiSettings().setMapToolbarEnabled(true);
mGoogleMap.getUiSettings().setZoomControlsEnabled(true);
mGoogleMap.getUiSettings().setCompassEnabled(true);

Result: a hybrid map with the map toolbar, zoom controls, and compass enabled.

Section 239.9: InfoWindow Click Listener

Here is an example of how to define a different action for each Marker's InfoWindow click event.

Use a HashMap in which the marker ID is the key and the value is the corresponding action it should take when the InfoWindow is clicked. Then, use an OnInfoWindowClickListener to handle the event of a user clicking the InfoWindow, and use the HashMap to determine which action to take. In this simple example we will open up a different Activity based on which Marker's InfoWindow was clicked.

Declare the HashMap as an instance variable of the Activity or Fragment:

//Declare HashMap to store mapping of marker to Activity
HashMap<String, String> markerMap = new HashMap<String, String>();

Then, each time you add a Marker, make an entry in the HashMap with the Marker ID and the action it should take when its InfoWindow is clicked. For example, adding two Markers and defining an action to take for each:

Marker markerOne = googleMap.addMarker(new MarkerOptions().position(latLng1)
        .title("Marker One")
        .snippet("This is Marker One"));
String idOne = markerOne.getId();
markerMap.put(idOne, "action_one");

Marker markerTwo = googleMap.addMarker(new MarkerOptions().position(latLng2)
        .title("Marker Two")
        .snippet("This is Marker Two"));
String idTwo = markerTwo.getId();
markerMap.put(idTwo, "action_two");

In the InfoWindow click listener, get the action from the HashMap, and open up the corresponding Activity based on the action of the Marker:

mGoogleMap.setOnInfoWindowClickListener(new GoogleMap.OnInfoWindowClickListener() {
    @Override
    public void onInfoWindowClick(Marker marker) {
        String actionId = markerMap.get(marker.getId());

        if (actionId.equals("action_one")) {
            Intent i = new Intent(MainActivity.this, ActivityOne.class);
            startActivity(i);
        } else if (actionId.equals("action_two")) {
            Intent i = new Intent(MainActivity.this, ActivityTwo.class);
            startActivity(i);
        }
    }
});

Note: If the code is in a Fragment, replace MainActivity.this with getActivity().

Section 239.10: Obtaining the SHA-1 fingerprint of your certificate keystore file

In order to obtain a Google Maps API key for your certificate, you must provide the API console with the SHA-1 fingerprint of your debug/release keystore. You can obtain the fingerprint by using the JDK's keytool program as described in the docs.
Another approach is to obtain the fingerprint programmatically, by running this snippet with your app signed with the debug/release certificate and printing the hash to the log:

PackageInfo info;
try {
    info = getPackageManager().getPackageInfo("com.package.name", PackageManager.GET_SIGNATURES);
    for (Signature signature : info.signatures) {
        MessageDigest md;
        md = MessageDigest.getInstance("SHA");
        md.update(signature.toByteArray());
        String hash = new String(Base64.encode(md.digest(), 0));
        Log.e("hash", hash);
    }
} catch (NameNotFoundException e1) {
    Log.e("name not found", e1.toString());
} catch (NoSuchAlgorithmException e) {
    Log.e("no such an algorithm", e.toString());
} catch (Exception e) {
    Log.e("exception", e.toString());
}

Section 239.11: Do not launch Google Maps when the map is clicked (lite mode)

When a Google Map is displayed in lite mode, clicking on the map will open the Google Maps application. To disable this functionality, you must call setClickable(false) on the MapView, e.g.:

final MapView mapView = (MapView) view.findViewById(R.id.map);
mapView.setClickable(false);

Chapter 240: Google Drive API

Google Drive is a file hosting service created by Google. It provides file storage and allows users to upload files to the cloud and also share them with other people. Using the Google Drive API, we can synchronize files between a computer or mobile device and the Google Drive cloud.

Section 240.1: Integrate Google Drive in Android

Create a New Project on Google Developer Console

To integrate an Android application with Google Drive, create the credentials of the project in the Google Developers Console. So, we need to create a project on the Google Developer Console. To do so, follow these steps:

Go to the Google Developer Console for Android. Fill in your project name in the input field and click on the Create button to create a new project.

We need to create credentials to access the API. So, click on the Create credentials button.

Now, a popup window will open. Click on the API key option in the list to create an API key.

We need an API key to call Google APIs for Android. So, click on the Android key to identify your Android project.

Next, we need to add the package name of the Android project and the SHA-1 fingerprint in the input fields to create the API key. We need to generate the SHA-1 fingerprint, so open your terminal and run the keytool utility to get it. While running the keytool utility, you need to provide the keystore password. The default development keytool password is android.

keytool -exportcert -alias androiddebugkey -keystore ~/.android/debug.keystore -list -v

Now, add the package name and SHA-1 fingerprint in the input fields on the credentials page. Finally, click on the Create button to create the API key. This will create an API key for Android. We will use this API key to integrate the Android application with Google Drive.

Enable Google Drive API

We need to enable the Google Drive API to access files stored on Google Drive from the Android application. To enable the Google Drive API, follow the steps below:

Go to your Google Developer Console dashboard and click on "Enable APIs and get credentials like keys"; you will then see the popular Google APIs.
Click on the Drive API link to open the overview page of the Google Drive API.

Click on the Enable button to enable the Google Drive API. It allows client access to Google Drive.

Add Internet Permission

The app needs Internet access to reach Google Drive files. Use the following code to set up Internet permissions in the AndroidManifest.xml file:

<uses-permission android:name="android.permission.INTERNET" />

Add Google Play Services

We will use the Google Play services API, which includes the Google Drive Android API. So, we need to set up the Google Play services SDK in the Android application. Open your build.gradle (app module) file and add the Google Play services SDK as a dependency:

dependencies {
    ....
    compile 'com.google.android.gms:play-services:<latest_version>'
    ....
}

Add API key in Manifest file

To use the Google API in an Android application, we need to add the API key and the version of Google Play services in the AndroidManifest.xml file. Add the correct meta-data tags inside the <application> tag of the AndroidManifest.xml file.

Connect and Authorize the Google Drive Android API

We need to authenticate and connect the Google Drive Android API with the Android application. Authorization of the Google Drive Android API is handled by the GoogleApiClient. We will use the GoogleApiClient within the onResume() method:

/**
 * Called when the activity will start interacting with the user.
 * At this point your activity is at the top of the activity stack,
 * with user input going to it.
 */
@Override
protected void onResume() {
    super.onResume();
    if (mGoogleApiClient == null) {
        /**
         * Create the API client and bind it to an instance variable.
         * We use this instance as the callback for connection and connection failures.
         * Since no account name is passed, the user is prompted to choose.
         */
        mGoogleApiClient = new GoogleApiClient.Builder(this)
                .addApi(Drive.API)
                .addScope(Drive.SCOPE_FILE)
                .addConnectionCallbacks(this)
                .addOnConnectionFailedListener(this)
                .build();
    }
    mGoogleApiClient.connect();
}

Disconnect Google Drive Android API

When the activity stops, we disconnect the Google Drive Android API connection with the Android application by calling the disconnect() method inside the activity's onStop() method:

@Override
protected void onStop() {
    super.onStop();
    if (mGoogleApiClient != null) {
        // disconnect Google Android Drive API connection.
        mGoogleApiClient.disconnect();
    }
}

Implement Connection Callbacks and Connection Failed Listener

We will implement the connection callbacks and the connection failed listener of the Google API client in the MainActivity.java file to know the status of the Google API client connection. These listeners provide the onConnected(), onConnectionFailed(), and onConnectionSuspended() methods to handle connection issues between the app and Drive.

If the user has authorized the application, the onConnected() method is invoked. If the user has not authorized the application, the onConnectionFailed() method is invoked and a dialog is displayed telling the user that the app is not authorized to access Google Drive. In case the connection is suspended, the onConnectionSuspended() method is called.

You need to implement ConnectionCallbacks and OnConnectionFailedListener in your activity. Use the following code in your Java file:

@Override
public void onConnectionFailed(ConnectionResult result) {
    // Called whenever the API client fails to connect.
Log.i(TAG, "GoogleApiClient connection failed:" + result.toString()); GoalKicker.com Android Notes for Professionals 1164 if (!result.hasResolution()) { // show the localized error dialog. GoogleApiAvailability.getInstance().getErrorDialog(this, result.getErrorCode(), 0).show(); return; } /** * The failure has a resolution. Resolve it. * Called typically when the app is not yet authorized, and an * dialog is displayed to the user. */ authorization try { result.startResolutionForResult(this, REQUEST_CODE_RESOLUTION); } catch (SendIntentException e) { Log.e(TAG, "Exception while starting resolution activity", e); } } /** * It invoked when Google API client connected * @param connectionHint */ @Override public void onConnected(Bundle connectionHint) { Toast.makeText(getApplicationContext(), "Connected", Toast.LENGTH_LONG).show(); } /** * It invoked when connection suspended * @param cause */ @Override public void onConnectionSuspended(int cause) { Log.i(TAG, "GoogleApiClient connection suspended"); } Section 240.2: Create a File on Google Drive We will add a le on Google Drive. We will use the createFile() method of a Drive object to create le programmatically on Google Drive. In this example we are adding a new text le in the users root folder. When a le is added, we need to specify the initial set of metadata, le contents, and the parent folder. We need to create a CreateMyFile() callback method and within this method, use the Drive object to retrieve a DriveContents resource. Then we pass the API client to the Drive object and call the driveContentsCallback callback method to handle result of DriveContents. A DriveContents resource contains a temporary copy of the le's binary stream which is only available to the application. public void CreateMyFile(){ fileOperation = true; GoalKicker.com Android Notes for Professionals 1165 // Create new contents resource. Drive.DriveApi.newDriveContents(mGoogleApiClient) .setResultCallback(driveContentsCallback); } Result Handler of DriveContents Handling the response requires to check if the call was successful or not. If the call was successful, we can retrieve the DriveContents resource. We will create a result handler of DriveContents. Within this method, we call the CreateFileOnGoogleDrive() method and pass the result of DriveContentsResult: /** * This is the Result result handler of Drive contents. * This callback method calls the CreateFileOnGoogleDrive() method. */ final ResultCallback<DriveContentsResult> driveContentsCallback = new ResultCallback<DriveContentsResult>() { @Override public void onResult(DriveContentsResult result) { if (result.getStatus().isSuccess()) { if (fileOperation == true){ CreateFileOnGoogleDrive(result); } } } }; Create File Programmatically To create les, we need to use a MetadataChangeSet object. By using this object, we set the title (le name) and le type. Also, we must use the createFile() method of the DriveFolder class and pass the Google client API, the MetaDataChangeSet object, and the driveContents to create a le. We call the result handler callback to handle the result of the created le. We use the following code to create a new text le in the user's root folder: /** * Create a file in the root folder using a MetadataChangeSet object. * @param result */ public void CreateFileOnGoogleDrive(DriveContentsResult result){ final DriveContents driveContents = result.getDriveContents(); // Perform I/O off the UI thread. new Thread() { @Override public void run() { // Write content to DriveContents. 
OutputStream outputStream = driveContents.getOutputStream(); Writer writer = new OutputStreamWriter(outputStream); try { writer.write("Hello Christlin!"); writer.close(); } catch (IOException e) { Log.e(TAG, e.getMessage()); } MetadataChangeSet changeSet = new MetadataChangeSet.Builder() GoalKicker.com Android Notes for Professionals 1166 .setTitle("My First Drive File") .setMimeType("text/plain") .setStarred(true).build(); // Create a file in the root folder. Drive.DriveApi.getRootFolder(mGoogleApiClient) .createFile(mGoogleApiClient, changeSet, driveContents) setResultCallback(fileCallback); } }.start(); } Handle result of Created File The following code will create a callback method to handle the result of the created le: /** * Handle result of Created file */ final private ResultCallback<DriveFolder.DriveFileResult> fileCallback = new ResultCallback<DriveFolder.DriveFileResult>() { @Override public void onResult(DriveFolder.DriveFileResult result) { if (result.getStatus().isSuccess()) { Toast.makeText(getApplicationContext(), "file created: "+ result.getDriveFile().getDriveId(), Toast.LENGTH_LONG).show(); } return; } }; GoalKicker.com Android Notes for Professionals 1167 Chapter 241: Displaying Google Ads Section 241.1: Adding Interstitial Ad Interstitial ads are full-screen ads that cover the interface of their host app. They're typically displayed at natural transition points in the ow of an app, such as between activities or during the pause between levels in a game. Make sure you have necessary permissions in your Manifest le: <uses-permission android:name="android.permission.INTERNET" /> <uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" /> 1. Go to your AdMob account. 2. Click on Monetize tab. 3. Select or Create the app and choose the platform. 4. Select Interstitial and give an ad unit name. 5. Once the ad unit is created, you can notice the Ad unit ID on the dashboard. For example: ca-apppub-00000000000/000000000 6. Add dependencies compile 'com.google.firebase:firebase-ads:10.2.1' This one should be on the bottom. apply plugin: 'com.google.gms.google-services' Add your Ad unit ID to your strings.xml le <string name="interstitial_full_screen">ca-app-pub-00000000/00000000</string> Add CongChanges and meta-data to your manifest: <activity android:name="com.google.android.gms.ads.AdActivity" android:configChanges="keyboard|keyboardHidden|orientation|screenLayout|uiMode|screenSize|smallestS creenSize" android:theme="@android:style/Theme.Translucent" /> and <meta-data android:name="com.google.android.gms.version" android:value="@integer/google_play_services_version" /> Activity: public class AdActivity extends AppCompatActivity { private String TAG = AdActivity.class.getSimpleName(); InterstitialAd mInterstitialAd; GoalKicker.com Android Notes for Professionals 1168 @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_second); mInterstitialAd = new InterstitialAd(this); // set the ad unit ID mInterstitialAd.setAdUnitId(getString(R.string.interstitial_full_screen)); AdRequest adRequest = new AdRequest.Builder() .build(); // Load ads into Interstitial Ads mInterstitialAd.loadAd(adRequest); mInterstitialAd.setAdListener(new AdListener() { public void onAdLoaded() { showInterstitial(); } }); } private void showInterstitial() { if (mInterstitialAd.isLoaded()) { mInterstitialAd.show(); } } } This AdActivity will show a full screen ad now. 
Section 241.2: Basic Ad Setup

You'll need to add the following to your dependencies:

compile 'com.google.firebase:firebase-ads:10.2.1'

and then put this in the same file:

apply plugin: 'com.google.gms.google-services'

Next you'll need to add the relevant information to your strings.xml:

<string name="banner_ad_unit_id">ca-app-pub-####/####</string>

Next place an AdView wherever you want it and style it just like any other view:

<com.google.android.gms.ads.AdView
    android:id="@+id/adView"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:layout_centerHorizontal="true"
    android:layout_alignParentBottom="true"
    ads:adSize="BANNER"
    ads:adUnitId="@string/banner_ad_unit_id">
</com.google.android.gms.ads.AdView>

And last but not least, throw this in your onCreate:

MobileAds.initialize(getApplicationContext(), "ca-app-pub-YOUR_ID");
AdView mAdView = (AdView) findViewById(R.id.adView);
AdRequest adRequest = new AdRequest.Builder().build();
mAdView.loadAd(adRequest);

If you copied exactly, you should now have a small banner ad. Simply place more AdViews wherever you need them for more.

Chapter 242: AdMob

Parameter: ads:adUnitId="@string/main_screen_ad"
Details: The ID of your ad. Get your ID from the AdMob site. "While it's not a requirement, storing your ad unit ID values in a resource file is a good practice. As your app grows and your ad publishing needs mature, it may be necessary to change the ID values. If you keep them in a resource file, you never have to search through your code looking for them."[1]

Section 242.1: Implementing

Note: This example requires a valid AdMob account and valid AdMob ad code.

Build.gradle on app level

Change to the latest version if one exists:

compile 'com.google.firebase:firebase-ads:10.2.1'

Manifest

Internet permission is required to access the ad data. Note that this permission does not have to be requested (using API 23+) as it is a normal permission and not dangerous:

<uses-permission android:name="android.permission.INTERNET" />

XML

The following XML example shows a banner ad:

<com.google.android.gms.ads.AdView
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:id="@+id/adView"
    ads:adSize="BANNER"
    ads:adUnitId="@string/main_screen_ad" />

For the code of other types, refer to the Google AdMob Help.

Java

The following code is for the integration of banner ads. Note that other ad types may require different integration:

// Alternative for faster initialization.
// MobileAds.initialize(getApplicationContext(), "AD_UNIT_ID");
AdView mAdView = (AdView) findViewById(R.id.adView);
// Add your device test ID if you are doing testing before releasing.
// The device test ID can be found in the admob stacktrace.
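// For example (hypothetical test device ID -- use the one printed in your own
// logcat output instead):
// AdRequest adRequest = new AdRequest.Builder()
//         .addTestDevice("0123456789ABCDEF0123456789ABCDEF")
//         .build();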
AdRequest adRequest = new AdRequest.Builder().build();
mAdView.loadAd(adRequest);

Add the AdView life cycle methods in the onResume(), onPause(), and onDestroy() methods of your activity:

@Override
public void onPause() {
    if (mAdView != null) {
        mAdView.pause();
    }
    super.onPause();
}

@Override
public void onResume() {
    super.onResume();
    if (mAdView != null) {
        mAdView.resume();
    }
}

@Override
public void onDestroy() {
    if (mAdView != null) {
        mAdView.destroy();
    }
    super.onDestroy();
}

Chapter 243: Google Play Store

Section 243.1: Open Google Play Store Listing for your app

The following code snippet shows how to open the Google Play Store listing of your app in a safe way. Usually you want to use it when asking the user to leave a review for your app.

private void openPlayStore() {
    String packageName = getPackageName();
    Intent playStoreIntent = new Intent(Intent.ACTION_VIEW,
            Uri.parse("market://details?id=" + packageName));
    setFlags(playStoreIntent);
    try {
        startActivity(playStoreIntent);
    } catch (Exception e) {
        Intent webIntent = new Intent(Intent.ACTION_VIEW,
                Uri.parse("https://play.google.com/store/apps/details?id=" + packageName));
        setFlags(webIntent);
        startActivity(webIntent);
    }
}

@SuppressWarnings("deprecation")
private void setFlags(Intent intent) {
    intent.addFlags(Intent.FLAG_ACTIVITY_NO_HISTORY);
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.LOLLIPOP)
        intent.addFlags(Intent.FLAG_ACTIVITY_NEW_DOCUMENT);
    else
        intent.addFlags(Intent.FLAG_ACTIVITY_CLEAR_WHEN_TASK_RESET);
}

Note: The code opens the Google Play Store if the app is installed. Otherwise it will just open the web browser.

Section 243.2: Open Google Play Store with the list of all applications from your publisher account

You can add a "Browse Our Other Apps" button in your app, listing all your (publisher) applications in the Google Play Store app.

String urlApp = "market://search?q=pub:Google+Inc.";
String urlWeb = "http://play.google.com/store/search?q=pub:Google+Inc.";
try {
    Intent i = new Intent(Intent.ACTION_VIEW, Uri.parse(urlApp));
    setFlags(i);
    startActivity(i);
} catch (android.content.ActivityNotFoundException anfe) {
    Intent i = new Intent(Intent.ACTION_VIEW, Uri.parse(urlWeb));
    setFlags(i);
    startActivity(i);
}

@SuppressWarnings("deprecation")
public void setFlags(Intent i) {
    i.addFlags(Intent.FLAG_ACTIVITY_NO_HISTORY);
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.LOLLIPOP) {
        i.addFlags(Intent.FLAG_ACTIVITY_NEW_DOCUMENT);
    } else {
        i.addFlags(Intent.FLAG_ACTIVITY_CLEAR_WHEN_TASK_RESET);
    }
}

Chapter 244: Sign your Android App for Release

Android requires that all APKs be signed for release.

Section 244.1: Sign your App

1. In the menu bar, click Build > Generate Signed APK.
2. Select the module you would like to release from the drop down and click Next.
3. To create a new keystore, click Create new. Now fill in the required information and press OK in New Key Store.
4. On the Generate Signed APK Wizard the fields are already populated for you if you just created a new keystore; otherwise fill them in and click Next.
5. On the next window, select a destination for the signed APK, select the build type and click Finish.

Section 244.2: Configure the build.gradle with signing configuration

You can define the signing configuration to sign the APK in the build.gradle file.
You can define:

storeFile: the keystore file
storePassword: the keystore password
keyAlias: a key alias name
keyPassword: a key alias password

You have to define the signingConfigs block to create a signing configuration:

android {
    signingConfigs {
        myConfig {
            storeFile file("myFile.keystore")
            storePassword "xxxx"
            keyAlias "xxxx"
            keyPassword "xxxx"
        }
    }
    //....
}

Then you can assign it to one or more build types:

android {
    buildTypes {
        release {
            signingConfig signingConfigs.myConfig
        }
    }
}

Chapter 245: TensorFlow

TensorFlow was designed with mobile and embedded platforms in mind. We have sample code and build support you can try now for these platforms:

Android
iOS
Raspberry Pi

Section 245.1: How to use

Install Bazel from here. Bazel is the primary build system for TensorFlow. Now, edit the WORKSPACE file; we can find the WORKSPACE file in the root directory of the TensorFlow repository that we cloned earlier.

# Uncomment and update the paths in these entries to build the Android demo.
#android_sdk_repository(
#    name = "androidsdk",
#    api_level = 23,
#    build_tools_version = "25.0.1",
#    # Replace with path to Android SDK on your system
#    path = "<PATH_TO_SDK>",
#)
#
#android_ndk_repository(
#    name="androidndk",
#    path="<PATH_TO_NDK>",
#    api_level=14)

Like below, with our SDK and NDK paths:

android_sdk_repository(
    name = "androidsdk",
    api_level = 23,
    build_tools_version = "25.0.1",
    # Replace with path to Android SDK on your system
    path = "/Users/amitshekhar/Library/Android/sdk/",
)

android_ndk_repository(
    name="androidndk",
    path="/Users/amitshekhar/Downloads/android-ndk-r13/",
    api_level=14)

Chapter 246: Android Vk Sdk

Section 246.1: Initialization and login

1. Create a new application here: create application
2. Choose standalone application and confirm app creation via SMS.
3. Fill in "Package name for Android" as your current package name. You can get your package name from inside the Android manifest file, at the very beginning.
4. Get your certificate fingerprint by executing this command in your shell/cmd:

keytool -exportcert -alias androiddebugkey -keystore path-to-debug-or-production-keystore -list -v

You can also get this fingerprint via the SDK itself:

String[] fingerprints = VKUtil.getCertificateFingerprint(this, this.getPackageName());
Log.d("MainActivity", fingerprints[0]);

5. Add the received fingerprint into the "Signing certificate fingerprint for Android:" field in the VK app settings (where you entered your package name).
6. Then add this to your gradle file:

compile 'com.vk:androidsdk:1.6.5'

7. Initialize the SDK on startup using the following method. The best way is to call it in the Application's onCreate method.

private static final int VK_ID = your_vk_id;
public static final String VK_API_VERSION = "5.52"; //current version

@Override
public void onCreate() {
    super.onCreate();
    VKSdk.customInitialize(this, VK_ID, VK_API_VERSION);
}

This is the best way to initialize VKSdk. Don't use the method where VK_ID is placed inside strings.xml, because the API will not work correctly after it.

8. The final step is to log in using VKSdk.
public static final String[] VK_SCOPES = new String[]{
        VKScope.FRIENDS,
        VKScope.MESSAGES,
        VKScope.NOTIFICATIONS,
        VKScope.OFFLINE,
        VKScope.STATUS,
        VKScope.STATS,
        VKScope.PHOTOS
};

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    someButtonForLogin.setOnClickListener(new View.OnClickListener() {
        @Override
        public void onClick(View v) {
            // inside the listener "this" refers to the listener itself,
            // so pass the enclosing activity instead
            VKSdk.login(SocialNetworkChooseActivity.this, VK_SCOPES);
        }
    });
}

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    VKSdk.onActivityResult(requestCode, resultCode, data, new VKCallback<VKAccessToken>() {
        @Override
        public void onResult(VKAccessToken res) {
            String token = res.accessToken; // getting our token here
        }

        @Override
        public void onError(VKError error) {
            Toast.makeText(SocialNetworkChooseActivity.this, "User didn't pass Authorization",
                    Toast.LENGTH_SHORT).show();
        }
    });
}

Chapter 247: Project SDK versions

Parameter: SDK Version
Details: The SDK version for each field is the Android release's SDK API level integer. For example, Froyo (Android 2.2) corresponds to API level 8. These integers are also defined in Build.VERSION_CODES.

An Android application needs to run on all kinds of devices. Each device may have a different version of Android running on it. Now, each Android version might not support all the features that your app requires, and so while building an app, you need to keep the minimum and maximum Android version in mind.

Section 247.1: Defining project SDK versions

In the build.gradle file of your main module (app), define your minimum and target version numbers:

android {
    //the version of the sdk source used to compile your project
    compileSdkVersion 23

    defaultConfig {
        //the minimum sdk version required by a device to run your app
        minSdkVersion 19
        //you normally don't need to set a max sdk limit, so that your app can support future versions of android without updating the app
        //maxSdkVersion 23
        //the latest sdk version of android on which you are targeting (building and testing) your app; it should be the same as compileSdkVersion
        targetSdkVersion 23
    }
}

Chapter 248: Facebook SDK for Android

Parameter - Details
TAG - A String used while logging
FacebookSignInHelper - A static reference to the Facebook helper
CallbackManager - A callback for Facebook operations
Activity - A context
PERMISSION_LOGIN - An array that contains all permissions required from Facebook to log in
loginCallback - A callback for Facebook login

Section 248.1: How to add Facebook Login in Android

Add the below dependency to your build.gradle:

// Facebook login
compile 'com.facebook.android:facebook-android-sdk:4.21.1'

Add the below helper class to your utility package:

/**
 * Created by Andy
 * A utility for Facebook
 */
public class FacebookSignInHelper {
    private static final String TAG = FacebookSignInHelper.class.getSimpleName();
    private static FacebookSignInHelper facebookSignInHelper = null;
    private CallbackManager callbackManager;
    private Activity mActivity;
    private static final Collection<String> PERMISSION_LOGIN = (Collection<String>) Arrays.asList("public_profile", "user_friends", "email");
    private FacebookCallback<LoginResult> loginCallback;

    public static FacebookSignInHelper newInstance(Activity context) {
        if (facebookSignInHelper == null)
            facebookSignInHelper = new FacebookSignInHelper(context);
        return facebookSignInHelper;
    }

    public FacebookSignInHelper(Activity mActivity) {
        try {
            this.mActivity = mActivity;
            // Initialize the SDK before executing any other operations,
            // especially, if you're using Facebook UI elements.
            FacebookSdk.sdkInitialize(this.mActivity);
            callbackManager = CallbackManager.Factory.create();
            loginCallback = new FacebookCallback<LoginResult>() {
                @Override
                public void onSuccess(LoginResult loginResult) {
                    // You are logged into Facebook
                }

                @Override
                public void onCancel() {
                    Log.d(TAG, "Facebook: Cancelled by user");
                }

                @Override
                public void onError(FacebookException error) {
                    Log.d(TAG, "FacebookException: " + error.getMessage());
                }
            };
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    /**
     * To login user on facebook without the default Facebook button
     */
    public void loginUser() {
        try {
            LoginManager.getInstance().registerCallback(callbackManager, loginCallback);
            LoginManager.getInstance().logInWithReadPermissions(this.mActivity, PERMISSION_LOGIN);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    /**
     * To log out user from facebook
     */
    public void signOut() {
        // Facebook sign out
        LoginManager.getInstance().logOut();
    }

    public CallbackManager getCallbackManager() {
        return callbackManager;
    }

    public FacebookCallback<LoginResult> getLoginCallback() {
        return loginCallback;
    }

    /**
     * Attempts to log the debug key hash for facebook
     *
     * @param context : A reference to context
     * @return : A facebook debug key hash
     */
    public static String getKeyHash(Context context) {
        String keyHash = null;
        try {
            PackageInfo info = context.getPackageManager().getPackageInfo(
                    context.getPackageName(),
                    PackageManager.GET_SIGNATURES);
            for (Signature signature : info.signatures) {
                MessageDigest md = MessageDigest.getInstance("SHA");
                md.update(signature.toByteArray());
                keyHash = Base64.encodeToString(md.digest(), Base64.DEFAULT);
                Log.d(TAG, "KeyHash:" + keyHash);
            }
        } catch (PackageManager.NameNotFoundException e) {
            e.printStackTrace();
        } catch (NoSuchAlgorithmException e) {
            e.printStackTrace();
        } catch (Exception e) {
            e.printStackTrace();
        }
        return keyHash;
    }
}

Add the below code in your Activity:

FacebookSignInHelper facebookSignInHelper = FacebookSignInHelper.newInstance(LoginActivity.this);
facebookSignInHelper.loginUser();

Add the below code to your onActivityResult:

facebookSignInHelper.getCallbackManager().onActivityResult(requestCode, resultCode, data);

Section 248.2: Create your own custom button for Facebook login

Once you first add the Facebook login/signup, the button has Facebook's stock look.
Most of the time, it doesn't match the design specs of your app. Here's how you can customize it:

<FrameLayout
    android:layout_below="@+id/no_network_bar"
    android:id="@+id/FrameLayout1"
    android:layout_width="match_parent"
    android:layout_height="wrap_content">

    <com.facebook.login.widget.LoginButton
        android:id="@+id/login_button"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:visibility="gone" />

    <Button
        android:background="#3B5998"
        android:layout_width="match_parent"
        android:layout_height="60dp"
        android:id="@+id/fb"
        android:onClick="onClickFacebookButton"
        android:textAllCaps="false"
        android:text="Sign up with Facebook"
        android:textSize="22sp"
        android:textColor="#ffffff" />
</FrameLayout>

Just wrap the original com.facebook.login.widget.LoginButton in a FrameLayout and make its visibility gone. Next, add your custom button in the same FrameLayout. I've added some sample specs. You can always make your own drawable background for the Facebook button and set it as the background of the button.

The final thing we do is simply convert the click on my custom button to a click on the Facebook button:

//The original Facebook button
LoginButton loginButton = (LoginButton) findViewById(R.id.login_button);
//Our custom Facebook button
fb = (Button) findViewById(R.id.fb);

public void onClickFacebookButton(View view) {
    if (view == fb) {
        loginButton.performClick();
    }
}

Great! Now clicking the custom button triggers the Facebook login.

Section 248.3: A minimalistic guide to Facebook login/signup implementation

1. You have to set up the prerequisites.
2. Add the Facebook activity to the AndroidManifest.xml file:

<activity
    android:name="com.facebook.FacebookActivity"
    android:configChanges="keyboard|keyboardHidden|screenLayout|screenSize|orientation"
    android:theme="@android:style/Theme.Translucent.NoTitleBar"
    android:label="@string/app_name" />

3. Add the login button to your layout XML file:

<com.facebook.login.widget.LoginButton
    android:id="@+id/login_button"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content" />

4. Now you have the Facebook button. If the user clicks on it, the Facebook login dialog will come up on top of the app's screen. Here the user can fill in their credentials and press the Log In button. If the credentials are correct, the dialog grants the corresponding permissions and a callback is sent to your original activity containing the button. The following code shows how you can receive that callback:

loginButton.registerCallback(callbackManager, new FacebookCallback<LoginResult>() {
    @Override
    public void onSuccess(LoginResult loginResult) {
        // Completed without error. You might want to use the retrieved data here.
    }

    @Override
    public void onCancel() {
        // The user either cancelled the Facebook login process or didn't authorize the app.
    }

    @Override
    public void onError(FacebookException exception) {
        // The dialog was closed with an error. The exception will help you recognize what exactly went wrong.
    }
});

Section 248.4: Setting permissions to access data from the Facebook profile

If you want to retrieve the details of a user's Facebook profile, you need to set permissions for the same:

loginButton = (LoginButton) findViewById(R.id.login_button);
loginButton.setReadPermissions(Arrays.asList("email", "user_about_me"));

You can keep adding more permissions like friends-list, posts, photos etc. Just pick the right permission and add it to the above list.
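For instance, a sketch extending the list above with two more read permissions (permission names as documented by Facebook; adjust to what your app actually needs):

loginButton.setReadPermissions(Arrays.asList("email", "user_about_me", "user_friends", "user_photos"));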
Note: You don't need to set any explicit permissions for accessing the public profile (first name, last name, id, gender etc.).

Section 248.5: Logging out of Facebook

From Facebook SDK 4.0 onwards, this is how we log out:

com.facebook.login.LoginManager.getInstance().logOut();

For versions before 4.0, logging out is done by explicitly clearing the access token:

Session session = Session.getActiveSession();
session.closeAndClearTokenInformation();

Chapter 249: Thread

Section 249.1: Thread Example with its description

When an application is launched, its main thread is executed first. This main thread handles all of the UI concepts of the application. If we want to run a long task that does not need the UI, we use a thread to run that task in the background. Here is an example of a Thread, described below:

new Thread(new Runnable() {
    public void run() {
        for (int i = 1; i < 5; i++) {
            System.out.println(i);
        }
    }
}).start();

We create a thread by creating a Thread object whose run() method performs the thread's work; run() is invoked as a result of calling the start() method. We can also run multiple threads independently, which is known as multithreading.

Threads also have sleep functionality, by which the currently executing thread is put to sleep (temporarily ceases execution) for the specified amount of time. But sleep throws an InterruptedException, so we have to handle it using try/catch like this:

try { Thread.sleep(500); } catch (InterruptedException e) { System.out.println(e); }

Section 249.2: Updating the UI from a Background Thread

It is common to use a background Thread for doing network operations or long running tasks, and then update the UI with the results when needed. This poses a problem, as only the main thread can update the UI. The solution is to use the runOnUiThread() method, as it allows you to initiate code execution on the UI thread from a background Thread.

In this simple example, a Thread is started when the Activity is created, runs until the magic number of 42 is randomly generated, and then uses the runOnUiThread() method to update the UI once this condition is met.

public class MainActivity extends AppCompatActivity {
    TextView mTextView;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        mTextView = (TextView) findViewById(R.id.my_text_view);

        new Thread(new Runnable() {
            @Override
            public void run() {
                while (true) {
                    //do stuff....
                    Random r = new Random();
                    if (r.nextInt(100) == 42) {
                        break;
                    }
                }

                runOnUiThread(new Runnable() {
                    @Override
                    public void run() {
                        mTextView.setText("Ready Player One");
                    }
                });
            }
        }).start();
    }
}

Chapter 250: AsyncTask

Parameter - Details
Params - the type of the parameters sent to the task upon execution
Progress - the type of the progress units published during the background computation
Result - the type of the result of the background computation

Section 250.1: Basic Usage

In Android Activities and Services, most callbacks are run on the main thread. This makes it simple to update the UI, but running processor- or I/O-heavy tasks on the main thread can cause your UI to pause and become unresponsive (see the official documentation on what then happens).

You can remedy this by putting these heavier tasks on a background thread.
One way to do this is using an AsyncTask, which provides a framework to facilitate easy usage of a background Thread, and also to perform UI Thread tasks before, during, and after the background Thread has completed its work.

Methods that can be overridden when extending AsyncTask:

onPreExecute(): invoked on the UI thread before the task is executed
doInBackground(): invoked on the background thread immediately after onPreExecute() finishes executing
onProgressUpdate(): invoked on the UI thread after a call to publishProgress(Progress...)
onPostExecute(): invoked on the UI thread after the background computation finishes

Example

public class MyCustomAsyncTask extends AsyncTask<File, Void, String> {

    @Override
    protected void onPreExecute() {
        // This runs on the UI thread before the background thread executes.
        super.onPreExecute();
        // Do pre-thread tasks such as initializing variables.
        Log.v("myBackgroundTask", "Starting Background Task");
    }

    @Override
    protected String doInBackground(File... params) {
        // Disk-intensive work. This runs on a background thread.
        // Search through a file for the first line that contains "Hello", and return
        // that line.
        try (Scanner scanner = new Scanner(params[0])) {
            while (scanner.hasNextLine()) {
                final String line = scanner.nextLine();
                publishProgress(); // tell the UI thread we made progress
                if (line.contains("Hello")) {
                    return line;
                }
            }
            return null;
        }
    }

    @Override
    protected void onProgressUpdate(Void... p) {
        // Runs on the UI thread after publishProgress is invoked
        Log.v("myBackgroundTask", "Read another line!");
    }

    @Override
    protected void onPostExecute(String s) {
        // This runs on the UI thread after complete execution of the doInBackground() method.
        // This function receives the result (String s) returned from the doInBackground() method.
        // Update UI with the found string.
        TextView view = (TextView) findViewById(R.id.found_string);
        if (s != null) {
            view.setText(s);
        } else {
            view.setText("Match not found.");
        }
    }
}

Usage:

MyCustomAsyncTask asyncTask = new MyCustomAsyncTask();
// Run the task with a user supplied filename.
asyncTask.execute(userSuppliedFilename);

or simply:

new MyCustomAsyncTask().execute(userSuppliedFilename);

Note

When defining an AsyncTask we can pass three types between the < > brackets, defined as <Params, Progress, Result> (see the Parameters section).

In the previous example we've used the types <File, Void, String>:

AsyncTask<File, Void, String>
// Params has type File
// Progress has unused type
// Result has type String

Void is used when you want to mark a type as unused.

Note that you can't pass primitive types (i.e. int, float and the 6 others) as parameters. In such cases, you should pass their wrapper classes, e.g. Integer instead of int, or Float instead of float.

The AsyncTask and Activity life cycle

AsyncTasks don't follow Activity instances' life cycle. If you start an AsyncTask inside an Activity and you rotate the device, the Activity will be destroyed and a new instance will be created. But the AsyncTask will not die. It will go on living until it completes.

Solution: AsyncTaskLoader

One subclass of Loaders is the AsyncTaskLoader. This class performs the same function as the AsyncTask, but much better. It can handle Activity configuration changes more easily, and it behaves within the life cycles of Fragments and Activities. The nice thing is that the AsyncTaskLoader can be used in any situation that the AsyncTask is being used.
Anytime data needs to be loaded into memory for the Activity/Fragment to handle, the AsyncTaskLoader can do the job better.

Section 250.2: Pass Activity as WeakReference to avoid memory leaks

It is common for an AsyncTask to require a reference to the Activity that called it. If the AsyncTask is an inner class of the Activity, then you can reference it and any member variables/methods directly.

If, however, the AsyncTask is not an inner class of the Activity, you will need to pass an Activity reference to the AsyncTask. When you do this, one potential problem is that the AsyncTask will keep the reference to the Activity until the AsyncTask has completed its work on its background thread. If the Activity is finished or killed before the AsyncTask's background work is done, the AsyncTask will still hold its reference to the Activity, and therefore the Activity cannot be garbage collected. As a result, this will cause a memory leak.

In order to prevent this from happening, make use of a WeakReference in the AsyncTask instead of having a direct reference to the Activity.

Here is an example AsyncTask that utilizes a WeakReference:

private class MyAsyncTask extends AsyncTask<String, Void, Void> {

    private WeakReference<Activity> mActivity;

    public MyAsyncTask(Activity activity) {
        mActivity = new WeakReference<Activity>(activity);
    }

    @Override
    protected void onPreExecute() {
        final Activity activity = mActivity.get();
        if (activity != null) {
            ....
        }
    }

    @Override
    protected Void doInBackground(String... params) {
        //Do something
        String param1 = params[0];
        String param2 = params[1];
        return null;
    }

    @Override
    protected void onPostExecute(Void result) {
        final Activity activity = mActivity.get();
        if (activity != null) {
            activity.updateUI();
        }
    }
}

Calling the AsyncTask from an Activity:

new MyAsyncTask(this).execute("param1", "param2");

Calling the AsyncTask from a Fragment:

new MyAsyncTask(getActivity()).execute("param1", "param2");

Section 250.3: Download Image using AsyncTask in Android

This tutorial explains how to download an image using AsyncTask in Android. The example below downloads an image while showing a progress dialog during the download.

Understanding Android AsyncTask

AsyncTask enables you to implement multithreading without getting your hands dirty with threads. AsyncTask enables proper and easy use of the UI thread. It allows performing background operations and passing the results to the UI thread. If you are doing something isolated related to the UI, for example downloading data to present in a list, go ahead and use AsyncTask. AsyncTasks should ideally be used for short operations (a few seconds at the most).

An asynchronous task is defined by 3 generic types, called Params, Progress and Result, and 4 steps, called onPreExecute(), doInBackground(), onProgressUpdate() and onPostExecute(). In onPreExecute() you can define code which needs to be executed before background processing starts. doInBackground() contains the code which needs to be executed in the background; from doInBackground() we can send results to the event thread multiple times via the publishProgress() method, and to signal that background processing has completed we simply return the result.
The onProgressUpdate() method receives progress updates from the doInBackground() method, which are published via the publishProgress() method, and it can use these progress updates to update the event thread. The onPostExecute() method handles the results returned by the doInBackground() method.

The generic types used are:

Params, the type of the parameters sent to the task upon execution
Progress, the type of the progress units published during the background computation
Result, the type of the result of the background computation

If an async task does not use any types, it can be marked as Void type. A running async task can be cancelled by calling the cancel(boolean) method.

Downloading image using Android AsyncTask

your .xml layout

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent"
    android:orientation="vertical" >

    <Button
        android:id="@+id/downloadButton"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:text="Click Here to Download" />

    <ImageView
        android:id="@+id/imageView"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:contentDescription="Your image will appear here" />
</LinearLayout>

.java class

package com.javatechig.droid;

import java.io.InputStream;
import org.apache.http.HttpEntity;
import org.apache.http.HttpResponse;
import org.apache.http.HttpStatus;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.DefaultHttpClient;
import android.app.Activity;
import android.app.ProgressDialog;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.os.AsyncTask;
import android.os.Bundle;
import android.util.Log;
import android.view.View;
import android.view.View.OnClickListener;
import android.widget.Button;
import android.widget.ImageView;

public class ImageDownladerActivity extends Activity {

    private ImageView downloadedImg;
    private ProgressDialog simpleWaitDialog;
    private String downloadUrl = "http://www.9ori.com/store/media/images/8ab579a656.jpg";

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.asynch);
        Button imageDownloaderBtn = (Button) findViewById(R.id.downloadButton);

        downloadedImg = (ImageView) findViewById(R.id.imageView);

        imageDownloaderBtn.setOnClickListener(new OnClickListener() {
            @Override
            public void onClick(View v) {
                new ImageDownloader().execute(downloadUrl);
            }
        });
    }

    // the original snippet omitted the type parameters; doInBackground's
    // signature implies AsyncTask<String, Void, Bitmap>
    private class ImageDownloader extends AsyncTask<String, Void, Bitmap> {

        @Override
        protected Bitmap doInBackground(String... param) {
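            // Runs on a background (worker) thread -- do not touch UI elements here.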
            return downloadBitmap(param[0]);
        }

        @Override
        protected void onPreExecute() {
            Log.i("Async-Example", "onPreExecute Called");
            simpleWaitDialog = ProgressDialog.show(ImageDownladerActivity.this,
                    "Wait", "Downloading Image");
        }

        @Override
        protected void onPostExecute(Bitmap result) {
            Log.i("Async-Example", "onPostExecute Called");
            downloadedImg.setImageBitmap(result);
            simpleWaitDialog.dismiss();
        }

        private Bitmap downloadBitmap(String url) {
            // initialize the default HTTP client object
            final DefaultHttpClient client = new DefaultHttpClient();

            // forming a HttpGet request
            final HttpGet getRequest = new HttpGet(url);
            try {
                HttpResponse response = client.execute(getRequest);

                // check 200 OK for success
                final int statusCode = response.getStatusLine().getStatusCode();
                if (statusCode != HttpStatus.SC_OK) {
                    Log.w("ImageDownloader", "Error " + statusCode +
                            " while retrieving bitmap from " + url);
                    return null;
                }

                final HttpEntity entity = response.getEntity();
                if (entity != null) {
                    InputStream inputStream = null;
                    try {
                        // getting contents from the stream
                        inputStream = entity.getContent();
                        // decoding stream data back into an image Bitmap that android understands
                        final Bitmap bitmap = BitmapFactory.decodeStream(inputStream);
                        return bitmap;
                    } finally {
                        if (inputStream != null) {
                            inputStream.close();
                        }
                        entity.consumeContent();
                    }
                }
            } catch (Exception e) {
                // You could provide a more explicit error message for IOException
                getRequest.abort();
                Log.e("ImageDownloader", "Something went wrong while" +
                        " retrieving bitmap from " + url + e.toString());
            }
            return null;
        }
    }
}

Since there is currently no comment field for examples (or I haven't found it, or I don't have permission for it), here is a comment about this example: it is a good demonstration of what can be done with AsyncTask. However, it currently has problems with possible memory leaks and an app crash if there was a screen rotation shortly before the async task finished. For details see:

Pass Activity as WeakReference to avoid memory leaks
http://stackoverflow.com/documentation/android/117/asynctask/5377/possible-problems-with-inner-async-tasks
Avoid leaking Activities with AsyncTask

Section 250.4: Canceling AsyncTask

YourAsyncTask task = new YourAsyncTask();
task.execute();
task.cancel(true); // cancel(boolean): pass true to allow interrupting the task's thread

This doesn't stop your task if it was already in progress, it just sets the cancelled flag, which can be checked via the return value of isCancelled() (assuming your code is currently running), like this:

class YourAsyncTask extends AsyncTask<Void, Void, Void> {
    @Override
    protected Void doInBackground(Void... params) {
        while (!isCancelled()) {
            // ... doing long task stuff
            // Do something you need, upload part of a file, for example
            if (isCancelled()) {
                return null; // Task was detected as canceled
            }
            if (yourTaskCompleted) {
                return null;
            }
        }
        return null;
    }
}

Note

If an AsyncTask is canceled while doInBackground(Params... params) is still executing, then the method onPostExecute(Result result) will NOT be called after doInBackground(Params... params) returns. The AsyncTask will instead call onCancelled(Result result) to indicate that the task was cancelled during execution.

Section 250.5: AsyncTask: Serial Execution and Parallel Execution of Task

AsyncTask is an abstract class and does not inherit the Thread class. It has an abstract method doInBackground(Params... params), which is overridden to perform the task. This method is called from AsyncTask.call().
Executors are part of the java.util.concurrent package. Moreover, AsyncTask contains 2 Executors:

THREAD_POOL_EXECUTOR

It uses worker threads to execute the tasks in parallel.

public static final Executor THREAD_POOL_EXECUTOR = new ThreadPoolExecutor(CORE_POOL_SIZE,
        MAXIMUM_POOL_SIZE, KEEP_ALIVE, TimeUnit.SECONDS, sPoolWorkQueue, sThreadFactory);

SERIAL_EXECUTOR

It executes tasks serially, i.e. one by one.

private static class SerialExecutor implements Executor { }

Both Executors are static, hence only one THREAD_POOL_EXECUTOR and one SerialExecutor object exist, but you can create several AsyncTask objects. Therefore, if you try to do multiple background tasks with the default Executor (SerialExecutor), these tasks will be queued and executed serially. If you try to do multiple background tasks with THREAD_POOL_EXECUTOR, they will be executed in parallel.

Example:

public class MainActivity extends Activity {
    private Button bt;
    private int CountTask = 0;
    private static final String TAG = "AsyncTaskExample";

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        bt = (Button) findViewById(R.id.button);
        bt.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                BackgroundTask backgroundTask = new BackgroundTask();
                Integer data[] = { ++CountTask, null, null };

                // Task executed in thread pool ( 1 )
                backgroundTask.executeOnExecutor(AsyncTask.THREAD_POOL_EXECUTOR, data);

                // Task executed serially ( 2 )
                // Uncomment the below code and comment out the thread pool
                // executor code above, then check the output
                // backgroundTask.execute(data);

                Log.d(TAG, "Task = " + (int) CountTask + " Task Queued");
            }
        });
    }

    private class BackgroundTask extends AsyncTask<Integer, Integer, Integer> {
        int taskNumber;

        @Override
        protected Integer doInBackground(Integer... integers) {
            taskNumber = integers[0];

            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            Log.d(TAG, "Task = " + taskNumber + " Task Running in Background");
            publishProgress(taskNumber);
            return null;
        }

        @Override
        protected void onPreExecute() {
            super.onPreExecute();
        }

        @Override
        protected void onPostExecute(Integer aLong) {
            super.onPostExecute(aLong);
        }

        @Override
        protected void onProgressUpdate(Integer... values) {
            super.onProgressUpdate(values);
            Log.d(TAG, "Task = " + (int) values[0] + " Task Execution Completed");
        }
    }
}

Perform a click on the button several times to start tasks and watch the result.

Task executed in thread pool (1)

Each task takes 1000 ms to complete. At t=36s, tasks 2, 3 and 4 are queued and also start executing, because they run in parallel.
08-02 19:48:35.815: D/AsyncTaskExample(11693): Task = 1 Task Queued
08-02 19:48:35.815: D/AsyncTaskExample(11693): Task = 1 Task Running in Background
08-02 19:48:36.025: D/AsyncTaskExample(11693): Task = 2 Task Queued
08-02 19:48:36.025: D/AsyncTaskExample(11693): Task = 2 Task Running in Background
08-02 19:48:36.165: D/AsyncTaskExample(11693): Task = 3 Task Queued
08-02 19:48:36.165: D/AsyncTaskExample(11693): Task = 3 Task Running in Background
08-02 19:48:36.325: D/AsyncTaskExample(11693): Task = 4 Task Queued
08-02 19:48:36.325: D/AsyncTaskExample(11693): Task = 4 Task Running in Background
08-02 19:48:36.815: D/AsyncTaskExample(11693): Task = 1 Task Execution Completed
08-02 19:48:36.915: D/AsyncTaskExample(11693): Task = 5 Task Queued
08-02 19:48:36.915: D/AsyncTaskExample(11693): Task = 5 Task Running in Background
08-02 19:48:37.025: D/AsyncTaskExample(11693): Task = 2 Task Execution Completed
08-02 19:48:37.165: D/AsyncTaskExample(11693): Task = 3 Task Execution Completed

Task executed serially (2)

Comment out "Task executed in thread pool (1)" and uncomment "Task executed serially (2)".
Starting with HONEYCOMB, tasks are executed on a single thread to avoid common application errors caused by parallel execution. If you truly want parallel execution, you can invoke executeOnExecutor(java.util.concurrent.Executor, Object[]) with THREAD_POOL_EXECUTOR. SERIAL_EXECUTOR -> An Executor that executes tasks one at a time in serial order. THREAD_POOL_EXECUTOR -> An Executor that can be used to execute tasks in parallel. sample : Task task = new Task(); if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.HONEYCOMB) task.executeOnExecutor(AsyncTask.SERIAL_EXECUTOR, data); else task.execute(data); Section 250.7: Publishing progress Sometimes, we need to update the progress of the computation done by an AsyncTask. This progress could be represented by a string, an integer, etc. To do this, we have to use two functions. First, we need to set the onProgressUpdate function whose parameter type is the same as the second type parameter of our AsyncTask. GoalKicker.com Android Notes for Professionals 1198 class YourAsyncTask extends AsyncTask<URL, Integer, Long> { @Override protected void onProgressUpdate(Integer... args) { setProgressPercent(args[0]) } } Second, we have to use the function publishProgress necessarily on the doInBackground function, and that is all, the previous method will do all the job. protected Long doInBackground(URL... urls) { int count = urls.length; long totalSize = 0; for (int i = 0; i < count; i++) { totalSize += Downloader.downloadFile(urls[i]); publishProgress((int) ((i / (float) count) * 100)); } return totalSize; } GoalKicker.com Android Notes for Professionals 1199 Chapter 251: Testing UI with Espresso Section 251.1: Overall Espresso Setup Espresso : androidTestCompile 'com.android.support.test.espresso:espresso-core:2.2.2' androidTestCompile 'com.android.support.test:runner:0.5' ViewMatchers A collection of objects that implement Matcher<? super View> interface. You can pass one or more of these to the onView method to locate a view within the current view hierarchy. ViewActions A collection of ViewActions that can be passed to the ViewInteraction.perform() method (for example, click()). ViewAssertions A collection of ViewAssertions that can be passed the ViewInteraction.check() method. Most of the time, you will use the matches assertion, which uses a View matcher to assert the state of the currently selected view. Espresso cheat sheet by google GoalKicker.com Android Notes for Professionals 1200 GoalKicker.com Android Notes for Professionals 1201 Enter Text In EditText onView(withId(R.id.edt_name)).perform(typeText("XYZ")); closeSoftKeyboard(); Perform Click on View onView(withId(R.id.btn_id)).perform(click()); Checking View is Displayed onView(withId(R.id.edt_pan_number)).check(ViewAssertions.matches((isDisplayed()))); Section 251.2: Espresso simple UI test UI testing tools Two main tools that are nowadays mostly used for UI testing are Appium and Espresso. Appium Espresso blackbox test white/gray box testing what you see is what you can test can change inner workings of the app and prepare it for testing, e.g. 
save some data to database or sharedpreferences before running the test used mostly for integration end to end tests and entire user ows testing the functionality of a screen and/or ow can be abstracted so test written can be executed on iOS and Android Android Only well supported well supported supports parallel testing on multiple devices with selenium grid Not out of the box parallel testing, plugins like Spoon exists until true Google support comes out How to add espresso to the project dependencies { // Set this dependency so you can use Android JUnit Runner androidTestCompile 'com.android.support.test:runner:0.5' // Set this dependency to use JUnit 4 rules androidTestCompile 'com.android.support.test:rules:0.5' // Set this dependency to build and run Espresso tests androidTestCompile 'com.android.support.test.espresso:espresso-core:2.2.2' // Set this dependency to build and run UI Automator tests androidTestCompile 'com.android.support.test.uiautomator:uiautomator-v18:2.2.2' } NOTE If you are using latest support libraries, annotations etc. you need to exclude the older versions from espresso to avoid collisions: // there is a conflict with the test support library (see http://stackoverflow.com/questions/29857695) // so for now re exclude the support-annotations dependency from here to avoid clashes androidTestCompile('com.android.support.test.espresso:espresso-core:2.2.2') { exclude group: 'com.android.support', module: 'support-annotations' exclude module: 'support-annotations' exclude module: 'recyclerview-v7' exclude module: 'support-v4' exclude module: 'support-v7' } GoalKicker.com Android Notes for Professionals 1202 // exclude a couple of more modules here because of <http://stackoverflow.com/questions/29216327> and // more specifically of <https://code.google.com/p/android-test-kit/issues/detail?id=139> // otherwise you'll receive weird crashes on devices and dex exceptions on emulators // Espresso-contrib for DatePicker, RecyclerView, Drawer actions, Accessibility checks, CountingIdlingResource androidTestCompile('com.android.support.test.espresso:espresso-contrib:2.2.2') { exclude group: 'com.android.support', module: 'support-annotations' exclude group: 'com.android.support', module: 'design' exclude module: 'support-annotations' exclude module: 'recyclerview-v7' exclude module: 'support-v4' exclude module: 'support-v7' } //excluded specific packages due to https://code.google.com/p/android/issues/detail?id=183454 androidTestCompile('com.android.support.test.espresso:espresso-intents:2.2.2') { exclude group: 'com.android.support', module: 'support-annotations' exclude module: 'support-annotations' exclude module: 'recyclerview-v7' exclude module: 'support-v4' exclude module: 'support-v7' } androidTestCompile('com.android.support.test.espresso:espresso-web:2.2.2') { exclude group: 'com.android.support', module: 'support-annotations' exclude module: 'support-annotations' exclude module: 'recyclerview-v7' exclude module: 'support-v4' exclude module: 'support-v7' } androidTestCompile('com.android.support.test:runner:0.5') { exclude group: 'com.android.support', module: 'support-annotations' exclude module: 'support-annotations' exclude module: 'recyclerview-v7' exclude module: 'support-v4' exclude module: 'support-v7' } androidTestCompile('com.android.support.test:rules:0.5') { exclude group: 'com.android.support', module: 'support-annotations' exclude module: 'support-annotations' exclude module: 'recyclerview-v7' exclude module: 'support-v4' exclude module: 'support-v7' } 
Other than these imports it is necessary to add android instrumentation test runner to build.gradle android.defaultCong: testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner" Device setup For non aky test it is recommended to set following settings on your devices: Developer options / Disable Animations - reduces akyness of tests Developer options / Stay awake - if you have dedicated devices for tests this is usefull Developer options / Logger buer sizes - set to higher number if you run very big test suites on your phone Accessibility / Touch & Hold delay - long to avoid problems with tapping in espresso GoalKicker.com Android Notes for Professionals 1203 Quite a setup from the real world ha? Well now when thats out of the way lets take a look how to setup a small test Writing the test Lets assume that we have the following screen: The screen contains: text input eld - R.id.textEntry button which shows snackbar with typed text when clicked - R.id.shownSnackbarBtn snackbar which should contain user typed text - android.support.design.R.id.snackbar_text Now lets create a class that will test our ow: /** * Testing of the snackbar activity. **/ @RunWith(AndroidJUnit4.class) @LargeTest public class SnackbarActivityTest{ //espresso rule which tells which activity to start @Rule public final ActivityTestRule<SnackbarActivity> mActivityRule = new ActivityTestRule<>(SnackbarActivity.class, true, false); @Override public void tearDown() throws Exception { super.tearDown(); //just an example how tear down should cleanup after itself GoalKicker.com Android Notes for Professionals 1204 mDatabase.clear(); mSharedPrefs.clear(); } @Override public void setUp() throws Exception { super.setUp(); //setting up your application, for example if you need to have a user in shared //preferences to stay logged in you can do that for all tests in your setup User mUser = new User(); mUser.setToken("randomToken"); } /** *Test methods should always start with "testXYZ" and it is a good idea to *name them after the intent what you want to test **/ @Test public void testSnackbarIsShown() { //start our activity mActivityRule.launchActivity(null); //check is our text entry displayed and enter some text to it String textToType="new snackbar text"; onView(withId(R.id.textEntry)).check(matches(isDisplayed())); onView(withId(R.id.textEntry)).perform(typeText(textToType)); //click the button to show the snackbar onView(withId(R.id.shownSnackbarBtn)).perform(click()); //assert that a view with snackbar_id with text which we typed and is displayed onView(allOf(withId(android.support.design.R.id.snackbar_text), withText(textToType))) .check(matches(isDisplayed())); } } As you noticed there are 3-4 things that you might notice come often: onView(withXYZ) <-- viewMatchers with them you are able to nd elements on screen perform(click()) <-- viewActions, you can execute actions on elements you previously found check(matches(isDisplayed())) <-- viewAssertions, checks you want to do on screens you previously found All of these and many others can be found here: https://google.github.io/android-testing-support-library/docs/espresso/cheatsheet/index.html Thats it, now you can run the test either with right clicking on the class name / test and selecting Run test or with command: ./gradlew connectedFLAVORNAMEAndroidTest Section 251.3: Open Close DrawerLayout public final class DrawerLayoutTest { @Test public void Open_Close_Drawer_Layout() { onView(withId(R.id.drawer_layout)).perform(actionOpenDrawer()); 
onView(withId(R.id.drawer_layout)).perform(actionCloseDrawer()); } public static ViewAction actionOpenDrawer() { GoalKicker.com Android Notes for Professionals 1205 return new ViewAction() { @Override public Matcher<View> getConstraints() { return isAssignableFrom(DrawerLayout.class); } @Override public String getDescription() { return "open drawer"; } @Override public void perform(UiController uiController, View view) { ((DrawerLayout) view).openDrawer(GravityCompat.START); } }; } public static ViewAction actionCloseDrawer() { return new ViewAction() { @Override public Matcher<View> getConstraints() { return isAssignableFrom(DrawerLayout.class); } @Override public String getDescription() { return "close drawer"; } @Override public void perform(UiController uiController, View view) { ((DrawerLayout) view).closeDrawer(GravityCompat.START); } }; } } Section 251.4: Set Up Espresso In the build.gradle le of your Android app module add next dependencies: dependencies { // Android JUnit Runner androidTestCompile 'com.android.support.test:runner:0.5' // JUnit4 Rules androidTestCompile 'com.android.support.test:rules:0.5' // Espresso core androidTestCompile 'com.android.support.test.espresso:espresso-core:2.2.2' // Espresso-contrib for DatePicker, RecyclerView, Drawer actions, Accessibility checks, CountingIdlingResource androidTestCompile 'com.android.support.test.espresso:espresso-contrib:2.2.2' //UI Automator tests androidTestCompile 'com.android.support.test.uiautomator:uiautomator-v18:2.2.2' } Specify the AndroidJUnitRunner for the testInstrumentationRunner parameter in the build.gradle le. android { defaultConfig { testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner" } } GoalKicker.com Android Notes for Professionals 1206 Additionally, add this dependency for providing intent mocking support androidTestCompile 'com.android.support.test.espresso:espresso-intents:2.2.2' And add this one for webview testing support // Espresso-web for WebView support androidTestCompile 'com.android.support.test.espresso:espresso-web:2.2.2' Section 251.5: Performing an action on a view It is possible to perform ViewActions on a view using the perform method. The ViewActions class provides helper methods for the most common actions, like: ViewActions.click() ViewActions.typeText() ViewActions.clearText() For example, to click on the view: onView(...).perform(click()); onView(withId(R.id.button_simple)).perform(click()); You can execute more than one action with one perform call: onView(...).perform(typeText("Hello"), click()); If the view you are working with is located inside a ScrollView (vertical or horizontal), consider preceding actions that require the view to be displayed (like click() and typeText()) with scrollTo(). This ensures that the view is displayed before proceeding to the other action: onView(...).perform(scrollTo(), click()); Section 251.6: Finding a view with onView With the ViewMatchers you can nd view in the current view hierarchy. To nd a view, use the onView() method with a view matcher which selects the correct view. The onView() methods return an object of type ViewInteraction. For example, nding a view by its R.id is as simple as: onView(withId(R.id.my_view)) Finding a view with a text: onView(withText("Hello World")) Section 251.7: Create Espresso Test Class Place next java class in src/androidTest/java and run it. 
public class UITest { GoalKicker.com Android Notes for Professionals 1207 @Test public void Simple_Test() { onView(withId(R.id.my_view)) .perform(click()) .check(matches(isDisplayed())); } // withId(R.id.my_view) is a ViewMatcher // click() is a ViewAction // matches(isDisplayed()) is a ViewAssertion } Section 251.8: Up Navigation @Test public void testUpNavigation() { intending(hasComponent(ParentActivity.class.getName())).respondWith(new Instrumentation.ActivityResult(0, null)); onView(withContentDescription("Navigate up")).perform(click()); intended(hasComponent(ParentActivity.class.getName())); } Note that this is a workaround and will collide with other Views that have the same content description. Section 251.9: Group a collection of test classes in a test suite You can organize the execution of your instrumented unit tests dening a Suite. /** * Runs all unit tests. */ @RunWith(Suite.class) @Suite.SuiteClasses({MyTest1.class , MyTest2.class, MyTest3.class}) public class AndroidTestSuite {} Then in AndroidStudio you can run with gradle or setting a new conguration like: GoalKicker.com Android Notes for Professionals 1208 Test suites can be nested. Section 251.10: Espresso custom matchers Espresso by default has many matchers that help you nd views that you need to do some checks or interactions with them. Most important ones can be found in the following cheat sheet: https://google.github.io/android-testing-support-library/docs/espresso/cheatsheet/ Some examples of matchers are: withId(R.id.ID_of_object_you_are_looking_for); withText("Some text you expect object to have"); isDisplayed() <-- check is the view visible doesNotExist() <-- check that the view does not exist All of these are very useful for everyday use, but if you have more complex views writing your custom matchers can make the tests more readable and you can reuse them in dierent places. There are 2 most common type of matchers you can extend: TypeSafeMatcher BoundedMatcher Implementing TypeSafeMatcher requires you to check the instanceOf the view you are asserting against, if its the correct type you match some of its properties against a value you provided to a matcher. 
For example, a type-safe matcher that validates that an image view has the correct drawable:

public class DrawableMatcher extends TypeSafeMatcher<View> {

    private final @DrawableRes int expectedId;
    private String resourceName;

    public DrawableMatcher(@DrawableRes int expectedId) {
        super(View.class);
        this.expectedId = expectedId;
    }

    @Override
    protected boolean matchesSafely(View target) {
        // Type check we need to do in TypeSafeMatcher
        if (!(target instanceof ImageView)) {
            return false;
        }
        // We fetch the image view from the focused view
        ImageView imageView = (ImageView) target;
        if (expectedId < 0) {
            return imageView.getDrawable() == null;
        }
        // We get the drawable from the resources that we are going to compare with the image view source
        Resources resources = target.getContext().getResources();
        Drawable expectedDrawable = resources.getDrawable(expectedId);
        resourceName = resources.getResourceEntryName(expectedId);

        if (expectedDrawable == null) {
            return false;
        }

        // Comparing the bitmaps gives the result of the matcher: do they match?
        Bitmap bitmap = ((BitmapDrawable) imageView.getDrawable()).getBitmap();
        Bitmap otherBitmap = ((BitmapDrawable) expectedDrawable).getBitmap();
        return bitmap.sameAs(otherBitmap);
    }

    @Override
    public void describeTo(Description description) {
        description.appendText("with drawable from resource id: ");
        description.appendValue(expectedId);
        if (resourceName != null) {
            description.appendText("[");
            description.appendText(resourceName);
            description.appendText("]");
        }
    }
}

Usage of the matcher could be wrapped like this:

public static Matcher<View> withDrawable(final int resourceId) {
    return new DrawableMatcher(resourceId);
}

onView(withDrawable(R.drawable.someDrawable)).check(matches(isDisplayed()));

Bounded matchers are similar; you just don't have to do the type check, since that is done automagically for you:

/**
 * Matches a {@link TextInputFormView}'s input hint with the given resource ID
 *
 * @param stringId
 * @return
 */
public static Matcher<View> withTextInputHint(@StringRes final int stringId) {
    return new BoundedMatcher<View, TextInputFormView>(TextInputFormView.class) {

        private String mResourceName = null;

        @Override
        public void describeTo(final Description description) {
            // fill these out properly so your logging and error reporting is clearer
            description.appendText("with TextInputFormView that has hint ");
            description.appendValue(stringId);
            if (null != mResourceName) {
                description.appendText("[");
                description.appendText(mResourceName);
                description.appendText("]");
            }
        }

        @Override
        public boolean matchesSafely(final TextInputFormView view) {
            if (null == mResourceName) {
                try {
                    mResourceName = view.getResources().getResourceEntryName(stringId);
                } catch (Resources.NotFoundException e) {
                    throw new IllegalStateException("could not find string with ID " + stringId, e);
                }
            }
            return view.getResources().getString(stringId).equals(view.getHint());
        }
    };
}

More on matchers can be read up on:

http://hamcrest.org/
https://developer.android.com/reference/android/support/test/espresso/matcher/ViewMatchers.html

Chapter 252: Writing UI tests - Android

The focus of this chapter is to present goals and ways of writing Android UI and integration tests. Espresso and UIAutomator are provided by Google, so the focus is on these tools and their respective wrappers, e.g. Appium, Spoon, etc.
Section 252.1: MockWebServer example

In case your activities, fragments, and UI require some background processing, a good thing to use is a MockWebServer, which runs locally on the Android device and provides a closed, testable environment for your UI.

https://github.com/square/okhttp/tree/master/mockwebserver

The first step is including the gradle dependency:

testCompile 'com.squareup.okhttp3:mockwebserver:(insert latest version)'

The steps for running and using the mock server are:

create the mock server object
start it at a specific address and port (usually localhost:portnumber)
enqueue responses for specific requests
start the test

This is nicely explained on the GitHub page of the mockwebserver, but in our case we want something nicer and reusable for all tests, and JUnit rules come nicely into play here:

/**
 * JUnit rule that starts and stops a mock web server for the test runner
 */
public class MockServerRule extends UiThreadTestRule {

    private MockWebServer mServer;

    public static final int MOCK_WEBSERVER_PORT = 8000;

    @Override
    public Statement apply(final Statement base, Description description) {
        return new Statement() {
            @Override
            public void evaluate() throws Throwable {
                startServer();
                try {
                    base.evaluate();
                } finally {
                    stopServer();
                }
            }
        };
    }

    /**
     * Returns the started web server instance
     *
     * @return mock server
     */
    public MockWebServer server() {
        return mServer;
    }

    public void startServer() throws IOException, NoSuchAlgorithmException {
        mServer = new MockWebServer();
        try {
            mServer.start(MOCK_WEBSERVER_PORT);
        } catch (IOException e) {
            throw new IllegalStateException("mock server start issue", e);
        }
    }

    public void stopServer() {
        try {
            mServer.shutdown();
        } catch (IOException e) {
            Timber.e(e, "mock server shutdown error");
        }
    }
}

Now let's assume that we have the exact same activity as in the previous example, just that in this case, when we push the button, the app will fetch something from the network, for example:

https://someapi.com/name

This would return some text string, which would be concatenated into the snackbar text, e.g. NAME + the text you typed in.

/**
 * Testing of the snackbar activity with networking.
 **/
@RunWith(AndroidJUnit4.class)
@LargeTest
public class SnackbarActivityTest {

    // espresso rule which tells which activity to start
    @Rule
    public final ActivityTestRule<SnackbarActivity> mActivityRule =
            new ActivityTestRule<>(SnackbarActivity.class, true, false);

    // start mock web server
    @Rule
    public final MockServerRule mMockServerRule = new MockServerRule();

    @After
    public void tearDown() throws Exception {
        // same as previous example
    }

    @Before
    public void setUp() throws Exception {
        // same as previous example

        // IMPORTANT: point your application to your mock web server endpoint, e.g.
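        // (MyAppConfig below is this example's own imaginary configuration helper,
        //  not a library API; any mechanism your app provides for overriding its
        //  base URL works just as well)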
        MyAppConfig.setEndpointURL("http://localhost:8000");
    }

    /**
     * Test methods should always start with "testXYZ", and it is a good idea to
     * name them after the intent of what you want to test
     **/
    @Test
    public void testSnackbarIsShown() {
        // setup mock web server
        mMockServerRule.server().setDispatcher(getDispatcher());
        mActivityRule.launchActivity(null);

        // check that our text entry is displayed and enter some text into it
        String textToType = "new snackbar text";
        onView(withId(R.id.textEntry)).check(matches(isDisplayed()));
        onView(withId(R.id.textEntry)).perform(typeText("JazzJackTheRabbit" + textToType));

        // click the button to show the snackbar
        onView(withId(R.id.shownSnackbarBtn)).perform(click());

        // assert that a view with snackbar_id and the text which we typed is displayed;
        // this checks that the snackbar shows the text from the mock web server plus the one we typed
        onView(allOf(withId(android.support.design.R.id.snackbar_text), withText(textToType)))
                .check(matches(isDisplayed()));
    }

    /**
     * Creates a mock web server dispatcher with prerecorded requests and responses
     **/
    private Dispatcher getDispatcher() {
        final Dispatcher dispatcher = new Dispatcher() {
            @Override
            public MockResponse dispatch(RecordedRequest request) throws InterruptedException {
                if (request.getPath().equals("/name")) {
                    return new MockResponse().setResponseCode(200)
                            .setBody("JazzJackTheRabbit");
                }
                throw new IllegalStateException("no mock set up for " + request.getPath());
            }
        };
        return dispatcher;
    }
}

I would suggest wrapping the dispatcher in some sort of a builder, so you can easily chain and add new responses for your screens, e.g.:

return newDispatcherBuilder()
        .withSerializedJSONBody("/authenticate", Mocks.getAuthenticationResponse())
        .withSerializedJSONBody("/getUserInfo", Mocks.getUserInfo())
        .withSerializedJSONBody("/checkNotBot", Mocks.checkNotBot());

Section 252.2: IdlingResource

The power of idling resources lies in not having to wait for an app's processing (networking, calculations, animations, etc.) to finish with sleep(), which brings flakiness and/or prolongs the test run. The official documentation can be found here.

Implementation

There are three things that you need to do when implementing the IdlingResource interface:

getName() - Returns the name of your idling resource.
isIdleNow() - Checks whether your xyz object, operation, etc. is idle at the moment.
registerIdleTransitionCallback(IdlingResource.ResourceCallback callback) - Provides a callback which you should call when your object transitions to idle.

Now you should create your own logic to determine when your app is idle and when not, since this is dependent on the app. Below you will find a simple example, just to show how it works. There are other examples online, but a specific app implementation calls for a specific idling resource implementation.
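In skeleton form, the three methods typically fit together like this (a minimal sketch; isAppIdle() is a placeholder for your own app-specific condition, not an existing API):

public class MyIdlingResource implements IdlingResource {

    private volatile ResourceCallback callback;

    @Override
    public String getName() {
        return MyIdlingResource.class.getName();
    }

    @Override
    public boolean isIdleNow() {
        boolean idle = isAppIdle(); // your app-specific condition
        if (idle && callback != null) {
            // tell Espresso it may stop polling and continue the test
            callback.onTransitionToIdle();
        }
        return idle;
    }

    @Override
    public void registerIdleTransitionCallback(ResourceCallback callback) {
        this.callback = callback;
    }

    private boolean isAppIdle() {
        return true; // placeholder
    }
}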
Our example idling resource gets two objects:

The tag of the fragment which you need to find and wait for to get attached to the activity.
A FragmentManager object which is used for finding the fragment.

/**
 * FragmentIdlingResource - idling resource which waits while the Fragment has not been loaded.
 */
public class FragmentIdlingResource implements IdlingResource {

    private final FragmentManager mFragmentManager;
    private final String mTag;
    // resource callback you use when your activity transitions to idle
    private volatile ResourceCallback resourceCallback;

    public FragmentIdlingResource(FragmentManager fragmentManager, String tag) {
        mFragmentManager = fragmentManager;
        mTag = tag;
    }

    @Override
    public String getName() {
        return FragmentIdlingResource.class.getName() + ":" + mTag;
    }

    @Override
    public boolean isIdleNow() {
        // simple check: if your fragment is added, then your app has become idle
        boolean idle = (mFragmentManager.findFragmentByTag(mTag) != null);
        if (idle && resourceCallback != null) {
            // IMPORTANT: make sure you call onTransitionToIdle
            resourceCallback.onTransitionToIdle();
        }
        return idle;
    }

    @Override
    public void registerIdleTransitionCallback(ResourceCallback resourceCallback) {
        this.resourceCallback = resourceCallback;
    }
}

Now that you have your IdlingResource written, you need to use it somewhere, right?

Usage

Let us skip the entire test class setup and just look at how a test case would look:

@Test
public void testSomeFragmentText() {
    mActivityTestRule.launchActivity(null);

    // creating the idling resource
    IdlingResource fragmentLoadedIdlingResource = new FragmentIdlingResource(
            mActivityTestRule.getActivity().getSupportFragmentManager(), SomeFragmentText.TAG);

    // registering the idling resource so espresso waits for it
    Espresso.registerIdlingResources(fragmentLoadedIdlingResource);

    onView(withId(R.id.txtHelloWorld)).check(matches(withText(helloWorldText)));

    // let's clean up after ourselves
    Espresso.unregisterIdlingResources(fragmentLoadedIdlingResource);
}

Combination with a JUnit rule

This is not too hard; you can also apply the idling resource in the form of a JUnit test rule. For example, let us say that you have some SDK that contains Volley in it and you want Espresso to wait for it.
Instead of going through each test case or applying it in the setup, you could create a JUnit rule and just write:

@Rule
public final SDKIdlingRule mSdkIdlingRule = new SDKIdlingRule(SDKInstanceHolder.getInstance());

Now, since this is an example, don't take it for granted; all code here is imaginary and used only for demonstration purposes:

public class SDKIdlingRule implements TestRule {

    // idling resource you wrote to check whether volley is idle or not
    private VolleyIdlingResource mVolleyIdlingResource;
    // request queue that you need from volley to give to the idling resource
    private RequestQueue mRequestQueue;

    // when using the rule, extract the request queue from your SDK
    public SDKIdlingRule(SDKClass sdkClass) {
        mRequestQueue = getVolleyRequestQueue(sdkClass);
    }

    private RequestQueue getVolleyRequestQueue(SDKClass sdkClass) {
        return sdkClass.getVolleyRequestQueue();
    }

    @Override
    public Statement apply(final Statement base, Description description) {
        return new Statement() {
            @Override
            public void evaluate() throws Throwable {
                // registering idling resource
                mVolleyIdlingResource = new VolleyIdlingResource(mRequestQueue);
                Espresso.registerIdlingResources(mVolleyIdlingResource);
                try {
                    base.evaluate();
                } finally {
                    if (mVolleyIdlingResource != null) {
                        // deregister the resource when the test finishes
                        Espresso.unregisterIdlingResources(mVolleyIdlingResource);
                    }
                }
            }
        };
    }
}

Chapter 253: Unit testing in Android with JUnit

Section 253.1: Moving Business Logic Out of Android Components

A lot of the value from local JVM unit tests comes from the way you design your application. You have to design it in such a way that you can decouple your business logic from your Android components. Here is an example of such a way, using the Model-View-Presenter pattern. Let's practice this by implementing a basic sign-up screen that only takes a username and password. Our Android app is responsible for validating that the username the user supplies is not blank and that the password is at least eight characters long and contains at least one digit. If the username/password is valid we perform our sign-up API call, otherwise we display an error message.

Example where the business logic is highly coupled with the Android component:

public class LoginActivity extends Activity {
    ...
    private void onSubmitButtonClicked() {
        String username = ((EditText) findViewById(R.id.username)).getText().toString();
        String password = ((EditText) findViewById(R.id.password)).getText().toString();
        boolean isUsernameValid = username != null && username.trim().length() != 0;
        boolean isPasswordValid = password != null && password.trim().length() >= 8
                && password.matches(".*\\d+.*");
        if (isUsernameValid && isPasswordValid) {
            performSignUpApiCall(username, password);
        } else {
            displayInvalidCredentialsErrorMessage();
        }
    }
}

Example where the business logic is decoupled from the Android component. Here we define a single interface, LoginContract, that will house the various interactions between our classes.

public interface LoginContract {

    public interface View {
        void performSignUpApiCall(String username, String password);
        void displayInvalidCredentialsErrorMessage();
    }

    public interface Presenter {
        void validateUserCredentials(String username, String password);
    }
}

Our LoginActivity is for the most part the same, except that we have removed its responsibility of having to know how to validate a user's sign-up form (our business logic).
The LoginActivity will now rely on our new LoginPresenter to perform validation.

public class LoginActivity extends Activity implements LoginContract.View {

    private LoginContract.Presenter presenter;

    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        presenter = new LoginPresenter(this);
        ....
    }

    ...

    private void onSubmitButtonClicked() {
        String username = ((EditText) findViewById(R.id.username)).getText().toString();
        String password = ((EditText) findViewById(R.id.password)).getText().toString();
        presenter.validateUserCredentials(username, password);
    }

    ...
}

Now your business logic will reside in your new LoginPresenter class.

public class LoginPresenter implements LoginContract.Presenter {

    private LoginContract.View view;

    public LoginPresenter(LoginContract.View view) {
        this.view = view;
    }

    public void validateUserCredentials(String username, String password) {
        boolean isUsernameValid = username != null && username.trim().length() != 0;
        boolean isPasswordValid = password != null && password.trim().length() >= 8
                && password.matches(".*\\d+.*");
        if (isUsernameValid && isPasswordValid) {
            view.performSignUpApiCall(username, password);
        } else {
            view.displayInvalidCredentialsErrorMessage();
        }
    }
}

And now we can create local JVM unit tests against the new LoginPresenter class.

public class LoginPresenterTest {

    @Mock
    LoginContract.View view;

    private LoginPresenter presenter;

    @Before
    public void setUp() throws Exception {
        MockitoAnnotations.initMocks(this);
        presenter = new LoginPresenter(view);
    }

    @Test
    public void test_validateUserCredentials_userDidNotEnterUsername_displayErrorMessage() throws Exception {
        String username = "";
        String password = "kingslayer1";
        presenter.validateUserCredentials(username, password);
        Mockito.verify(view).displayInvalidCredentialsErrorMessage();
    }

    @Test
    public void test_validateUserCredentials_userEnteredFourLettersAndOneDigitPassword_displayErrorMessage() throws Exception {
        String username = "<NAME>";
        String password = "king1";
        presenter.validateUserCredentials(username, password);
        Mockito.verify(view).displayInvalidCredentialsErrorMessage();
    }

    @Test
    public void test_validateUserCredentials_userEnteredNineLettersButNoDigitsPassword_displayErrorMessage() throws Exception {
        String username = "Jaime Lanninster";
        String password = "kingslayer";
        presenter.validateUserCredentials(username, password);
        Mockito.verify(view).displayInvalidCredentialsErrorMessage();
    }

    @Test
    public void test_validateUserCredentials_userEnteredNineLettersButOneDigitPassword_performApiCallToSignUpUser() throws Exception {
        String username = "Jaime Lanninster";
        String password = "kingslayer1";
        presenter.validateUserCredentials(username, password);
        Mockito.verify(view).performSignUpApiCall(username, password);
    }
}

As you can see, once we extracted our business logic out of the LoginActivity and placed it in the LoginPresenter POJO, we could create local JVM unit tests against our business logic.

It should be noted that there are various other implications of our change in architecture: we come closer to adhering to each class having a single responsibility, we gain additional classes, etc. These are just side effects of the way I chose to perform this decoupling, via the MVP style. MVP is just one way to go about this; there are other alternatives that you may want to look at, such as MVVM. You just have to pick the best system that works for you.
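One practical note: the LoginPresenterTest above relies on Mockito for mocking and verification, so Mockito must also be on the unit-test classpath. A typical dependency for this generation of the toolchain (the exact version is an assumption; adjust to the current release):

testCompile 'org.mockito:mockito-core:1.10.19'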
Section 253.2: Creating Local unit tests

Place your test classes here: /src/test/<pkg_name>/

Example test class

public class ExampleUnitTest {
    @Test
    public void addition_isCorrect() throws Exception {
        int a = 4, b = 5, c;
        c = a + b;
        assertEquals(9, c);  // This test passes
        assertEquals(10, c); // Test fails
    }
}

Breakdown

public class ExampleUnitTest {
    ...
}

The test class; you can create several test classes and place them inside the test package.

@Test
public void addition_isCorrect() {
    ...
}

The test method; several test methods can be created inside a test class. Notice the annotation @Test. The Test annotation tells JUnit that the public void method to which it is attached can be run as a test case. There are several other useful annotations like @Before, @After, etc. This page would be a good place to start.

assertEquals(9, c);  // This test passes
assertEquals(10, c); // Test fails

These methods are members of the Assert class. Some other useful methods are assertFalse(), assertNotNull(), assertTrue(), etc. Here is an elaborate explanation.

Annotation information for JUnit tests:

@Test: The Test annotation tells JUnit that the public void method to which it is attached can be run as a test case. To run the method, JUnit first constructs a fresh instance of the class and then invokes the annotated method.

@Before: When writing tests, it is common to find that several tests need similar objects created before they can run. Annotating a public void method with @Before causes that method to be run before the Test method.

@After: If you allocate external resources in a Before method, you need to release them after the test runs. Annotating a public void method with @After causes that method to be run after the Test method. All @After methods are guaranteed to run even if a Before or Test method throws an exception.

Tip: Quickly create test classes in Android Studio

Place the cursor on the class name for which you want to create a test class.
Press Alt + Enter (Windows).
Select Create Test, hit Return.
Select the methods for which you want to create test methods, click OK.
Select the directory where you want to create the test class.
You're done; what you get is your first test.

Tip: Easily execute tests in Android Studio

Right-click the test package.
Select Run 'Tests in ...'.
All the tests in the package will be executed at once.

Section 253.3: Getting started with JUnit

Setup

To start unit testing your Android project using JUnit, you need to add the JUnit dependency to your project and create a test source-set which is going to contain the source code for the unit tests. Projects created with Android Studio often already include the JUnit dependency and the test source-set.

Add the following line to your module build.gradle file within the dependencies closure:

testCompile 'junit:junit:4.12'

JUnit test classes are located in a special source-set named test. If this source-set does not exist, you need to create the folder yourself.
The folder structure of a default Android Studio (Gradle based) project looks like this:

<project-root-folder>
    /app (module root folder)
        /build
        /libs
        /src
            /main (source code)
            /test (unit test source code)
            /androidTest (instrumentation test source code)
        build.gradle (module gradle file)
    /build
    /gradle
    build.gradle (project gradle file)
    gradle.properties
    gradlew
    gradlew.bat
    local.properties
    settings.gradle (gradle settings)

If your project doesn't have the /app/src/test folder you need to create it yourself. Within the test folder you also need a java folder (create it if it doesn't exist). The java folder in the test source-set should contain the same package structure as your main source-set. If set up correctly, your project structure (in the Android view in Android Studio) should look like this:

Note: You don't necessarily need to have the androidTest source-set; this source-set is often found in projects created by Android Studio and is included here for reference.

Writing a test

1. Create a new class within the test source-set.

Right-click the test source-set in the project view and choose New > Java class. The most used naming pattern is to use the name of the class you're going to test with Test added to it. So StringUtilities becomes StringUtilitiesTest.

2. Add the @RunWith annotation

The @RunWith annotation is needed in order to make JUnit run the tests we're going to define in our test class. The default JUnit runner (for JUnit 4) is the BlockJUnit4ClassRunner, but instead of using this runner directly, it is more convenient to use the alias JUnit4, which is shorthand for the default JUnit runner.

@RunWith(JUnit4.class)
public class StringUtilitiesTest {

}

3. Create a test

A unit test is essentially just a method which, in most cases, should not fail if run. In other words, it should not throw an exception. Inside a test method you will almost always find assertions that check if specific conditions are met. If an assertion fails, it throws an exception which causes the method/test to fail. A test method is always annotated with the @Test annotation. Without this annotation JUnit won't automatically run the test.

@RunWith(JUnit4.class)
public class StringUtilitiesTest {

    @Test
    public void addition_isCorrect() throws Exception {
        assertEquals("Hello JUnit", "Hello" + " " + "JUnit");
    }
}

Note: unlike the standard Java method naming convention, unit test method names often contain underscores.

Running a test

1. Method

To run a single test method, you can right-click the method and click Run 'addition_isCorrect()' or use the keyboard shortcut Ctrl+Shift+F10. If everything is set up correctly, JUnit starts running the method and you should see the following interface within Android Studio:

2. Class

You can also run all the tests defined in a single class, by right-clicking the class in the project view and clicking Run 'StringUtilitiesTest', or use the keyboard shortcut Ctrl+Shift+F10 if you have selected the class in the project view.

3. Package (everything)

If you want to run all the tests defined in the project or in a package, you can just right-click the package and click Run ..., just like you would run all the tests defined in a single class.

Section 253.4: Exceptions

JUnit can also be used to test if a method throws a specific exception for a given input.
In this example we will test if the following method really throws an exception if the Boolean format (input) is not recognized/unknown:

public static boolean parseBoolean(@NonNull String raw) throws IllegalArgumentException {
    raw = raw.toLowerCase().trim();
    switch (raw) {
        case "t": case "yes": case "1": case "true":
            return true;
        case "f": case "no": case "0": case "false":
            return false;
        default:
            throw new IllegalArgumentException("Unknown boolean format: " + raw);
    }
}

By adding the expected parameter to the @Test annotation, one can define which exception is expected to be thrown. The unit test will fail if this exception does not occur, and succeed if the exception is indeed thrown:

@Test(expected = IllegalArgumentException.class)
public void parseBoolean_parsesInvalidFormat_throwsException() {
    StringUtilities.parseBoolean("Hello JUnit");
}

This works well; however, it does limit you to just a single test case within the method. Sometimes you might want to test multiple cases within a single method. A technique often used to overcome this limitation is using try-catch blocks and the Assert.fail() method:

@Test
public void parseBoolean_parsesInvalidFormats_throwsException() {
    try {
        StringUtilities.parseBoolean("Hello!");
        fail("Expected IllegalArgumentException");
    } catch (IllegalArgumentException e) {
    }

    try {
        StringUtilities.parseBoolean("JUnit!");
        fail("Expected IllegalArgumentException");
    } catch (IllegalArgumentException e) {
    }
}

Note: Some people consider it to be bad practice to test more than a single case inside a unit test.

Section 253.5: Static import

JUnit defines quite a few assertEquals methods: at least one for each primitive type, plus one for Objects. These methods are by default not directly available to call and should be called like this: Assert.assertEquals. But because these methods are used so often, people almost always use a static import so that the method can be used directly, as if it were part of the class itself.

To add a static import for the assertEquals method, use the following import statement:

import static org.junit.Assert.assertEquals;

You can also statically import all assert methods, including assertArrayEquals, assertNotNull, assertFalse, etc., using the following static import:

import static org.junit.Assert.*;

Without static import:

@Test
public void addition_isCorrect() {
    Assert.assertEquals(4, 2 + 2);
}

With static import:

@Test
public void addition_isCorrect() {
    assertEquals(4, 2 + 2);
}

Chapter 254: Inter-app UI testing with UIAutomator

Section 254.1: Prepare your project and write the first UIAutomator test

Add the required libraries into the dependencies section of your Android module's build.gradle:

android {
    ...
    defaultConfig {
        ...
        testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner"
    }
}

dependencies {
    ...
    androidTestCompile 'com.android.support.test:runner:0.5'
    androidTestCompile 'com.android.support.test:rules:0.5'
    androidTestCompile 'com.android.support.test.uiautomator:uiautomator-v18:2.1.2'
    androidTestCompile 'com.android.support:support-annotations:23.4.0'
}

Note that of course the versions may differ in the meantime. After this, sync with the changes.
Then add a new Java class inside the androidTest folder: public class InterAppTest extends InstrumentationTestCase { private UiDevice device; @Override public void setUp() throws Exception { device = UiDevice.getInstance(getInstrumentation()); } public void testPressHome() throws Exception { device.pressHome(); } } By making a right click on the class tab and on "Run "InterAppTest" executes this test. Section 254.2: Writing more complex tests using the UIAutomatorViewer In order to enable writing more complex UI tests the UIAutomatorViewer is needed. The tool located at /tools/ makes a fullscreen screenshot including the layouts of the currently displayed views. See the subsequent picture to get an idea of what is shown: GoalKicker.com Android Notes for Professionals 1226 For the UI tests we are looking for resource-id, content-desc or something else to identify a view and use it inside our tests. The uiautomatorviewer is executed via terminal. If we now for instance want to click on the applications button and then open some app and swipe around, this is how the test method can look like: public void testOpenMyApp() throws Exception { // wake up your device device.wakeUp(); // switch to launcher (hide the previous application, if some is opened) device.pressHome(); // enter applications menu (timeout=200ms) device.wait(Until.hasObject(By.desc(("Apps"))), 200); UiObject2 appsButton = device.findObject(By.desc(("Apps"))); assertNotNull(appsButton); appsButton.click(); // enter some application (timeout=200ms) device.wait(Until.hasObject(By.desc(("MyApplication"))), 200); UiObject2 someAppIcon = device.findObject(By.desc(("MyApplication"))); assertNotNull(someAppIcon); someAppIcon.click(); // do a swipe (steps=20 is 0.1 sec.) device.swipe(200, 1200, 1300, 1200, 20); assertTrue(isSomeConditionTrue) } GoalKicker.com Android Notes for Professionals 1227 Section 254.3: Creating a test suite of UIAutomator tests Putting UIAutomator tests together to a test suite is a quick thing: package de.androidtest.myapplication; import org.junit.runner.RunWith; import org.junit.runners.Suite; @RunWith(Suite.class) @Suite.SuiteClasses({InterAppTest1.class, InterAppTest2.class}) public class AppTestSuite {} Execute similar to a single test by clicking right and run the suite. GoalKicker.com Android Notes for Professionals 1228 Chapter 255: Lint Warnings Section 255.1: Using tools:ignore in xml les The attribute tools:ignore can be used in xml les to dismiss lint warnings. BUT dismissing lint warnings with this technique is most of the time the wrong way to proceed. A lint warning must be understood and xed... it can be ignored if and only if you have a full understanding of it's meaning and a strong reason to ignore it. Here is a use case where it legitimate to ignore a lint warning: You are developing a system-app (signed with the device manufacturer key) Your app need to change the device date (or any other protected action) Then you can do this in your manifest : (i.e. requesting the protected permission and ignoring the lint warning because you know that in your case the permission will be granted) <manifest xmlns:android="http://schemas.android.com/apk/res/android" xmlns:tools="http://schemas.android.com/tools" ...> <uses-permission android:name="android.permission.SET_TIME" tools:ignore="ProtectedPermissions"/> Section 255.2: Congure LintOptions with gradle You can congure lint by adding a lintOptions section in the build.gradle le: android { //..... 
lintOptions { // turn off checking the given issue id's disable 'TypographyFractions','TypographyQuotes' // turn on the given issue id's enable 'RtlHardcoded','RtlCompat', 'RtlEnabled' // check *only* the given issue id's check 'NewApi', 'InlinedApi' // set to true to turn off analysis progress reporting by lint quiet true // if true, stop the gradle build if errors are found abortOnError false // if true, only report errors ignoreWarnings true } } You can run lint for a specic variant (see below), e.g. ./gradlew lintRelease, or for all variants (./gradlew lint), in which case it produces a report which describes which specic variants a given issue applies to. GoalKicker.com Android Notes for Professionals 1229 Check here for the DSL reference for all available options. Section 255.3: Conguring lint checking in Java and XML source les You can disable Lint checking from your Java and XML source les. Conguring lint checking in Java To disable Lint checking specically for a Java class or method in your Android project, add the @SuppressLint annotation to that Java code. Example: @SuppressLint("NewApi") @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); To disable checking for all Lint issues: @SuppressLint("all") Conguring lint checking in XML You can use the tools:ignore attribute to disable Lint checking for specic sections of your XML les. For example: tools:ignore="NewApi,StringFormatInvalid" To suppress checking for all Lint issues in the XML element, use tools:ignore="all" Section 255.4: How to congure the lint.xml le You can specify your Lint checking preferences in the lint.xml le. If you are creating this le manually, place it in the root directory of your Android project. If you are conguring Lint preferences in Android Studio, the lint.xml le is automatically created and added to your Android project for you. Example: <?xml version="1.0" encoding="UTF-8"?> <lint> <!-- list of issues to configure --> </lint> By setting the severity attribute value in the tag, you can disable Lint checking for an issue or change the severity level for an issue. The following example shows the contents of a lint.xml le. <?xml version="1.0" encoding="UTF-8"?> <lint> GoalKicker.com Android Notes for Professionals 1230 <!-- Disable the given check in this project --> <issue id="IconMissingDensityFolder" severity="ignore" /> <!-- Ignore the ObsoleteLayoutParam issue in the specified files --> <issue id="ObsoleteLayoutParam"> <ignore path="res/layout/activation.xml" /> <ignore path="res/layout-xlarge/activation.xml" /> </issue> <!-- Ignore the UselessLeaf issue in the specified file --> <issue id="UselessLeaf"> <ignore path="res/layout/main.xml" /> </issue> <!-- Change the severity of hardcoded strings to "error" --> <issue id="HardcodedText" severity="error" /> </lint> Section 255.5: Mark Suppress Warnings It's good practice to mark some warnings in your code. For example, some deprecated methods is need for your testing, or old support version. But Lint checking will mark that code with warnings. For avoiding this problem, you need use annotation @SuppressWarnings. For example, add ignoring to warnings to deprecated methods. You need to put warnings description in annotation also: @SuppressWarnings("deprecated"); public void setAnotherColor (int newColor) { getApplicationContext().getResources().getColor(newColor) } Using this annotation you can ignore all warnings, including Lint, Android, and other. 
Using Suppress Warnings, helps to understand code correctly! Section 255.6: Importing resources without "Deprecated" error Using the Android API 23 or higher, very often such situation can be seen: This situation is caused by the structural change of the Android API regarding getting the resources. Now the function: public int getColor(@ColorRes int id, @Nullable Theme theme) throws NotFoundException should be used. But the android.support.v4 library has another solution. Add the following dependency to the build.gradle le: com.android.support:support-v4:24.0.0 Then all methods from support library are available: GoalKicker.com Android Notes for Professionals 1231 ContextCompat.getColor(context, R.color.colorPrimaryDark); ContextCompat.getDrawable(context, R.drawable.btn_check); ContextCompat.getColorStateList(context, R.color.colorPrimary); DrawableCompat.setTint(drawable); ContextCompat.getColor(context,R.color.colorPrimaryDark)); Moreover more methods from support library can be used: ViewCompat.setElevation(textView, 1F); ViewCompat.animate(textView); TextViewCompat.setTextAppearance(textView, R.style.AppThemeTextStyle); ... GoalKicker.com Android Notes for Professionals 1232 Chapter 256: Performance Optimization Your Apps performance is a crucial element of the user experience. Try to avoid bad performing patterns like doing work on the UI thread and learn how to write fast and responsive apps. Section 256.1: Save View lookups with the ViewHolder pattern Especially in a ListView, you can run into performance problems by doing too many findViewById() calls during scrolling. By using the ViewHolder pattern, you can save these lookups and improve your ListView performance. If your list item contains a single TextView, create a ViewHolder class to store the instance: static class ViewHolder { TextView myTextView; } While creating your list item, attach a ViewHolder object to the list item: public View getView(int position, View convertView, ViewGroup parent) { Item i = getItem(position); if(convertView == null) { convertView = LayoutInflater.from(getContext()).inflate(R.layout.list_item, parent, false); // Create a new ViewHolder and save the TextView instance ViewHolder holder = new ViewHolder(); holder.myTextView = (TextView)convertView.findViewById(R.id.my_text_view); convertView.setTag(holder); } // Retrieve the ViewHolder and use the TextView ViewHolder holder = (ViewHolder)convertView.getTag(); holder.myTextView.setText(i.getText()); return convertView; } Using this pattern, findViewById() will only be called when a new View is being created and the ListView can recycle your views much more eciently. GoalKicker.com Android Notes for Professionals 1233 Chapter 257: Android Kernel Optimization Section 257.1: Low RAM Conguration Android now supports devices with 512MB of RAM. This documentation is intended to help OEMs optimize and congure Android 4.4 for low-memory devices. Several of these optimizations are generic enough that they can be applied to previous releases as well. Enable Low Ram Device ag We are introducing a new API called ActivityManager.isLowRamDevice() for applications to determine if they should turn o specic memory-intensive features that work poorly on low-memory devices. For 512MB devices, this API is expected to return: "true" It can be enabled by the following system property in the device makele. 
PRODUCT_PROPERTY_OVERRIDES += ro.config.low_ram=true

Disable JIT

System-wide JIT memory usage is dependent on the number of applications running and the code footprint of those applications. The JIT establishes a maximum translated code cache size and touches the pages within it as needed. JIT costs somewhere between 3M and 6M across a typical running system.

The large apps tend to max out the code cache fairly quickly (which by default has been 1M). On average, JIT cache usage runs somewhere between 100K and 200K bytes per app. Reducing the max size of the cache can help somewhat with memory usage, but if set too low it will send the JIT into a thrashing mode. For the really low-memory devices, we recommend the JIT be disabled entirely. This can be achieved by adding the following line to the product makefile:

PRODUCT_PROPERTY_OVERRIDES += dalvik.vm.jit.codecachesize=0

Section 257.2: How to add a CPU Governor

The CPU governor itself is just one C file, located in kernel_source/drivers/cpufreq/, for example: cpufreq_smartass2.c. You are responsible for finding the governor yourself (look in an existing kernel repo for your device). In order to successfully call and compile this file into your kernel, you will have to make the following changes:

1. Copy your governor file (cpufreq_govname.c), browse to kernel_source/drivers/cpufreq, and paste it there.

2. Open Kconfig (this is the interface of the config menu layout). When adding a governor, you want it to show up in your config. You can do that by adding the choice of governor:

config CPU_FREQ_GOV_GOVNAMEHERE
    tristate "'gov_name_lowercase' cpufreq governor"
    depends on CPU_FREQ
    help
      'governor' - a custom governor!

For example, for smartassV2:

config CPU_FREQ_GOV_SMARTASS2
    tristate "'smartassV2' cpufreq governor"
    depends on CPU_FREQ
    help
      'smartassV2' - a "smart" optimized governor!

Next to adding the choice, you also must declare the possibility that the governor gets chosen as the default governor:

config CPU_FREQ_DEFAULT_GOV_GOVNAMEHERE
    bool "gov_name_lowercase"
    select CPU_FREQ_GOV_GOVNAMEHERE
    help
      Use the CPUFreq governor 'govname' as default.

For example, for smartassV2:

config CPU_FREQ_DEFAULT_GOV_SMARTASS2
    bool "smartass2"
    select CPU_FREQ_GOV_SMARTASS2
    help
      Use the CPUFreq governor 'smartassV2' as default.

Can't find the right place to put it? Just search for CPU_FREQ_GOV_CONSERVATIVE and place the code beneath it; the same goes for CPU_FREQ_DEFAULT_GOV_CONSERVATIVE. Now that Kconfig is finished, you can save and close the file.

3. While still in the /drivers/cpufreq folder, open Makefile. In the Makefile, add the line corresponding to your CPU governor, for example:

obj-$(CONFIG_CPU_FREQ_GOV_SMARTASS2) += cpufreq_smartass2.o

Beware that you do not reference the native C file, but the .o file, which is the compiled C file. Save the file.

4. Move to kernel_source/include/linux and open cpufreq.h. Scroll down until you see something like:

#elif defined(CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND)
extern struct cpufreq_governor cpufreq_gov_ondemand;
#define CPUFREQ_DEFAULT_GOVERNOR (&cpufreq_gov_ondemand)

(other CPU governors are also listed there)

Now add your entry with the selected CPU governor, for example:

#elif defined(CONFIG_CPU_FREQ_DEFAULT_GOV_SMARTASS2)
extern struct cpufreq_governor cpufreq_gov_smartass2;
#define CPUFREQ_DEFAULT_GOVERNOR (&cpufreq_gov_smartass2)

Save the file and close it. The initial CPU governor setup is now complete.
When you've done all steps successfully, you should be able to choose your governor from the menu (menuconfig, xconfig, gconfig, nconfig). Once checked in the menu, it will be included in the kernel.

A commit that is nearly the same as the above instructions: Add smartassV2 and lulzactive governor commit

Section 257.3: I/O Schedulers

You can enhance your kernel by adding new I/O schedulers if needed. Broadly speaking, governors and schedulers are alike: both define a policy for how the system should work. However, the schedulers are all about the input/output data stream, while the governors are about the CPU settings. I/O schedulers decide how an upcoming I/O activity will be scheduled. The standard schedulers, such as noop or cfq, perform very reasonably.

I/O schedulers can be found in kernel_source/block.

1. Copy your I/O scheduler file (for example, sio-iosched.c), browse to kernel_source/block, and paste the scheduler file there.

2. Now open Kconfig.iosched and add your choice to the Kconfig, for example for SIO:

config IOSCHED_SIO
    tristate "Simple I/O scheduler"
    default y
    ---help---
      The Simple I/O scheduler is an extremely simple scheduler,
      based on noop and deadline, that relies on deadlines to
      ensure fairness. The algorithm does not do any sorting but
      basic merging, trying to keep a minimum overhead. It is
      aimed mainly at aleatory access devices (e.g. flash devices).

3. Then set the default choice option as follows:

default "sio" if DEFAULT_SIO

Save the file.

4. Open the Makefile in kernel_source/block/ and simply add the following line for SIO:

obj-$(CONFIG_IOSCHED_SIO) += sio-iosched.o

Save the file and you are done! The I/O scheduler should now pop up in the menu config.

A similar commit on GitHub: added Simple I/O scheduler.

Chapter 258: Memory Leaks

Section 258.1: Avoid leaking Activities with AsyncTask

A word of caution: AsyncTask has many gotchas apart from the memory leak described here. So be careful with this API, or avoid it altogether if you don't fully understand the implications. There are many alternatives (Thread, EventBus, RxAndroid, etc).

One common mistake with AsyncTask is to capture a strong reference to the host Activity (or Fragment):

class MyActivity extends Activity {
    private AsyncTask<Void, Void, Void> myTask = new AsyncTask<Void, Void, Void>() {
        // Don't do this! Inner classes implicitly keep a pointer to their
        // parent, which in this case is the Activity!
    }
}

This is a problem because AsyncTask can easily outlive the parent Activity, for example if a configuration change happens while the task is running.

The right way to do this is to make your task a static class, which does not capture the parent, and to hold a weak reference to the host Activity:
return; } // The activity is still valid, do main-thread stuff here } } } GoalKicker.com Android Notes for Professionals 1237 Section 258.2: Common memory leaks and how to x them 1. Fix your contexts: Try using the appropriate context: For example since a Toast can be seen in many activities instead of in just one, use getApplicationContext() for toasts, and since services can keep running even though an activity has ended start a service with: Intent myService = new Intent(getApplicationContext(), MyService.class); Use this table as a quick guide for what context is appropriate: Original article on context here. 2. Static reference to Context A serious memory leak mistake is keeping a static reference to View. Every View has an inner reference to the Context. Which means an old Activity with its whole view hierarchy will not be garbage collected until the app is terminated. You will have your app twice in memory when rotating the screen. Make sure there is absolutely no static reference to View, Context or any of their descendants. 3. Check that you're actually nishing your services. For example, I have an intentService that uses the Google location service API. And I forgot to call googleApiClient.disconnect();: //Disconnect from API onDestroy() if (googleApiClient.isConnected()) { LocationServices.FusedLocationApi.removeLocationUpdates(googleApiClient, GoogleLocationService.this); googleApiClient.disconnect(); } 4. Check image and bitmaps usage: If you are using Square's library Picasso I found I was leaking memory by not using the .fit(), that drastically reduced my memory footprint from 50MB in average to less than 19MB: Picasso.with(ActivityExample.this) .load(object.getImageUrl()) .fit() GoalKicker.com Android Notes for Professionals //Activity context //This avoided the OutOfMemoryError 1238 .centerCrop() .into(imageView); //makes image to not stretch 5. If you are using broadcast receivers unregister them. 6. If you are using java.util.Observer (Observer pattern): Make sure to use deleteObserver(observer); Section 258.3: Detect memory leaks with the LeakCanary library LeakCanary is an Open Source Java library to detect memory leaks in your debug builds. Just add the dependencies in the build.gradle: dependencies { debugCompile 'com.squareup.leakcanary:leakcanary-android:1.5.1' releaseCompile 'com.squareup.leakcanary:leakcanary-android-no-op:1.5.1' testCompile 'com.squareup.leakcanary:leakcanary-android-no-op:1.5.1' } Then in your Application class: public class ExampleApplication extends Application { @Override public void onCreate() { super.onCreate(); if (LeakCanary.isInAnalyzerProcess(this)) { // This process is dedicated to LeakCanary for heap analysis. // You should not init your app in this process. return; } LeakCanary.install(this); } } Now LeakCanary will automatically show a notication when an activity memory leak is detected in your debug build. NOTE: Release code will contain no reference to LeakCanary other than the two empty classes that exist in the leakcanary-android-no-op dependency. Section 258.4: Anonymous callback in activities Every Time you create an anonymous class, it retains an implicit reference to its parent class. So when you write: public class LeakyActivity extends Activity { ... foo.registerCallback(new BarCallback() { @Override public void onBar() GoalKicker.com Android Notes for Professionals 1239 { // do something } }); } You are in fact sending a reference to your LeakyActivity instance to foo. 
When the user navigates away from your LeakyActivity, this reference can prevent the LeakyActivity instance from being garbage collected. This is a serious leak as activities hold a reference to their entire view hierarchy and are therefore rather large objects in memory. How to avoid this leak: You can of course avoid using anonymous callbacks in activities entirely. You can also unregister all of your callbacks with respect to the activity lifecycle. like so: public class NonLeakyActivity extends Activity { private final BarCallback mBarCallback = new BarCallback() { @Override public void onBar() { // do something } }); @Override protected void onResume() { super.onResume(); foo.registerCallback(mBarCallback); } @Override protected void onPause() { super.onPause(); foo.unregisterCallback(mBarCallback); } } Section 258.5: Activity Context in static classes Often you will want to wrap some of Android's classes in easier to use utility classes. Those utility classes often require a context to access the android OS or your apps' resources. A common example of this is a wrapper for the SharedPreferences class. In order to access Androids shared preferences one must write: context.getSharedPreferences(prefsName, mode); And so one may be tempted to create the following class: public class LeakySharedPrefsWrapper { private static Context sContext; public static void init(Context context) { sContext = context; } GoalKicker.com Android Notes for Professionals 1240 public int getInt(String name,int defValue) { return sContext.getSharedPreferences("a name", Context.MODE_PRIVATE).getInt(name,defValue); } } now, if you call init() with your activity context, the LeakySharedPrefsWrapper will retain a reference to your activity, preventing it from being garbage collected. How to avoid: When calling static helper functions, you can send in the application context using context.getApplicationContext(); When creating static helper functions, you can extract the application context from the context you are given (Calling getApplicationContext() on the application context returns the application context). So the x to our wrapper is simple: public static void init(Context context) { sContext = context.getApplicationContext(); } If the application context is not appropriate for your use case, you can include a Context parameter in each utility function, you should avoid keeping references to these context parameters. In this case the solution would look like so: public int getInt(Context context,String name,int defValue) { // do not keep a reference of context to avoid potential leaks. return context.getSharedPreferences("a name", Context.MODE_PRIVATE).getInt(name,defValue); } Section 258.6: Avoid leaking Activities with Listeners If you implement or create a listener in an Activity, always pay attention to the lifecycle of the object that has the listener registered. Consider an application where we have several dierent activities/fragments interested in when a user is logged in or out. 
One way of doing this would be to have a singleton instance of a UserController that can be subscribed to in order to get notied when the state of the user changes: public class UserController { private static UserController instance; private List<StateListener> listeners; public static synchronized UserController getInstance() { if (instance == null) { instance = new UserController(); } return instance; } private UserController() { // Init } GoalKicker.com Android Notes for Professionals 1241 public void registerUserStateChangeListener(StateListener listener) { listeners.add(listener); } public void logout() { for (StateListener listener : listeners) { listener.userLoggedOut(); } } public void login() { for (StateListener listener : listeners) { listener.userLoggedIn(); } } public interface StateListener { void userLoggedIn(); void userLoggedOut(); } } Then there are two activities, SignInActivity: public class SignInActivity extends Activity implements UserController.StateListener{ UserController userController; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); this.userController = UserController.getInstance(); this.userController.registerUserStateChangeListener(this); } @Override public void userLoggedIn() { startMainActivity(); } @Override public void userLoggedOut() { showLoginForm(); } ... public void onLoginClicked(View v) { userController.login(); } } And MainActivity: public class MainActivity extends Activity implements UserController.StateListener{ UserController userController; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); GoalKicker.com Android Notes for Professionals 1242 this.userController = UserController.getInstance(); this.userController.registerUserStateChangeListener(this); } @Override public void userLoggedIn() { showUserAccount(); } @Override public void userLoggedOut() { finish(); } ... public void onLogoutClicked(View v) { userController.logout(); } } What happens with this example is that every time the user logs in and then logs out again, a MainActivity instance is leaked. The leak occurs because there is a reference to the activity in UserController#listeners. Please note: Even if we use an anonymous inner class as a listener, the activity would still leak: ... this.userController.registerUserStateChangeListener(new UserController.StateListener() { @Override public void userLoggedIn() { showUserAccount(); } @Override public void userLoggedOut() { finish(); } }); ... The activity would still leak, because the anonymous inner class has an implicit reference to the outer class (in this case the activity). This is why it is possible to call instance methods in the outer class from the inner class. In fact, the only type of inner classes that do not have a reference to the outer class are static inner classes. In short, all instances of non-static inner classes hold an implicit reference to the instance of the outer class that created them. There are two main approaches to solving this, either by adding a method to remove a listener from UserController#listeners or using a WeakReference to hold the reference of the listeners. Alternative 1: Removing listeners Let us start by creating a new method removeUserStateChangeListener(StateListener listener): public class UserController { ... 
    public void registerUserStateChangeListener(StateListener listener) {
        listeners.add(listener);
    }

    public void removeUserStateChangeListener(StateListener listener) {
        listeners.remove(listener);
    }

    ...
}

Then let us call this method in the activity's onDestroy method:

public class MainActivity extends Activity implements UserController.StateListener {
    ...

    @Override
    protected void onDestroy() {
        super.onDestroy();
        userController.removeUserStateChangeListener(this);
    }
}

With this modification, the instances of MainActivity are no longer leaked when the user logs in and out. However, if the documentation isn't clear, chances are that the next developer who starts using UserController might miss that it is required to unregister the listener when the activity is destroyed, which leads us to the second method of avoiding these types of leaks.

Alternative 2: Using weak references

First off, let us start by explaining what a weak reference is. A weak reference, as the name suggests, holds a weak reference to an object. Compared to a normal instance field, which is a strong reference, a weak reference does not stop the garbage collector (GC) from removing the object. In the example above, this would allow MainActivity to be garbage-collected after it has been destroyed, if the UserController used WeakReference to reference the listeners.

In short, a weak reference is telling the GC that if no one else has a strong reference to this object, go ahead and remove it.

Let us modify the UserController to use a list of WeakReference to keep track of its listeners:

public class UserController {
    ...
    private List<WeakReference<StateListener>> listeners;
    ...

    public void registerUserStateChangeListener(StateListener listener) {
        listeners.add(new WeakReference<>(listener));
    }

    public void removeUserStateChangeListener(StateListener listenerToRemove) {
        WeakReference referencesToRemove = null;
        for (WeakReference<StateListener> listenerRef : listeners) {
            StateListener listener = listenerRef.get();
            if (listener != null && listener == listenerToRemove) {
                referencesToRemove = listenerRef;
                break;
            }
        }
        listeners.remove(referencesToRemove);
    }

    public void logout() {
        List referencesToRemove = new LinkedList();
        for (WeakReference<StateListener> listenerRef : listeners) {
            StateListener listener = listenerRef.get();
            if (listener != null) {
                listener.userLoggedOut();
            } else {
                referencesToRemove.add(listenerRef);
            }
        }
    }

    public void login() {
        List referencesToRemove = new LinkedList();
        for (WeakReference<StateListener> listenerRef : listeners) {
            StateListener listener = listenerRef.get();
            if (listener != null) {
                listener.userLoggedIn();
            } else {
                referencesToRemove.add(listenerRef);
            }
        }
    }
    ...
}

With this modification it doesn't matter whether or not the listeners are removed, since UserController holds no strong references to any of the listeners. However, writing this boilerplate code every time is cumbersome. Therefore, let us create a generic class called WeakCollection:

public class WeakCollection<T> {

    private LinkedList<WeakReference<T>> list;

    public WeakCollection() {
        this.list = new LinkedList<>();
    }

    public void put(T item) {
        // Make sure that we don't re-add an item if we already have the reference.
        List<T> currentList = get();
        for (T oldItem : currentList) {
            if (item == oldItem) {
                return;
            }
        }
        list.add(new WeakReference<T>(item));
    }

    public List<T> get() {
        List<T> ret = new ArrayList<>(list.size());
        List<WeakReference<T>> itemsToRemove = new LinkedList<>();
        for (WeakReference<T> ref : list) {
            T item = ref.get();
            if (item == null) {
                itemsToRemove.add(ref);
            } else {
                ret.add(item);
            }
        }
        for (WeakReference ref : itemsToRemove) {
            this.list.remove(ref);
        }
        return ret;
    }

    public void remove(T listener) {
        WeakReference<T> refToRemove = null;
        for (WeakReference<T> ref : list) {
            T item = ref.get();
            if (item == listener) {
                refToRemove = ref;
            }
        }
        if (refToRemove != null) {
            list.remove(refToRemove);
        }
    }
}

Now let us re-write UserController to use WeakCollection<T> instead:

public class UserController {
    ...
    private WeakCollection<StateListener> listenerRefs;
    ...

    public void registerUserStateChangeListener(StateListener listener) {
        listenerRefs.put(listener);
    }

    public void removeUserStateChangeListener(StateListener listenerToRemove) {
        listenerRefs.remove(listenerToRemove);
    }

    public void logout() {
        for (StateListener listener : listenerRefs.get()) {
            listener.userLoggedOut();
        }
    }

    public void login() {
        for (StateListener listener : listenerRefs.get()) {
            listener.userLoggedIn();
        }
    }
    ...
}

As shown in the code example above, the WeakCollection<T> removes all of the boilerplate code needed to use WeakReference instead of a normal list. To top it all off: if a call to UserController#removeUserStateChangeListener(StateListener) is missed, the listener, and all the objects it is referencing, will not leak.

Section 258.7: Avoid memory leaks with Anonymous Class, Handler, Timer Task, Thread

In Android, every developer uses an anonymous class (Runnable) at least once in a project. Any anonymous class has a reference to its parent (the activity). If we perform a long-running task, the parent activity will not be destroyed until the task has ended.

The example uses a Handler and an anonymous Runnable class. The memory will leak when we quit the activity before the Runnable has finished:

new Handler().postDelayed(new Runnable() {
    @Override
    public void run() {
        // do abc long 5s or so
    }
}, 10000); // run "do abc" after 10s. The same applies to Timer, Thread, etc.

How do we solve it?

1. Don't do any long-running operation with an anonymous class; instead, use a static class and pass a WeakReference into it (to the activity, view, etc.). The same applies to a Thread as to an anonymous class.
2. Cancel the Handler or Timer when the activity is destroyed.

Chapter 259: Enhancing Android Performance Using Icon Fonts

Section 259.1: How to integrate Icon fonts

In order to use icon fonts, just follow the steps below:

Add the font file to your project

You may create your font icon file from online websites such as icomoon, where you can upload SVG files of the required icons and then download the created icon font. Then, place the .ttf font file into a folder named fonts (name it as you wish) in the assets folder:

Create a Helper Class

Now, create the following helper class, so that you can avoid repeating the initialisation code for the font:

public class FontManager {
    public static final String ROOT = "fonts/",
            FONT_AWESOME = ROOT + "myfont.ttf";

    public static Typeface getTypeface(Context context) {
        return Typeface.createFromAsset(context.getAssets(), FONT_AWESOME);
    }
}

You may use the Typeface class in order to pick the font from the assets.
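Since Typeface.createFromAsset() reads and parses the font asset, calling it repeatedly can allocate a new Typeface each time on older platforms. If the icon font is applied in many places, it may be worth caching the created Typeface in the helper. A minimal sketch of the same class with a one-font cache (the field name typeface is illustrative, and the same imports as above are assumed):

public class FontManager {
    public static final String ROOT = "fonts/",
            FONT_AWESOME = ROOT + "myfont.ttf";

    // Cache the parsed Typeface so the asset is only read once.
    private static Typeface typeface;

    public static Typeface getTypeface(Context context) {
        if (typeface == null) {
            typeface = Typeface.createFromAsset(context.getAssets(), FONT_AWESOME);
        }
        return typeface;
    }
}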
This way you can set the typeface to various views, for example, to a button:

Button button = (Button) findViewById(R.id.button);
Typeface iconFont = FontManager.getTypeface(getApplicationContext());
button.setTypeface(iconFont);

Now, the button typeface has been changed to the newly created icon font.

Pick up the icons you want

Open the styles.css file attached to the icon font. There you will find the styles with the Unicode characters of your icons:

.icon-arrow-circle-down:before {
    content: "\e001";
}
.icon-arrow-circle-left:before {
    content: "\e002";
}
.icon-arrow-circle-o-down:before {
    content: "\e003";
}
.icon-arrow-circle-o-left:before {
    content: "\e004";
}

This resource file will serve as a dictionary, which maps the Unicode character associated with a specific icon to a human-readable name. Now, create the string resources as follows:

<resources>
    <!-- Icon Fonts -->
    <string name="icon_arrow_circle_down">&#xe001;</string>
    <string name="icon_arrow_circle_left">&#xe002;</string>
    <string name="icon_arrow_circle_o_down">&#xe003;</string>
    <string name="icon_arrow_circle_o_left">&#xe004;</string>
</resources>

Use the icons in your code

Now, you may use your font in various views, for example, as follows:

button.setText(getString(R.string.icon_arrow_circle_left))

You may also create button text views using icon fonts:

Section 259.2: TabLayout with icon fonts

public class TabAdapter extends FragmentPagerAdapter {

    CustomTypefaceSpan fonte;
    List<Fragment> fragments = new ArrayList<>(4);
    private String[] icons = {"\ue001", "\uE002", "\uE003", "\uE004"};

    public TabAdapter(FragmentManager fm, CustomTypefaceSpan fonte) {
        super(fm);
        this.fonte = fonte;
        for (int i = 0; i < 4; i++) {
            fragments.add(MyFragment.newInstance());
        }
    }

    public List<Fragment> getFragments() {
        return fragments;
    }

    @Override
    public Fragment getItem(int position) {
        return fragments.get(position);
    }

    @Override
    public CharSequence getPageTitle(int position) {
        SpannableStringBuilder ss = new SpannableStringBuilder(icons[position]);
        ss.setSpan(fonte, 0, ss.length(), Spanned.SPAN_INCLUSIVE_INCLUSIVE);
        ss.setSpan(new RelativeSizeSpan(1.5f), 0, ss.length(), Spanned.SPAN_INCLUSIVE_INCLUSIVE);
        return ss;
    }

    @Override
    public int getCount() {
        return 4;
    }
}

In this example, myfont.ttf is in the assets folder (see Creating Assets folder).

In your activity class:

//..
TabLayout tabs;
ViewPager tabs_pager;
public CustomTypefaceSpan fonte;
//..

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    //...
    fm = getSupportFragmentManager();
    fonte = new CustomTypefaceSpan("icomoon", Typeface.createFromAsset(getAssets(), "myfont.ttf"));
    this.tabs = ((TabLayout) hasViews.findViewById(R.id.tabs));
    this.tabs_pager = ((ViewPager) hasViews.findViewById(R.id.tabs_pager));
    //...
}

@Override
protected void onStart() {
    super.onStart();
    //..
    tabs_pager.setAdapter(new TabAdapter(fm, fonte));
    tabs.setupWithViewPager(tabs_pager);
    //..
}

Chapter 260: Bitmap Cache

Parameter: Details
key: the key used to store the bitmap in the memory cache
bitmap: the bitmap value which will be cached in memory

Memory efficient bitmap caching: This is particularly important if your application uses animations, as they will be stopped during GC cleanup and make your application appear sluggish to the user. A cache allows reusing objects which are expensive to create.
If you load an object into memory, you can think of this as a cache for the object. Working with bitmaps in Android is tricky, so it is all the more important to cache a bitmap if you are going to use it repeatedly.

Section 260.1: Bitmap Cache Using LRU Cache

LRU Cache

The following example code demonstrates a possible implementation of the LruCache class for caching images.

private LruCache<String, Bitmap> mMemoryCache;

Here the string value is the key for the bitmap value.

// Get max available VM memory, exceeding this amount will throw an
// OutOfMemory exception. Stored in kilobytes as LruCache takes an
// int in its constructor.
final int maxMemory = (int) (Runtime.getRuntime().maxMemory() / 1024);

// Use 1/8th of the available memory for this memory cache.
final int cacheSize = maxMemory / 8;

mMemoryCache = new LruCache<String, Bitmap>(cacheSize) {
    @Override
    protected int sizeOf(String key, Bitmap bitmap) {
        // The cache size will be measured in kilobytes rather than
        // number of items.
        return bitmap.getByteCount() / 1024;
    }
};

To add a bitmap to the memory cache:

public void addBitmapToMemoryCache(String key, Bitmap bitmap) {
    if (getBitmapFromMemCache(key) == null) {
        mMemoryCache.put(key, bitmap);
    }
}

To get a bitmap from the memory cache:

public Bitmap getBitmapFromMemCache(String key) {
    return mMemoryCache.get(key);
}

For loading a bitmap into an ImageView, just use getBitmapFromMemCache(key).

Chapter 261: Loading Bitmaps Effectively

This topic mainly concentrates on loading bitmaps effectively on Android devices. When it comes to loading a bitmap, the question is where it is loaded from. Here we are going to discuss how to load a bitmap from a resource within the Android device, e.g. from the gallery. We will go through this with the example discussed below.

Section 261.1: Load the image from a resource on the Android device, using Intents

Using intents to load the image from the gallery:

1. Initially you need to have the permission:

<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE"/>

2. Use the following code to create the layout:

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:orientation="vertical"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:paddingBottom="@dimen/activity_vertical_margin"
    android:paddingLeft="@dimen/activity_horizontal_margin"
    android:paddingRight="@dimen/activity_horizontal_margin"
    android:paddingTop="@dimen/activity_vertical_margin"
    tools:context="androidexamples.idevroids.loadimagefrmgallery.MainActivity">

    <ImageView
        android:id="@+id/imgView"
        android:layout_width="fill_parent"
        android:layout_height="wrap_content"
        android:layout_weight="1"
        android:background="@color/abc_search_url_text_normal" />

    <Button
        android:id="@+id/buttonLoadPicture"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_weight="0"
        android:text="Load Picture"
        android:layout_gravity="bottom|center" />
</LinearLayout>

3. Use the following code to display the image on a button click.
The button click listener will be:

Button loadImg = (Button) this.findViewById(R.id.buttonLoadPicture);
loadImg.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View v) {
        Intent i = new Intent(Intent.ACTION_PICK, MediaStore.Images.Media.EXTERNAL_CONTENT_URI);
        startActivityForResult(i, RESULT_LOAD_IMAGE);
    }
});

4. Once you have clicked on the button, it will open the gallery with the help of an intent. You need to select an image and send it back to the main activity. With the help of onActivityResult we can do that:

protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == RESULT_LOAD_IMAGE && resultCode == RESULT_OK && null != data) {
        Uri selectedImage = data.getData();
        String[] filePathColumn = { MediaStore.Images.Media.DATA };
        Cursor cursor = getContentResolver().query(selectedImage, filePathColumn, null, null, null);
        cursor.moveToFirst();
        int columnIndex = cursor.getColumnIndex(filePathColumn[0]);
        String picturePath = cursor.getString(columnIndex);
        cursor.close();
        ImageView imageView = (ImageView) findViewById(R.id.imgView);
        imageView.setImageBitmap(BitmapFactory.decodeFile(picturePath));
    }
}

Chapter 262: Exceptions

Section 262.1: ActivityNotFoundException

This is a very common exception. It causes your application to stop during the start or execution of your app. In the LogCat you see the message:

android.content.ActivityNotFoundException : Unable to find explicit activity class; have you declared this activity in your AndroidManifest.xml?

In this case, check if you have declared your activity in the AndroidManifest.xml file. The simplest way to declare your Activity in AndroidManifest.xml is:

<activity android:name="com.yourdomain.YourStoppedActivity" />

Section 262.2: OutOfMemoryError

This is a runtime error that happens when you request a large amount of memory on the heap. This is common when loading a Bitmap into an ImageView. You have some options:

1. Use a large application heap

Add the "largeHeap" option to the application tag in your AndroidManifest.xml. This will make more memory available to your app but will likely not fix the root issue.

<application android:largeHeap="true" ... >

2. Recycle your bitmaps

After loading a bitmap, be sure to recycle it and free up memory:

if (bitmap != null && !bitmap.isRecycled())
    bitmap.recycle();

3. Load sampled bitmaps into memory

Avoid loading the entire bitmap into memory at once by sampling a reduced size, using BitmapFactory.Options and inSampleSize. See the Android documentation for an example.

Section 262.3: Registering own Handler for unexpected exceptions

This is how you can react to exceptions which have not been caught, similar to the system's standard "Application XYZ has crashed" dialog:

import android.app.Application;
import android.util.Log;
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import java.text.DateFormat;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;

/**
 * Application class writing unexpected exceptions to a crash file before crashing.
 */
public class MyApplication extends Application {
    private static final String TAG = "ExceptionHandler";

    @Override
    public void onCreate() {
        super.onCreate();

        // Setup handler for uncaught exceptions.
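        // Keep a reference to the current default handler so we can delegate to it
        // (and still get the standard crash behaviour) if our own handler fails.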
        final Thread.UncaughtExceptionHandler defaultHandler = Thread.getDefaultUncaughtExceptionHandler();
        Thread.setDefaultUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler() {
            @Override
            public void uncaughtException(Thread thread, Throwable e) {
                try {
                    handleUncaughtException(e);
                    System.exit(1);
                } catch (Throwable e2) {
                    Log.e(TAG, "Exception in custom exception handler", e2);
                    defaultHandler.uncaughtException(thread, e);
                }
            }
        });
    }

    private void handleUncaughtException(Throwable e) throws IOException {
        Log.e(TAG, "Uncaught exception logged to local file", e);

        // Create a new unique file
        final DateFormat dateFormat = new SimpleDateFormat("yyyy-MM-dd_HH-mm-ss", Locale.US);
        String timestamp;
        File file = null;
        while (file == null || file.exists()) {
            timestamp = dateFormat.format(new Date());
            file = new File(getFilesDir(), "crashLog_" + timestamp + ".txt");
        }
        Log.i(TAG, "Trying to create log file " + file.getPath());
        file.createNewFile();

        // Write the stacktrace to the file
        FileWriter writer = null;
        try {
            writer = new FileWriter(file, true);
            for (StackTraceElement element : e.getStackTrace()) {
                writer.write(element.toString());
            }
        } finally {
            if (writer != null) writer.close();
        }

        // You can (and probably should) also display a dialog to notify the user
    }
}

Then register this Application class in your AndroidManifest.xml:

<application android:name="de.ioxp.arkmobile.MyApplication" >

Section 262.4: UncaughtException

If you want to handle uncaught exceptions, try to catch them all in the onCreate method of your Application class:

public class MyApp extends Application {
    @Override
    public void onCreate() {
        super.onCreate();
        try {
            Thread.setDefaultUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler() {
                @Override
                public void uncaughtException(Thread thread, Throwable e) {
                    Log.e(TAG, "Uncaught Exception thread: " + thread.getName(), e);
                    handleUncaughtException(thread, e);
                }
            });
        } catch (SecurityException e) {
            Log.e(TAG, "Could not set the Default Uncaught Exception Handler", e);
        }
    }

    private void handleUncaughtException(Thread thread, Throwable e) {
        Log.e(TAG, "uncaughtException:");
        e.printStackTrace();
    }
}

Section 262.5: NetworkOnMainThreadException

From the documentation:

The exception that is thrown when an application attempts to perform a networking operation on its main thread. This is only thrown for applications targeting the Honeycomb SDK or higher. Applications targeting earlier SDK versions are allowed to do networking on their main event loop threads, but it's heavily discouraged.
Here's an example of a code fragment that may cause that exception:

public class MainActivity extends AppCompatActivity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        Uri.Builder builder = new Uri.Builder().scheme("http").authority("www.google.com");
        HttpURLConnection urlConnection = null;
        BufferedReader reader = null;
        URL url;
        try {
            url = new URL(builder.build().toString());
            urlConnection = (HttpURLConnection) url.openConnection();
            urlConnection.setRequestMethod("GET");
            urlConnection.connect();
        } catch (IOException e) {
            Log.e("TAG", "Connection error", e);
        } finally {
            if (urlConnection != null) {
                urlConnection.disconnect();
            }
            if (reader != null) {
                try {
                    reader.close();
                } catch (final IOException e) {
                    Log.e("TAG", "Error closing stream", e);
                }
            }
        }
    }
}

The above code will throw a NetworkOnMainThreadException for applications targeting the Honeycomb SDK (Android v3.0) or higher, as the application is trying to perform a network operation on the main thread. To avoid this exception, your network operations must always run in a background task via an AsyncTask, Thread, IntentService, etc.

private class MyAsyncTask extends AsyncTask<String, Integer, Void> {
    @Override
    protected Void doInBackground(String[] params) {
        Uri.Builder builder = new Uri.Builder().scheme("http").authority("www.google.com");
        HttpURLConnection urlConnection = null;
        BufferedReader reader = null;
        URL url;
        try {
            url = new URL(builder.build().toString());
            urlConnection = (HttpURLConnection) url.openConnection();
            urlConnection.setRequestMethod("GET");
            urlConnection.connect();
        } catch (IOException e) {
            Log.e("TAG", "Connection error", e);
        } finally {
            if (urlConnection != null) {
                urlConnection.disconnect();
            }
            if (reader != null) {
                try {
                    reader.close();
                } catch (final IOException e) {
                    Log.e("TAG", "Error closing stream", e);
                }
            }
        }
        return null;
    }
}

Section 262.6: DexException

com.android.dex.DexException: Multiple dex files define Lcom/example/lib/Class;

This error occurs because the app, when packaging, finds two .dex files that define the same set of methods. Usually this happens because the app has accidentally acquired two separate dependencies on the same library. For instance, say you have a project, and you want to rely on two libraries, A and B, which each have their own dependencies. If library B already has a dependency on library A, this error will be thrown if library A is added to the project by itself. Compiling library B already gave the set of code from A, so when the compiler goes to bundle library A, it finds library A's methods already packaged. To resolve, make sure that none of your dependencies could accidentally be added twice in such a manner.

Chapter 263: Logging and using Logcat

Option          Description
-b (buffer)     Loads an alternate log buffer for viewing, such as events or radio. The main buffer is used by default. See Viewing Alternative Log Buffers.
-c              Clears (flushes) the entire log and exits.
-d              Dumps the log to the screen and exits.
-f (filename)   Writes log message output to (filename). The default is stdout.
-g              Prints the size of the specified log buffer and exits.
-n (count)      Sets the maximum number of rotated logs to (count). The default value is 4. Requires the -r option.
-r (kbytes)     Rotates the log file every (kbytes) of output. The default value is 16. Requires the -f option.
-s              Sets the default filter spec to silent.
-v (format)     Sets the output format for log messages. The default is brief format.

Section 263.1: Filtering the logcat output

It is helpful to filter the logcat output because there are many messages which are not of interest. To filter the output, open the "Android Monitor", click on the drop down on the top-right, and select Edit Filter Configuration.

Now you can add custom filters to show messages which are of interest, as well as filter out well-known log lines which can safely be ignored. To ignore a part of the output you may define a Regular Expression. Here is an example of excluding matching tags:

^(?!(HideMe|AndThis))

This can be entered by following this example:

The above is a regular expression which excludes inputs. If you wanted to add another tag to the blacklist, add it after a pipe | character. For example, if you wanted to blacklist "GC", you would use a filter like this:

^(?!(HideMe|AndThis|GC))

For more documentation and examples visit Logging and using Logcat.

Section 263.2: Logging

Any quality Android application will keep track of what it's doing through application logs. These logs allow easy debugging help for the developer to diagnose what's going on with the application. Full Android documentation can be found here, but a summary follows:

Basic Logging

The Log class is the main source of writing developer logs, by specifying a tag and a message. The tag is what you can use to filter log messages by, to identify which lines come from your particular Activity. Simply call

Log.v(String tag, String msg);

And the Android system will write a message to the logcat:

07-28 12:00:00.759 24812-24839/my.packagename V/MyAnimator: Some log messages

Reading from left to right: the time stamp, the process and thread IDs, the app package, the log level and tag, and finally the log message.

TIP: Notice the process id and the thread id. If they are the same - the log is coming from the main/UI thread!

Any tag can be used, but it is common to use the class name as a tag:

public static final String tag = MyAnimator.class.getSimpleName();

Log Levels

The Android logger has 6 different levels, each of which serves a certain purpose:

ERROR: Log.e() - Used to indicate critical failure; this is the level printed at when throwing an Exception.
WARN: Log.w() - Used to indicate a warning, mainly for recoverable failures.
INFO: Log.i() - Used to indicate higher-level information about the state of the application.
DEBUG: Log.d() - Used to log information that would be useful to know when debugging the application, but would get in the way when running the application.
VERBOSE: Log.v() - Used to log information that reflects the small details about the state of the application.
ASSERT: Log.wtf() - Used to log information about a condition that should never happen. wtf stands for "What a Terrible Failure".

Motivation For Logging

The motivation for logging is to easily find errors, warnings, and other information by glancing at the chain of events from the application. For instance, imagine an application that reads lines from a text file, but incorrectly assumes that the file will never be empty.
The log trace (of an app that doesn't log) would look something like this:

E/MyApplication: Process: com.example.myapplication, PID: 25788
com.example.SomeRandomException: Expected string, got 'null' instead

Followed by a bunch of stack traces that would eventually lead to the offending line, where stepping through with a debugger would eventually lead to the problem.

However, the log trace of an application with logging enabled could look something like this:

V/MyApplication: Looking for file myFile.txt on the SD card
D/MyApplication: Found file myFile.txt at path <path>
V/MyApplication: Opening file myFile.txt
D/MyApplication: Finished reading myFile.txt, found 0 lines
V/MyApplication: Closing file myFile.txt
...
E/MyApplication: Process: com.example.myapplication, PID: 25788
com.example.SomeRandomException: Expected string, got 'null' instead

A quick glance at the logs and it is obvious that the file was empty.

Things To Consider When Logging:

Although logging is a powerful tool that allows Android developers to gain a greater insight into the inner working of their application, logging does have some drawbacks.

Log Readability: It is common for Android applications to have several logs running simultaneously. As such, it is very important that each log is easily readable and only contains relevant, necessary information.

Performance: Logging does require a small amount of system resources. In general, this does not warrant concern; however, if overused, logging may have a negative impact on application performance.

Security: Recently, several Android applications have been added to the Google Play marketplace that allow the user to view logs of all running applications. This unintended display of data may allow users to view confidential information. As a rule of thumb, always remove logs that contain non-public data before publishing your application to the marketplace.

Conclusion: Logging is an essential part of an Android application because of the power it gives to developers. The ability to create a useful log trace is one of the most challenging aspects of software development, but Android's Log class helps to make it much easier.

For more documentation and examples visit Logging and using Logcat.

Section 263.3: Using the Logcat

Logcat is a command-line tool that dumps a log of system messages, including stack traces when the device throws an error and messages that you have written from your app with the Log class. The Logcat output can be displayed within Android Studio's Android Monitor or with the adb command line.

In Android Studio

Show it by clicking the "Android Monitor" icon, or by pressing Alt + 6 on Windows/Linux or CMD + 6 on Mac.

Via command line:

Simple usage:

$ adb logcat

With timestamps:

$ adb logcat -v time

Filter on specific text:

$ adb logcat -v time | grep 'searchtext'

There are many options and filters available to command line logcat, documented here. A simple but useful example is the following filter expression that displays all log messages with priority level "error", on all tags:

$ adb logcat *:E

Section 263.4: Log with link to source directly from Logcat

This is a nice trick to add a link to code, so it will be easy to jump to the code that issued the log. With the following code, this call:

MyLogger.logWithLink("MyTag","param="+param);

Will result in:

07-26...012/com.myapp D/MyTag: MyFrag:onStart(param=3)
(MyFrag.java:2366) // << logcat converts this into a link to the source!

This is the code (inside a class called MyLogger):

static StringBuilder sb0 = new StringBuilder(); // reusable string object

public static void logWithLink(String TAG, Object param) {
    StackTraceElement stack = Thread.currentThread().getStackTrace()[3];
    sb0.setLength(0);
    String c = stack.getFileName().substring(0, stack.getFileName().length() - 5); // removes the ".java"
    sb0.append(c).append(":");
    sb0.append(stack.getMethodName()).append('(');
    if (param != null) {
        sb0.append(param);
    }
    sb0.append(") ");
    sb0.append(" (").append(stack.getFileName()).append(':').append(stack.getLineNumber()).append(')');
    Log.d(TAG, sb0.toString());
}

This is a basic example; it can easily be extended to issue a link to the caller (hint: the stack element will be [4] instead of [3]), and you can also add other relevant information.

Section 263.5: Clear logs

In order to clear (flush) the entire log:

adb logcat -c

Section 263.6: Android Studio usage

1. Hide/show printed information:
2. Control the verbosity of the logging:
3. Disable/enable opening the log window when starting run/debug of an application.

Section 263.7: Generating Logging code

Android Studio's Live templates can offer quite a few shortcuts for quick logging. To use Live templates, all you need to do is to start typing the template name, and hit TAB or enter to insert the statement.

Examples:

logi turns into android.util.Log.i(TAG, "$METHOD_NAME$: $content$");

$METHOD_NAME$ will automatically be replaced with your method name, and the cursor will wait for the content to be filled.

loge - same, for error

etc. for the rest of the logging levels. The full list of templates can be found in Android Studio's settings (ALT + s and type "live"), and it is possible to add your custom templates as well.

If you find Android Studio's Live templates not enough for your needs, you can consider the Android Postfix Plugin. This is a very useful library which helps you to avoid writing the logging line manually. The syntax is absolutely simple:

.log - Logging. If there is a constant variable "TAG", it uses "TAG"; else it uses the class name.

Chapter 264: ADB (Android Debug Bridge)

ADB (Android Debug Bridge) is a command line tool used to communicate with an emulator instance or a connected Android device.

Overview of ADB

A large portion of this topic was split out to adb shell.

Section 264.1: Connect ADB to a device via WiFi

The standard ADB configuration involves a USB connection to a physical device. If you prefer, you can switch over to TCP/IP mode, and connect ADB via WiFi instead.

Not rooted device

1. Get on the same network: Make sure your device and your computer are on the same network.
2. Connect the device to the host computer with a USB cable.
3. Connect adb to the device over the network: While your device is connected to adb via USB, use the following commands to listen for a TCP/IP connection on a port (default 5555):
   - Type adb tcpip <port> (switch to TCP/IP mode).
   - Disconnect the USB cable from the target device.
   - Type adb connect <ip address>:<port> (port is optional; default 5555).

For example:

adb tcpip 5555
adb connect 192.168.0.101:5555

If you don't know your device's IP you can:

- check the IP in the WiFi settings of your device, or
- use ADB to discover the IP (via USB):
  1. Connect the device to the computer via USB.
  2.
In a command line, type adb shell ifconfig and copy your device's IP address.

To revert back to debugging via USB, use the following command:

adb usb

You can also connect ADB via WiFi by installing a plugin to Android Studio. In order to do so, go to Settings > Plugins and Browse repositories, search for ADB WiFi, install it, and reopen Android Studio. You will see a new icon in your toolbar. Connect the device to the host computer via USB and click on this AndroidWiFiADB icon. It will display a message whether your device is connected or not. Once it gets connected you can unplug your USB cable.

Rooted device

Note: Some devices which are rooted can use the ADB WiFi App from the Play Store to enable this in a simple way. Also, for certain devices (especially those with CyanogenMod ROMs) this option is present in the Developer Options among the Settings. Enabling it will give you the IP address and port number required to connect to adb by simply executing adb connect <ip address>:<port>.

When you have a rooted device but don't have access to a USB cable

The process is explained in detail in the following answer: http://stackoverflow.com/questions/2604727/how-can-i-connect-to-android-with-adb-over-tcp/3623727#3623727

The most important commands are shown below. Open a terminal on the device and type the following:

su
setprop service.adb.tcp.port <a tcp port number>
stop adbd
start adbd

For example:

setprop service.adb.tcp.port 5555

And on your computer:

adb connect <ip address>:<a tcp port number>

For example:

adb connect 192.168.1.2:5555

To turn it off:

setprop service.adb.tcp.port -1
stop adbd
start adbd

Avoid timeout

By default adb will time out after 5000 ms. This can happen in some cases such as slow WiFi or a large APK. A simple change in the Gradle configuration can do the trick:

android {
    adbOptions {
        timeOutInMs 10 * 1000
    }
}

Section 264.2: Direct ADB command to specific device in a multi-device setting

1. Target a device by serial number

Use the -s option followed by a device name to select on which device the adb command should run. The -s option should come first, before the command.

adb -s <device> <command>

Example:

adb devices
List of devices attached
emulator-5554 device
02157df2d1faeb33 device

adb -s emulator-5554 shell

Example #2:

adb devices -l
List of devices attached
06157df65c6b2633 device usb:1-3 product:zerofltexx model:SM_G920F device:zeroflte
LC62TB413962 device usb:1-5 product:a50mgp_dug_htc_emea model:HTC_Desire_820G_dual_sim device:htc_a50mgp_dug

adb -s usb:1-3 shell

2. Target a device, when only one device type is connected

You can target the only running emulator with -e:

adb -e <command>

Or you can target the only connected USB device with -d:

adb -d <command>

Section 264.3: Taking a screenshot and video (video for KitKat only) from a device display

Screen shot: Option 1 (pure adb)

The adb shell command allows us to execute commands using a device's built-in shell. The screencap shell command captures the content currently visible on a device and saves it into a given image file, e.g.
/sdcard/screen.png:

adb shell screencap /sdcard/screen.png

You can then use the pull command to download the file from the device into the current directory on your computer:

adb pull /sdcard/screen.png

Screen shot: Option 2 (faster)

Execute the following one-liner:

(Marshmallow and earlier):

adb shell screencap -p | perl -pe 's/\x0D\x0A/\x0A/g' > screen.png

(Nougat and later):

adb shell screencap -p > screen.png

The -p flag redirects the output of the screencap command to stdout. The Perl expression this is piped into cleans up some end-of-line issues on Marshmallow and earlier. The stream is then written to a file named screen.png within the current directory. See this article and this article for more information.

Video

This only works on KitKat and later, and via ADB only; it does not work below KitKat. To start recording your device's screen, run the following command:

adb shell screenrecord /sdcard/example.mp4

This command will start recording your device's screen using the default settings and save the resulting video to the file /sdcard/example.mp4 on your device. When you're done recording, press Ctrl+C in the Command Prompt window to stop the screen recording. You can then find the screen recording file at the location you specified. Note that the screen recording is saved to your device's internal storage, not to your computer.

The default settings are to use your device's standard screen resolution, encode the video at a bitrate of 4 Mbps, and set the maximum screen recording time to 180 seconds. For more information about the command-line options you can use, run the following command:

adb shell screenrecord --help

This works without rooting the device.

Section 264.4: Pull (push) files from (to) the device

You may pull (download) files from the device by executing the following command:

adb pull <remote> <local>

For example:

adb pull /sdcard/ ~/

You may also push (upload) files from your computer to the device:

adb push <local> <remote>

For example:

adb push ~/image.jpg /sdcard/

Example to retrieve a database from the device:

sudo adb -d shell "run-as com.example.name cat /data/data/com.example.name/databases/DATABASE_NAME > /sdcard/file"

Section 264.5: Print verbose list of connected devices

To get a verbose list of all devices connected to adb, write the following command in your terminal:

adb devices -l

Example Output

List of devices attached
ZX1G425DC6 device usb:336592896X product:shamu model:Nexus_6 device:shamu
013e4e127e59a868 device usb:337641472X product:bullhead model:Nexus_5X device:bullhead
ZX1D229KCN device usb:335592811X product:titan_retde model:XT1068 device:titan_umtsds
A50PL device usb:331592812X

- The first column is the serial number of the device. If it starts with emulator-, this device is an emulator.
- usb: the path of the device in the USB subsystem.
- product: the product code of the device. This is very manufacturer-specific, and as you can see in the case of the Archos device A50PL above, it can be blank.
- model: the device model. Like product, it can be empty.
- device: the device code. This is also very manufacturer-specific, and can be empty.

Section 264.6: View logcat

You can run logcat as an adb command or directly in a shell prompt of your emulator or connected device.
To view log output using adb, navigate to your SDK platform-tools/ directory and execute:

$ adb logcat

Alternatively, you can create a shell connection to a device and then execute:

$ adb shell
$ logcat

One useful command is:

adb logcat -v threadtime

This displays the date, invocation time, priority, tag, and the PID and TID of the thread issuing the message in a long message format.

Filtering

Logcat logs have so-called log levels:

V - Verbose, D - Debug, I - Info, W - Warning, E - Error, F - Fatal, S - Silent

You can filter logcat by log level as well. For instance, if you only want to output the Debug level:

adb logcat *:D

Logcat can also be filtered by a tag, and of course you can combine it with the log level filter:

adb logcat <tag>:<log level>

You can also filter the log using grep (more on filtering logcat output here):

adb logcat | grep <some text>

In Windows, filtering can be done using findstr, for example:

adb logcat | findstr <some text>

To view an alternative log buffer [main|events|radio], run logcat with the -b option:

adb logcat -b radio

Save the output to a file:

adb logcat > logcat.txt

Save the output to a file while also watching it:

adb logcat | tee logcat.txt

Cleaning the logs:

adb logcat -c

Section 264.7: View and pull cache files of an app

You may use this command to list the files of your own debuggable apk:

adb shell run-as <sample.package.id> ls /data/data/sample.package.id/cache

And this script for pulling from cache; it copies the content to the sdcard first, pulls it, and then removes it at the end:

#!/bin/sh
adb shell "run-as <sample.package.id> cat '/data/data/<sample.package.id>/$1' > '/sdcard/$1'"
adb pull "/sdcard/$1"
adb shell "rm '/sdcard/$1'"

Then you can pull a file from cache like this:

./pull.sh cache/someCachedData.txt

Get a database file via ADB:

sudo adb -d shell "run-as com.example.name cat /data/data/com.example.name/databases/STUDENT_DATABASE > /sdcard/file"

Section 264.8: Clear application data

One can clear the user data of a specific app using adb:

adb shell pm clear <package>

This is the same as browsing the settings on the phone, selecting the app and pressing the clear data button.

- pm invokes the package manager on the device
- clear deletes all data associated with a package

Section 264.9: View an app's internal data (data/data/<sample.package.id>) on a device

First, make sure your app can be backed up in AndroidManifest.xml, i.e. android:allowBackup is not false.

Backup command:

adb -s <device_id> backup -noapk <sample.package.id>

Create a tar with the dd command:

dd if=backup.ab bs=1 skip=24 | python -c "import zlib,sys;sys.stdout.write(zlib.decompress(sys.stdin.read()))" > backup.tar

Extract the tar:

tar -xvf backup.tar

You may then view the extracted content.

Section 264.10: Install and run an application

To install an APK file, use the following command:

adb install path/to/apk/file.apk

or, if the app already exists and we want to reinstall it:

adb install -r path/to/apk/file.apk

To uninstall an application, we have to specify its package:

adb uninstall application.package.name

Use the following command to start an app with a provided package name (or a specific activity in an app):

adb shell am start -n <package>/<activity>

For example, to start Waze:

adb shell am start -n com.waze/com.waze.FreeMapAppActivity

Section 264.11: Sending broadcast

It's possible to send a broadcast to a BroadcastReceiver with adb.
In this example we are sending a broadcast with action com.test.app.ACTION and a string extra in the bundle, 'foo'='bar':

adb shell am broadcast -a com.test.app.ACTION --es foo "bar"

You can put any other supported type into the bundle, not only strings:

--ez - boolean
--ei - integer
--el - long
--ef - float
--eu - uri
--eia - int array (separated by ',')
--ela - long array (separated by ',')
--efa - float array (separated by ',')
--esa - string array (separated by ',')

To send an intent to a specific package/class, the -n or -p parameter can be used.

Sending to a package: -p com.test.app

Sending to a specific component (SomeReceiver class in the com.test.app package): -n com.test.app/.SomeReceiver

Useful examples:

- Sending a "boot complete" broadcast
- Sending a "time changed" broadcast after setting the time via an adb command

Section 264.12: Backup

You can use the adb backup command to back up your device.

adb backup [-f <file>] [-apk|-noapk] [-obb|-noobb] [-shared|-noshared] [-all] [-system|-nosystem] [<packages...>]

-f <filename> - specify the filename (default: creates backup.ab in the current directory)
-apk|-noapk - enable/disable backup of the .apk files themselves (default: -noapk)
-obb|-noobb - enable/disable backup of additional files (default: -noobb)
-shared|-noshared - back up the device's shared storage / SD card contents (default: -noshared)
-all - back up all installed applications
-system|-nosystem - include system applications (default: -system)
<packages> - a list of packages to be backed up (e.g. com.example.android.myapp) (not needed if -all is specified)

For a full device backup, including everything, use:

adb backup -apk -obb -shared -all -system -f fullbackup.ab

Note: Doing a full backup can take a long time.

In order to restore a backup, use:

adb restore backup.ab

Section 264.13: View available devices

Command:

adb devices

Result example:

List of devices attached
emulator-5554 device
PhoneRT45Fr54 offline
123.454.67.45 no device

- First column: device serial number
- Second column: connection status

Android documentation

Section 264.14: Connect device by IP

Enter these commands in the Android device's terminal:

su
setprop service.adb.tcp.port 5555
stop adbd
start adbd

After this, you can use CMD and ADB to connect using the following command:

adb connect 192.168.0.101:5555

And you can disable it and return ADB to listening on USB with:

setprop service.adb.tcp.port -1
stop adbd
start adbd

From a computer, if you have USB access already (no root required)

It is even easier to switch to using Wi-Fi if you already have USB. From a command line on the computer that has the device connected via USB, issue the commands:

adb tcpip 5555
adb connect 192.168.0.101:5555

Replace 192.168.0.101 with the device IP.

Section 264.15: Install ADB on Linux system

How to install the Android Debug Bridge (ADB) on a Linux system with the terminal using your distro's repositories.
Install on an Ubuntu/Debian system via apt:

sudo apt-get update
sudo apt-get install adb

Install on a Fedora/CentOS system via yum:

sudo yum check-update
sudo yum install android-tools

Install on a Gentoo system with portage:

sudo emerge --ask dev-util/android-tools

Install on an openSUSE system with zypper:

sudo zypper refresh
sudo zypper install android-tools

Install on an Arch system with pacman:

sudo pacman -Syyu
sudo pacman -S android-tools

Section 264.16: View activity stack

adb -s <serialNumber> shell dumpsys activity activities

Very useful when used together with the watch unix command:

watch -n 5 "adb -s <serialNumber> shell dumpsys activity activities | sed -En -e '/Stack #/p' -e '/Running activities/,/Run #0/p'"

Section 264.17: Reboot device

You can reboot your device by executing the following command:

adb reboot

Perform this command to reboot into the bootloader:

adb reboot bootloader

Reboot to recovery mode:

adb reboot recovery

Be aware that the device won't shut down first!

Section 264.18: Read device information

Write the following command in your terminal:

adb shell getprop

This will print all available information in the form of key/value pairs. You can read specific information by appending the name of a specific key to the command. For example:

adb shell getprop ro.product.model

Here are a few interesting pieces of information that you can get:

- ro.product.model: Model name of the device (e.g. Nexus 6P)
- ro.build.version.sdk: API Level of the device (e.g. 23)
- ro.product.brand: Branding of the device (e.g. Samsung)

Full Example Output

Section 264.19: List all permissions that require runtime grant from users on Android 6.0

adb shell pm list permissions -g -d

Section 264.20: Turn on/off WiFi

Turn on:

adb shell svc wifi enable

Turn off:

adb shell svc wifi disable

Section 264.21: Start/stop adb

Start ADB:

adb start-server

Stop ADB:

adb kill-server

Chapter 265: Localization with resources in Android

Section 265.1: Configuration types and qualifier names for each folder under the "res" directory

Each resource directory under the res folder (listed in the example above) can have different variations of the contained resources, placed in similarly named directories suffixed with different qualifier values for each configuration type.

An example of variations of the drawable/ directory with different qualifier values suffixed, which are often seen in our Android projects:

drawable/
drawable-en/
drawable-fr-rCA/
drawable-en-port/
drawable-en-notouch-12key/
drawable-port-ldpi/
drawable-port-notouch-12key/

Exhaustive list of all the different configuration types and their qualifier values for Android resources:

Configuration: Qualifier Values

MCC and MNC: Examples: mcc310, mcc310-mnc004, mcc208-mnc00, etc.
Language and region: Examples: en, fr, en-rUS, fr-rFR, fr-rCA
Layout direction: ldrtl, ldltr
smallestWidth (sw<N>dp): Examples: sw320dp, sw600dp, sw720dp
Available width (w<N>dp): Examples: w720dp, w1024dp
Available height (h<N>dp): Examples: h720dp, h1024dp
Screen size: small, normal, large, xlarge
Screen aspect: long, notlong
Round screen: round, notround
Screen orientation: port, land
UI mode: car, desk, television, appliance, watch
Night mode: night, notnight
Screen pixel density (dpi): ldpi, mdpi, hdpi, xhdpi, xxhdpi, xxxhdpi, nodpi, tvdpi, anydpi
Touchscreen type: notouch, finger
Keyboard availability: keysexposed, keyshidden, keyssoft
Primary text input method: nokeys, qwerty, 12key
Navigation key availability: navexposed, navhidden
Primary non-touch navigation method: nonav, dpad, trackball, wheel
Platform Version (API level): Examples: v3, v4, v7

Section 265.2: Adding translation to your Android app

You have to create a different strings.xml file for every new language.

1. Right-click on the res folder
2. Choose New > Values resource file
3. Select a locale from the available qualifiers
4. Click on the Next button (>>)
5. Select a language
6. Name the file strings.xml

strings.xml

<resources>
    <string name="app_name">Testing Application</string>
    <string name="hello">Hello World</string>
</resources>

strings.xml (hi)

<resources>
    <string name="app_name">परीक्षण एप्लिकेशन</string>
    <string name="hello">नमस्ते दुनिया</string>
</resources>

Setting the language programmatically:

public void setLocale(String locale) { // Pass "en", "hi", etc.
    myLocale = new Locale(locale);
    // Saving selected locale to session - SharedPreferences.
    saveLocale(locale);
    // Changing locale.
    Locale.setDefault(myLocale);
    android.content.res.Configuration config = new android.content.res.Configuration();
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.N) {
        config.setLocale(myLocale);
    } else {
        config.locale = myLocale;
    }
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.JELLY_BEAN_MR1) {
        getBaseContext().createConfigurationContext(config);
    } else {
        getBaseContext().getResources().updateConfiguration(config,
                getBaseContext().getResources().getDisplayMetrics());
    }
}

The function above will change the text fields which are referenced from strings.xml. For example, assume that you have the following two text views:

<TextView
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:text="@string/app_name"/>
<TextView
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:text="@string/hello"/>

Then, after changing the locale, the language strings having the ids app_name and hello will be changed accordingly.

Section 265.3: Type of resource directories under the "res" folder

When localizing, different types of resources are required, each of which has its own home in the Android project structure. Following are the different directories that we can place under the \res directory. The resource types placed in each of these directories are explained in the table below:

Directory: Resource Type

animator/: XML files that define property animations.
anim/: XML files that define tween animations. (Property animations can also be saved in this directory, but the animator/ directory is preferred for property animations to distinguish between the two types.)
color/: XML files that define a state list of colors.
See Color State List Resource.
drawable/: Bitmap files (.png, .9.png, .jpg, .gif) or XML files that are compiled into the following drawable resource subtypes: bitmap files, nine-patches (re-sizable bitmaps), state lists, shapes, animation drawables, and other drawables.
mipmap/: Drawable files for different launcher icon densities. For more information on managing launcher icons with mipmap/ folders, see Managing Projects Overview.
layout/: XML files that define a user interface layout. See Layout Resource.
menu/: XML files that define application menus, such as an Options Menu, Context Menu, or Sub Menu. See Menu Resource.
raw/: Arbitrary files to save in their raw form. To open these resources with a raw InputStream, call Resources.openRawResource() with the resource ID, which is R.raw.filename. However, if you need access to original file names and file hierarchy, you might consider saving some resources in the assets/ directory (instead of res/raw/). Files in assets/ are not given a resource ID, so you can read them only using AssetManager.
values/: XML files that contain simple values, such as strings, integers, and colors, as well as styles and themes.
xml/: Arbitrary XML files that can be read at runtime by calling Resources.getXML(). Various XML configuration files must be saved here, such as a searchable configuration.

Section 265.4: Change locale of android application programmatically

The examples above explain how to localize the resources of an application. The following example explains how to change the application locale within the application, not from the device. In order to change the application locale only, you can use the locale util below.

import android.app.Application;
import android.content.Context;
import android.content.SharedPreferences;
import android.content.res.Configuration;
import android.content.res.Resources;
import android.os.Build;
import android.preference.PreferenceManager;
import android.view.ContextThemeWrapper;

import java.util.Locale;

/**
 * Created by Umesh on 10/10/16.
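 * Utility that stores the chosen locale and applies it to resource configurations.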
 */
public class LocaleUtils {

    private static Locale mLocale;

    public static void setLocale(Locale locale) {
        mLocale = locale;
        if (mLocale != null) {
            Locale.setDefault(mLocale);
        }
    }

    public static void updateConfiguration(ContextThemeWrapper wrapper) {
        if (mLocale != null && Build.VERSION.SDK_INT >= Build.VERSION_CODES.JELLY_BEAN_MR1) {
            Configuration configuration = new Configuration();
            configuration.setLocale(mLocale);
            wrapper.applyOverrideConfiguration(configuration);
        }
    }

    public static void updateConfiguration(Application application, Configuration configuration) {
        if (mLocale != null && Build.VERSION.SDK_INT < Build.VERSION_CODES.JELLY_BEAN_MR1) {
            Configuration config = new Configuration(configuration);
            config.locale = mLocale;
            Resources res = application.getBaseContext().getResources();
            res.updateConfiguration(config, res.getDisplayMetrics());
        }
    }

    public static void updateConfiguration(Context context, String language, String country) {
        Locale locale = new Locale(language, country);
        setLocale(locale);
        if (mLocale != null) {
            Resources res = context.getResources();
            Configuration configuration = res.getConfiguration();
            configuration.locale = mLocale;
            res.updateConfiguration(configuration, res.getDisplayMetrics());
        }
    }

    public static String getPrefLangCode(Context context) {
        return PreferenceManager.getDefaultSharedPreferences(context).getString("lang_code", "en");
    }

    public static void setPrefLangCode(Context context, String mPrefLangCode) {
        SharedPreferences.Editor editor = PreferenceManager.getDefaultSharedPreferences(context).edit();
        editor.putString("lang_code", mPrefLangCode);
        editor.commit();
    }

    public static String getPrefCountryCode(Context context) {
        return PreferenceManager.getDefaultSharedPreferences(context).getString("country_code", "US");
    }

    public static void setPrefCountryCode(Context context, String mPrefCountryCode) {
        SharedPreferences.Editor editor = PreferenceManager.getDefaultSharedPreferences(context).edit();
        editor.putString("country_code", mPrefCountryCode);
        editor.commit();
    }
}

Initialize the locale that the user prefers from the Application class:

public class LocaleApp extends Application {
    @Override
    public void onCreate() {
        super.onCreate();
        LocaleUtils.setLocale(new Locale(LocaleUtils.getPrefLangCode(this), LocaleUtils.getPrefCountryCode(this)));
        LocaleUtils.updateConfiguration(this, getResources().getConfiguration());
    }
}

You also need to create a base activity and have all other activities extend it, so that you can change the locale of the application in a single place, as follows:

public abstract class LocalizationActivity extends AppCompatActivity {

    public LocalizationActivity() {
        LocaleUtils.updateConfiguration(this);
    }

    // We only override onCreate
    @Override
    protected void onCreate(@Nullable Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
    }
}

Note: Always initialize the locale in the constructor.

Now you can use LocalizationActivity as follows:

public class MainActivity extends LocalizationActivity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
    }
}

Note: When you change the locale of the application programmatically, you need to restart your activity for the locale change to take effect. For this solution to work properly and use the locale from shared preferences on app startup, you also need to register android:name=".LocaleApp" on the <application> tag in your Manifest.xml.

Sometimes the lint checker raises missing-translation errors when you create the release build.
To solve such issues, follow the options below.

First: If you want to disable translation for some strings only, then add the following attribute to the default strings.xml:

<string name="developer" translatable="false">Developer Name</string>

Second: To ignore all missing translations in a resource file, add the ignore attribute of the tools namespace in your strings file, as follows:

<?xml version="1.0" encoding="utf-8"?>
<resources xmlns:tools="http://schemas.android.com/tools"
    tools:ignore="MissingTranslation" >

    <!-- your strings here; no need now for the translatable attribute -->

</resources>

Third: Another way to disable non-translatable strings: http://tools.android.com/recent/non-translatablestrings

If you have a lot of resources that should not be translated, you can place them in a file named donottranslate.xml and lint will consider all of them non-translatable resources.

Fourth: You can also add the locale in the resource file:

<resources xmlns:tools="http://schemas.android.com/tools"
    tools:locale="en"
    tools:ignore="MissingTranslation">

You can also disable the missing translation check for lint from app/build.gradle:

lintOptions {
    disable 'MissingTranslation'
}

Section 265.5: Currency

Currency currency = Currency.getInstance("USD");
NumberFormat format = NumberFormat.getCurrencyInstance();
format.setCurrency(currency);
format.format(10.00);

Chapter 266: Convert Vietnamese string to English string in Android

Section 266.1: Example

String myStr = convert("Lê Minh Thoại là người Việt Nam");

converted: "Le Minh Thoai la nguoi Viet Nam"

Section 266.2: Converting a Vietnamese string into a string without diacritics

public static String convert(String str) {
    str = str.replaceAll("á|à|ả|ã|ạ|ă|ắ|ằ|ẳ|ẵ|ặ|â|ấ|ầ|ẩ|ẫ|ậ", "a");
    str = str.replaceAll("é|è|ẻ|ẽ|ẹ|ê|ế|ề|ể|ễ|ệ", "e");
    str = str.replaceAll("í|ì|ỉ|ĩ|ị", "i");
    str = str.replaceAll("ó|ò|ỏ|õ|ọ|ô|ố|ồ|ổ|ỗ|ộ|ơ|ớ|ờ|ở|ỡ|ợ", "o");
    str = str.replaceAll("ú|ù|ủ|ũ|ụ|ư|ứ|ừ|ử|ữ|ự", "u");
    str = str.replaceAll("ý|ỳ|ỷ|ỹ|ỵ", "y");
    str = str.replaceAll("đ", "d");
    str = str.replaceAll("Á|À|Ả|Ã|Ạ|Ă|Ắ|Ằ|Ẳ|Ẵ|Ặ|Â|Ấ|Ầ|Ẩ|Ẫ|Ậ", "A");
    str = str.replaceAll("É|È|Ẻ|Ẽ|Ẹ|Ê|Ế|Ề|Ể|Ễ|Ệ", "E");
    str = str.replaceAll("Í|Ì|Ỉ|Ĩ|Ị", "I");
    str = str.replaceAll("Ó|Ò|Ỏ|Õ|Ọ|Ô|Ố|Ồ|Ổ|Ỗ|Ộ|Ơ|Ớ|Ờ|Ở|Ỡ|Ợ", "O");
    str = str.replaceAll("Ú|Ù|Ủ|Ũ|Ụ|Ư|Ứ|Ừ|Ử|Ữ|Ự", "U");
    str = str.replaceAll("Ý|Ỳ|Ỷ|Ỹ|Ỵ", "Y");
    str = str.replaceAll("Đ", "D");
    return str;
}
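As an alternative sketch (not the method shown above), the standard java.text.Normalizer API can strip most Vietnamese diacritics in two lines by decomposing the string and removing the combining marks. Note that 'đ' and 'Đ' are standalone letters rather than base characters with combining marks, so they still need explicit replacement; the method name convertWithNormalizer is illustrative:

public static String convertWithNormalizer(String str) {
    // Decompose accented characters into base character + combining mark (NFD),
    // then remove all combining diacritical marks.
    String result = java.text.Normalizer.normalize(str, java.text.Normalizer.Form.NFD)
            .replaceAll("\\p{InCombiningDiacriticalMarks}+", "");
    // 'đ' and 'Đ' survive NFD unchanged, so map them separately.
    return result.replace('đ', 'd').replace('Đ', 'D');
}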
Crate msdf_sys
===
Struct msdf_sys::_IO_FILE
===

```
#[repr(C)]
pub struct _IO_FILE {
    pub _flags: c_int,
    pub _IO_read_ptr: *mut c_char,
    pub _IO_read_end: *mut c_char,
    pub _IO_read_base: *mut c_char,
    pub _IO_write_base: *mut c_char,
    pub _IO_write_ptr: *mut c_char,
    pub _IO_write_end: *mut c_char,
    pub _IO_buf_base: *mut c_char,
    pub _IO_buf_end: *mut c_char,
    pub _IO_save_base: *mut c_char,
    pub _IO_backup_base: *mut c_char,
    pub _IO_save_end: *mut c_char,
    pub _markers: *mut _IO_marker,
    pub _chain: *mut _IO_FILE,
    pub _fileno: c_int,
    pub _flags2: c_int,
    pub _old_offset: __off_t,
    pub _cur_column: c_ushort,
    pub _vtable_offset: c_schar,
    pub _shortbuf: [c_char; 1],
    pub _lock: *mut _IO_lock_t,
    pub _offset: __off64_t,
    pub _codecvt: *mut _IO_codecvt,
    pub _wide_data: *mut _IO_wide_data,
    pub _freeres_list: *mut _IO_FILE,
    pub _freeres_buf: *mut c_void,
    pub __pad5: size_t,
    pub _mode: c_int,
    pub _unused2: [c_char; 20],
}
```

Trait Implementations
---

`Clone`, `Copy`, `Debug`

Auto Trait Implementations
---

`RefUnwindSafe`, `!Send`, `!Sync`, `Unpin`, `UnwindSafe`
Struct msdf_sys::_IO_codecvt
===

```
#[repr(C)]
pub struct _IO_codecvt { /* private fields */ }
```

Trait Implementations
---

`Clone`, `Copy`, `Debug`

Auto Trait Implementations
---

`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, `UnwindSafe`

Struct msdf_sys::_IO_marker
===

```
#[repr(C)]
pub struct _IO_marker { /* private fields */ }
```

Trait Implementations
---

`Clone`, `Copy`, `Debug`

Auto Trait Implementations
---

`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, `UnwindSafe`
Struct msdf_sys::_IO_wide_data
===

```
#[repr(C)]
pub struct _IO_wide_data { /* private fields */ }
```

Trait Implementations
---

`Clone`, `Copy`, `Debug`

Auto Trait Implementations
---

`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, `UnwindSafe`
Struct msdf_sys::msdfgen_Bitmap
===

```
#[repr(C)]
pub struct msdfgen_Bitmap<T> {
    pub pixels: *mut T,
    pub w: c_int,
    pub h: c_int,
    pub _phantom_0: PhantomData<UnsafeCell<T>>,
}
```

Trait Implementations
---

`Clone` where `T: Clone`, `Copy` where `T: Copy`, `Debug` where `T: Debug`

Auto Trait Implementations
---

`!RefUnwindSafe`, `!Send`, `!Sync`, `Unpin` where `T: Unpin`, `UnwindSafe` where `T: UnwindSafe + RefUnwindSafe`
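A hedged sketch of wrapping caller-owned pixel storage in this struct. The field meanings are taken from the definition above; the helper name `make_bitmap` and the three-floats-per-pixel layout are assumptions (MSDFs are typically three-channel), and the buffer must outlive the bitmap because only a raw pointer is stored.

```
use std::marker::PhantomData;
use std::os::raw::c_int;
use msdf_sys::msdfgen_Bitmap;

// Hypothetical helper, not part of the crate.
fn make_bitmap(buf: &mut Vec<f32>, w: c_int, h: c_int) -> msdfgen_Bitmap<f32> {
    // Assumption: 3 channels per pixel; the struct only stores the raw
    // pointer plus dimensions, so `buf` must stay alive and unmoved.
    buf.resize((w * h * 3) as usize, 0.0);
    msdfgen_Bitmap { pixels: buf.as_mut_ptr(), w, h, _phantom_0: PhantomData }
}
```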
Struct msdf_sys::msdfgen_BitmapConstRef
===

```
#[repr(C)]
pub struct msdfgen_BitmapConstRef<T> {
    pub pixels: *mut T,
    pub w: c_int,
    pub h: c_int,
    pub _phantom_0: PhantomData<UnsafeCell<T>>,
}
```

Trait Implementations
---

`Clone` where `T: Clone`, `Copy` where `T: Copy`, `Debug` where `T: Debug`

Auto Trait Implementations
---

`!RefUnwindSafe`, `!Send`, `!Sync`, `Unpin` where `T: Unpin`, `UnwindSafe` where `T: UnwindSafe + RefUnwindSafe`
Struct msdf_sys::msdfgen_BitmapRef
===

```
#[repr(C)]
pub struct msdfgen_BitmapRef<T> {
    pub pixels: *mut T,
    pub w: c_int,
    pub h: c_int,
    pub _phantom_0: PhantomData<UnsafeCell<T>>,
}
```

Trait Implementations
---

`Clone` where `T: Clone`, `Copy` where `T: Copy`, `Debug` where `T: Debug`

Auto Trait Implementations
---

`!RefUnwindSafe`, `!Send`, `!Sync`, `Unpin` where `T: Unpin`, `UnwindSafe` where `T: UnwindSafe + RefUnwindSafe`

Struct msdf_sys::msdfgen_Contour
===

```
#[repr(C)]
pub struct msdfgen_Contour {
    pub edges: [u64; 3],
}
```

A single closed contour of a shape.

Fields
---

`edges: [u64; 3]`: The sequence of edges that make up the contour.

Implementations
---

```
pub unsafe fn addEdge(&mut self, edge: *const msdfgen_EdgeHolder)
pub unsafe fn addEdge1(&mut self) -> *mut msdfgen_EdgeHolder
pub unsafe fn bound(&self, l: *mut f64, b: *mut f64, r: *mut f64, t: *mut f64)
pub unsafe fn boundMiters(&self, l: *mut f64, b: *mut f64, r: *mut f64, t: *mut f64,
                          border: f64, miterLimit: f64, polarity: c_int)
pub unsafe fn winding(&self) -> c_int
pub unsafe fn reverse(&mut self)
```

Trait Implementations
---

`Debug`

Auto Trait Implementations
---

`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, `UnwindSafe`
Struct msdf_sys::msdfgen_CubicSegment
===

```
#[repr(C)]
pub struct msdfgen_CubicSegment {
    pub _base: msdfgen_EdgeSegment,
    pub p: [msdfgen_Point2; 4],
}
```

A cubic Bezier curve.

Implementations
---

```
pub unsafe fn deconverge(&mut self, param: c_int, amount: f64)
pub unsafe fn new(p0: msdfgen_Point2, p1: msdfgen_Point2, p2: msdfgen_Point2,
                  p3: msdfgen_Point2, edgeColor: msdfgen_EdgeColor) -> Self
```

Trait Implementations
---

`Debug`

Auto Trait Implementations
---

`RefUnwindSafe`, `!Send`, `!Sync`, `Unpin`, `UnwindSafe`
Struct msdf_sys::msdfgen_EdgeHolder
===

```
#[repr(C)]
pub struct msdfgen_EdgeHolder {
    pub edgeSegment: *mut msdfgen_EdgeSegment,
}
```

Container for a single edge of dynamic type.

Implementations
---

```
pub unsafe fn swap(a: *mut msdfgen_EdgeHolder, b: *mut msdfgen_EdgeHolder)
pub unsafe fn new() -> Self
pub unsafe fn new1(segment: *mut msdfgen_EdgeSegment) -> Self
pub unsafe fn new2(p0: msdfgen_Point2, p1: msdfgen_Point2,
                   edgeColor: msdfgen_EdgeColor) -> Self
pub unsafe fn new3(p0: msdfgen_Point2, p1: msdfgen_Point2, p2: msdfgen_Point2,
                   edgeColor: msdfgen_EdgeColor) -> Self
pub unsafe fn new4(p0: msdfgen_Point2, p1: msdfgen_Point2, p2: msdfgen_Point2,
                   p3: msdfgen_Point2, edgeColor: msdfgen_EdgeColor) -> Self
pub unsafe fn new5(orig: *const msdfgen_EdgeHolder) -> Self
pub unsafe fn destruct(&mut self)
```

Trait Implementations
---

`Debug`

Auto Trait Implementations
---

`RefUnwindSafe`, `!Send`, `!Sync`, `Unpin`, `UnwindSafe`

Struct msdf_sys::msdfgen_EdgeSegment
===

```
#[repr(C)]
pub struct msdfgen_EdgeSegment {
    pub vtable_: *const msdfgen_EdgeSegment__bindgen_vtable,
    pub color: msdfgen_EdgeColor,
}
```

An abstract edge segment.

Trait Implementations
---

`Debug`

Auto Trait Implementations
---

`RefUnwindSafe`, `!Send`, `!Sync`, `Unpin`, `UnwindSafe`
Struct msdf_sys::msdfgen_EdgeSegment__bindgen_vtable
===

```
#[repr(C)]
pub struct msdfgen_EdgeSegment__bindgen_vtable(_);
```

Auto Trait Implementations
---

`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, `UnwindSafe`
Struct msdf_sys::msdfgen_ErrorCorrectionConfig
===

```
#[repr(C)]
pub struct msdfgen_ErrorCorrectionConfig {
    pub mode: msdfgen_ErrorCorrectionConfig_Mode,
    pub distanceCheckMode: msdfgen_ErrorCorrectionConfig_DistanceCheckMode,
    pub minDeviationRatio: f64,
    pub minImproveRatio: f64,
    pub buffer: *mut msdfgen_byte,
}
```

The configuration of the MSDF error correction pass.

Fields
---

`minDeviationRatio: f64`: The minimum ratio between the actual and maximum expected distance delta to be considered an error.

`minImproveRatio: f64`: The minimum ratio between the pre-correction distance error and the post-correction distance error. Has no effect for DO_NOT_CHECK_DISTANCE.

`buffer: *mut msdfgen_byte`: An optional buffer to avoid dynamic allocation. Must have at least as many bytes as the MSDF has pixels.

Trait Implementations
---

`Clone`, `Copy`, `Debug`

Auto Trait Implementations
---

`RefUnwindSafe`, `!Send`, `!Sync`, `Unpin`, `UnwindSafe`
Struct msdf_sys::msdfgen_GeneratorConfig
===

```
#[repr(C)]
pub struct msdfgen_GeneratorConfig {
    pub overlapSupport: bool,
}
```

The configuration of the distance field generator algorithm.

Fields
---

`overlapSupport: bool`: Specifies whether to use the version of the algorithm that supports overlapping contours with the same winding. May be set to false to improve performance when no such contours are present.

Trait Implementations
---

`Clone`, `Copy`, `Debug`

Auto Trait Implementations
---

`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, `UnwindSafe`
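Since this is a plain `#[repr(C)]` struct with a single public field, it can be built directly with a struct literal. A minimal sketch; the field and its meaning come from the documentation above, everything else is illustrative:

```
use msdf_sys::msdfgen_GeneratorConfig;

// Disable overlap support when the input shape is known to contain no
// overlapping contours, trading robustness for speed (see field docs above).
let config = msdfgen_GeneratorConfig { overlapSupport: false };
assert!(!config.overlapSupport);
```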
Struct msdf_sys::msdfgen_LinearSegment
===

```
#[repr(C)]
pub struct msdfgen_LinearSegment {
    pub _base: msdfgen_EdgeSegment,
    pub p: [msdfgen_Point2; 2],
}
```

A line segment.

Implementations
---

```
pub unsafe fn length(&self) -> f64
pub unsafe fn new(p0: msdfgen_Point2, p1: msdfgen_Point2,
                  edgeColor: msdfgen_EdgeColor) -> Self
```

Trait Implementations
---

`Debug`

Auto Trait Implementations
---

`RefUnwindSafe`, `!Send`, `!Sync`, `Unpin`, `UnwindSafe`

Struct msdf_sys::msdfgen_MSDFGeneratorConfig
===

```
#[repr(C)]
pub struct msdfgen_MSDFGeneratorConfig {
    pub _base: msdfgen_GeneratorConfig,
    pub errorCorrection: msdfgen_ErrorCorrectionConfig,
}
```

The configuration of the multi-channel distance field generator algorithm.

Fields
---

`errorCorrection: msdfgen_ErrorCorrectionConfig`: Configuration of the error correction pass.

Trait Implementations
---

`Clone`, `Copy`, `Debug`

Auto Trait Implementations
---

`RefUnwindSafe`, `!Send`, `!Sync`, `Unpin`, `UnwindSafe`
Struct msdf_sys::msdfgen_Projection
===

```
#[repr(C)]
pub struct msdfgen_Projection {
    pub scale: msdfgen_Vector2,
    pub translate: msdfgen_Vector2,
}
```

A transformation from shape coordinates to pixel coordinates.

Fields
---
`scale: msdfgen_Vector2`
`translate: msdfgen_Vector2`

Implementations
---
- `pub unsafe fn project(&self, coord: *const msdfgen_Point2) -> msdfgen_Point2`
- `pub unsafe fn unproject(&self, coord: *const msdfgen_Point2) -> msdfgen_Point2`
- `pub unsafe fn projectVector(&self, vector: *const msdfgen_Vector2) -> msdfgen_Vector2`
- `pub unsafe fn unprojectVector(&self, vector: *const msdfgen_Vector2) -> msdfgen_Vector2`
- `pub unsafe fn projectX(&self, x: f64) -> f64`
- `pub unsafe fn projectY(&self, y: f64) -> f64`
- `pub unsafe fn unprojectX(&self, x: f64) -> f64`
- `pub unsafe fn unprojectY(&self, y: f64) -> f64`
- `pub unsafe fn new() -> Self`
- `pub unsafe fn new1(scale: *const msdfgen_Vector2, translate: *const msdfgen_Vector2) -> Self`

Trait Implementations
---
`Clone`, `Copy`, `Debug`

Auto Trait Implementations
---
`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, `UnwindSafe`
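The project/unproject pairs are inverse transforms between shape space and pixel space, which a short sketch can verify; the scale and translate values below are chosen arbitrarily.

```rust
use msdf_sys::{msdfgen_Projection, msdfgen_Vector2};

fn main() {
    // Arbitrary example values: 2x scale, translate by (8, 8).
    let scale = msdfgen_Vector2 { x: 2.0, y: 2.0 };
    let translate = msdfgen_Vector2 { x: 8.0, y: 8.0 };

    unsafe {
        let proj = msdfgen_Projection::new1(&scale, &translate);

        // projectX maps a shape-space coordinate to pixel space;
        // unprojectX undoes it, so a round trip returns the input.
        let px = proj.projectX(3.5);
        let back = proj.unprojectX(px);
        assert!((back - 3.5).abs() < 1e-12);
    }
}
```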
Struct msdf_sys::msdfgen_QuadraticSegment
===

```
#[repr(C)]
pub struct msdfgen_QuadraticSegment {
    pub _base: msdfgen_EdgeSegment,
    pub p: [msdfgen_Point2; 3],
}
```

A quadratic Bezier curve.

Fields
---
`_base: msdfgen_EdgeSegment`
`p: [msdfgen_Point2; 3]`

Implementations
---
- `pub unsafe fn length(&self) -> f64`
- `pub unsafe fn convertToCubic(&self) -> *mut msdfgen_EdgeSegment`
- `pub unsafe fn new(p0: msdfgen_Point2, p1: msdfgen_Point2, p2: msdfgen_Point2, edgeColor: msdfgen_EdgeColor) -> Self`

Trait Implementations
---
`Debug`

Auto Trait Implementations
---
`RefUnwindSafe`, `!Send`, `!Sync`, `Unpin`, `UnwindSafe`

Struct msdf_sys::msdfgen_Scanline
===

```
#[repr(C)]
pub struct msdfgen_Scanline {
    pub intersections: [u64; 3],
    pub lastIndex: c_int,
}
```

Represents a horizontal scanline intersecting a shape.

Fields
---
`intersections: [u64; 3]`
`lastIndex: c_int`

Implementations
---
- `pub unsafe fn overlap(a: *const msdfgen_Scanline, b: *const msdfgen_Scanline, xFrom: f64, xTo: f64, fillRule: msdfgen_FillRule) -> f64`
- `pub unsafe fn setIntersections(&mut self, intersections: *const [u64; 3])`
- `pub unsafe fn countIntersections(&self, x: f64) -> c_int`
- `pub unsafe fn sumIntersections(&self, x: f64) -> c_int`
- `pub unsafe fn filled(&self, x: f64, fillRule: msdfgen_FillRule) -> bool`
- `pub unsafe fn new() -> Self`

Trait Implementations
---
`Debug`

Auto Trait Implementations
---
`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, `UnwindSafe`

Struct msdf_sys::msdfgen_Scanline_Intersection
===

```
#[repr(C)]
pub struct msdfgen_Scanline_Intersection {
    pub x: f64,
    pub direction: c_int,
}
```

An intersection with the scanline.

Fields
---
`x: f64` - X coordinate.
`direction: c_int` - Normalized Y direction of the oriented edge at the point of intersection.

Trait Implementations
---
`Clone`, `Copy`, `Debug`

Auto Trait Implementations
---
`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, `UnwindSafe`

Struct msdf_sys::msdfgen_Shape
===

```
#[repr(C)]
pub struct msdfgen_Shape {
    pub contours: [u64; 3],
    pub inverseYAxis: bool,
}
```

Vector shape representation.

Fields
---
`contours: [u64; 3]` - The list of contours the shape consists of.
`inverseYAxis: bool` - Specifies whether the shape uses bottom-to-top (false) or top-to-bottom (true) Y coordinates.

Implementations
---
- `pub unsafe fn addContour(&mut self, contour: *const msdfgen_Contour)`
- `pub unsafe fn addContour1(&mut self) -> *mut msdfgen_Contour`
- `pub unsafe fn normalize(&mut self)`
- `pub unsafe fn validate(&self) -> bool`
- `pub unsafe fn bound(&self, l: *mut f64, b: *mut f64, r: *mut f64, t: *mut f64)`
- `pub unsafe fn boundMiters(&self, l: *mut f64, b: *mut f64, r: *mut f64, t: *mut f64, border: f64, miterLimit: f64, polarity: c_int)`
- `pub unsafe fn getBounds(&self, border: f64, miterLimit: f64, polarity: c_int) -> msdfgen_Shape_Bounds`
- `pub unsafe fn scanline(&self, line: *mut msdfgen_Scanline, y: f64)`
- `pub unsafe fn edgeCount(&self) -> c_int`
- `pub unsafe fn orientContours(&mut self)`
- `pub unsafe fn new() -> Self`

Trait Implementations
---
`Debug`

Auto Trait Implementations
---
`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, `UnwindSafe`

Struct msdf_sys::msdfgen_Shape_Bounds
===

```
#[repr(C)]
pub struct msdfgen_Shape_Bounds {
    pub l: f64,
    pub b: f64,
    pub r: f64,
    pub t: f64,
}
```

Fields
---
`l: f64`, `b: f64`, `r: f64`, `t: f64`

Trait Implementations
---
`Clone`, `Copy`, `Debug`

Auto Trait Implementations
---
`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, `UnwindSafe`
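A shape is built by adding contours and filling them with edges, then normalizing before generation. The sketch below builds a triangle out of linear edges using the free constructor wrappers documented later in this crate (`msdfgen_Contour_addEdge1`, `msdfgen_EdgeHolder_EdgeHolder2`); constructing an edge in place over the blank holder returned by `addEdge1` is one plausible, unverified way to drive these raw bindings, and `msdfgen_Point2` is assumed to be the bindgen alias for `msdfgen_Vector2`.

```rust
use msdf_sys::*;

/// Builds a triangular shape and returns its bounding box.
/// Illustrative sketch only; the C++ resources are leaked here.
unsafe fn triangle_bounds() -> msdfgen_Shape_Bounds {
    let mut shape = msdfgen_Shape::new();
    let contour = shape.addContour1(); // blank contour owned by the shape

    // Assumption: msdfgen_Point2 aliases msdfgen_Vector2, as in msdfgen's C++.
    let pts = [
        msdfgen_Vector2 { x: 0.0, y: 0.0 },
        msdfgen_Vector2 { x: 1.0, y: 0.0 },
        msdfgen_Vector2 { x: 0.5, y: 1.0 },
    ];

    // For each edge, let the contour allocate a blank holder, then construct
    // a linear edge into that slot through the C++ constructor wrapper.
    for i in 0..3 {
        let slot = msdfgen_Contour_addEdge1(contour);
        msdfgen_EdgeHolder_EdgeHolder2(slot, pts[i], pts[(i + 1) % 3],
                                       msdfgen_EdgeColor_WHITE);
    }

    shape.normalize();         // prepare the geometry for generation
    assert!(shape.validate()); // basic sanity checks
    shape.getBounds(0.0, 1.0, 0)
}
```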
Struct msdf_sys::msdfgen_SignedDistance
===

```
#[repr(C)]
pub struct msdfgen_SignedDistance {
    pub distance: f64,
    pub dot: f64,
}
```

Represents a signed distance and alignment, which together can be compared to uniquely determine the closest edge segment.

Fields
---
`distance: f64`
`dot: f64`

Implementations
---
- `pub unsafe fn new() -> Self`
- `pub unsafe fn new1(dist: f64, d: f64) -> Self`

Trait Implementations
---
`Clone`, `Copy`, `Debug`

Auto Trait Implementations
---
`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, `UnwindSafe`
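The C++ comparison operators that make this pairing useful are not exported through the bindings, but msdfgen's rule is small enough to restate: the candidate with the smaller absolute distance wins, and `dot` (alignment with the edge direction) breaks ties near shared endpoints. A sketch of that ordering, mirroring msdfgen's C++ `operator<` (a reimplementation, not an exported API):

```rust
use msdf_sys::msdfgen_SignedDistance;

/// Returns true if `a` is a closer match than `b`, following msdfgen's
/// C++ `operator<`: compare |distance| first, then break ties on `dot`.
fn closer(a: &msdfgen_SignedDistance, b: &msdfgen_SignedDistance) -> bool {
    a.distance.abs() < b.distance.abs()
        || (a.distance.abs() == b.distance.abs() && a.dot < b.dot)
}

fn main() {
    let near = msdfgen_SignedDistance { distance: -0.5, dot: 0.1 };
    let far = msdfgen_SignedDistance { distance: 2.0, dot: 0.0 };
    assert!(closer(&near, &far)); // the sign itself does not affect the ordering
}
```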
Struct msdf_sys::msdfgen_Vector2
===

```
#[repr(C)]
pub struct msdfgen_Vector2 {
    pub x: f64,
    pub y: f64,
}
```

A 2-dimensional euclidean vector with double precision. Implementation based on the Vector2 template from Artery Engine. @author <NAME>

Fields
---
`x: f64`
`y: f64`

Implementations
---
- `pub unsafe fn reset(&mut self)`
- `pub unsafe fn set(&mut self, x: f64, y: f64)`
- `pub unsafe fn length(&self) -> f64`
- `pub unsafe fn direction(&self) -> f64`
- `pub unsafe fn normalize(&self, allowZero: bool) -> msdfgen_Vector2`
- `pub unsafe fn getOrthogonal(&self, polarity: bool) -> msdfgen_Vector2`
- `pub unsafe fn getOrthonormal(&self, polarity: bool, allowZero: bool) -> msdfgen_Vector2`
- `pub unsafe fn project(&self, vector: *const msdfgen_Vector2, positive: bool) -> msdfgen_Vector2`
- `pub unsafe fn new(val: f64) -> Self`
- `pub unsafe fn new1(x: f64, y: f64) -> Self`

Trait Implementations
---
`Clone`, `Copy`, `Debug`

Auto Trait Implementations
---
`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, `UnwindSafe`
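Because the fields are public, a vector can be built either directly or through the FFI constructors; the methods route through the C++ implementation, so they are `unsafe`. A small sketch exercising `length` and `normalize` (a 3-4-5 triangle keeps the expected values obvious):

```rust
use msdf_sys::msdfgen_Vector2;

fn main() {
    let v = msdfgen_Vector2 { x: 3.0, y: 4.0 };
    unsafe {
        assert!((v.length() - 5.0).abs() < 1e-12);

        // normalize returns a new unit-length vector; allowZero only
        // matters for the zero vector, so it is irrelevant here.
        let unit = v.normalize(false);
        assert!((unit.x - 0.6).abs() < 1e-12 && (unit.y - 0.8).abs() < 1e-12);
    }
}
```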
Struct msdf_sys::std_allocator
===

```
#[repr(C)]
pub struct std_allocator {
    pub _address: u8,
}
```

Fields
---
`_address: u8`

Trait Implementations
---
`Clone`, `Copy`, `Debug`

Auto Trait Implementations
---
`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, `UnwindSafe`

Struct msdf_sys::std_allocator_rebind
===

Identical layout and implementations as `std_allocator` above.

Struct msdf_sys::std_vector
===

Identical layout and implementations as `std_allocator` above.

Struct msdf_sys::std_vector__Temporary_value
===

Identical layout and implementations as `std_allocator` above.

These four types are bindgen's opaque one-byte placeholders for uninstantiated C++ standard library templates (`std::allocator`, `std::vector`); they carry no usable data or API on the Rust side.
Constants msdf_sys::msdfgen_EdgeColor_*
===

```
pub const msdfgen_EdgeColor_BLACK: msdfgen_EdgeColor = 0;
pub const msdfgen_EdgeColor_RED: msdfgen_EdgeColor = 1;
pub const msdfgen_EdgeColor_GREEN: msdfgen_EdgeColor = 2;
pub const msdfgen_EdgeColor_YELLOW: msdfgen_EdgeColor = 3;
pub const msdfgen_EdgeColor_BLUE: msdfgen_EdgeColor = 4;
pub const msdfgen_EdgeColor_MAGENTA: msdfgen_EdgeColor = 5;
pub const msdfgen_EdgeColor_CYAN: msdfgen_EdgeColor = 6;
pub const msdfgen_EdgeColor_WHITE: msdfgen_EdgeColor = 7;
```

Constants msdf_sys::msdfgen_ErrorCorrectionConfig_DistanceCheckMode_*
===

```
pub const msdfgen_ErrorCorrectionConfig_DistanceCheckMode_DO_NOT_CHECK_DISTANCE: msdfgen_ErrorCorrectionConfig_DistanceCheckMode = 0;
pub const msdfgen_ErrorCorrectionConfig_DistanceCheckMode_CHECK_DISTANCE_AT_EDGE: msdfgen_ErrorCorrectionConfig_DistanceCheckMode = 1;
pub const msdfgen_ErrorCorrectionConfig_DistanceCheckMode_ALWAYS_CHECK_DISTANCE: msdfgen_ErrorCorrectionConfig_DistanceCheckMode = 2;
```

- `DO_NOT_CHECK_DISTANCE` (0) - Never computes exact shape distance.
- `CHECK_DISTANCE_AT_EDGE` (1) - Only computes exact shape distance at edges. Provides a good balance between speed and precision.
- `ALWAYS_CHECK_DISTANCE` (2) - Computes and compares the exact shape distance for each suspected artifact.

Constants msdf_sys::msdfgen_ErrorCorrectionConfig_Mode_*
===

```
pub const msdfgen_ErrorCorrectionConfig_Mode_DISABLED: msdfgen_ErrorCorrectionConfig_Mode = 0;
pub const msdfgen_ErrorCorrectionConfig_Mode_INDISCRIMINATE: msdfgen_ErrorCorrectionConfig_Mode = 1;
pub const msdfgen_ErrorCorrectionConfig_Mode_EDGE_PRIORITY: msdfgen_ErrorCorrectionConfig_Mode = 2;
pub const msdfgen_ErrorCorrectionConfig_Mode_EDGE_ONLY: msdfgen_ErrorCorrectionConfig_Mode = 3;
```

- `DISABLED` (0) - Skips the error correction pass.
- `INDISCRIMINATE` (1) - Corrects all discontinuities of the distance field, regardless of whether edges are adversely affected.
- `EDGE_PRIORITY` (2) - Corrects artifacts at edges and other discontinuous distances only if it does not affect edges or corners.
- `EDGE_ONLY` (3) - Only corrects artifacts at edges.

Constants msdf_sys::msdfgen_FillRule_*
===

```
pub const msdfgen_FillRule_FILL_NONZERO: msdfgen_FillRule = 0;
pub const msdfgen_FillRule_FILL_ODD: msdfgen_FillRule = 1;
pub const msdfgen_FillRule_FILL_POSITIVE: msdfgen_FillRule = 2;
pub const msdfgen_FillRule_FILL_NEGATIVE: msdfgen_FillRule = 3;
```

Static msdf_sys::msdfgen_ErrorCorrectionConfig_defaultMinDeviationRatio
===

```
pub static msdfgen_ErrorCorrectionConfig_defaultMinDeviationRatio: f64
```

The default value of minDeviationRatio.
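The EdgeColor values above are RGB channel bitmasks (bit 0 = red, bit 1 = green, bit 2 = blue), so the composite colors are bitwise ORs of the primaries, as a quick check confirms:

```rust
use msdf_sys::*;

fn main() {
    // Composite edge colors are the OR of their channel bits.
    assert_eq!(msdfgen_EdgeColor_YELLOW,
               msdfgen_EdgeColor_RED | msdfgen_EdgeColor_GREEN);
    assert_eq!(msdfgen_EdgeColor_CYAN,
               msdfgen_EdgeColor_GREEN | msdfgen_EdgeColor_BLUE);
    assert_eq!(msdfgen_EdgeColor_WHITE,
               msdfgen_EdgeColor_RED | msdfgen_EdgeColor_GREEN | msdfgen_EdgeColor_BLUE);
}
```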
Static msdf_sys::msdfgen_ErrorCorrectionConfig_defaultMinImproveRatio
===

```
pub static msdfgen_ErrorCorrectionConfig_defaultMinImproveRatio: f64
```

The default value of minImproveRatio.

Functions msdf_sys::msdfgen_Contour_*
===

- `pub unsafe extern "C" fn msdfgen_Contour_addEdge(this: *mut msdfgen_Contour, edge: *const msdfgen_EdgeHolder)` - Adds an edge to the contour.
- `pub unsafe extern "C" fn msdfgen_Contour_addEdge1(this: *mut msdfgen_Contour) -> *mut msdfgen_EdgeHolder` - Creates a new edge in the contour and returns its reference.
- `pub unsafe extern "C" fn msdfgen_Contour_bound(this: *const msdfgen_Contour, l: *mut f64, b: *mut f64, r: *mut f64, t: *mut f64)` - Adjusts the bounding box to fit the contour.
- `pub unsafe extern "C" fn msdfgen_Contour_boundMiters(this: *const msdfgen_Contour, l: *mut f64, b: *mut f64, r: *mut f64, t: *mut f64, border: f64, miterLimit: f64, polarity: c_int)` - Adjusts the bounding box to fit the contour border's mitered corners.
- `pub unsafe extern "C" fn msdfgen_Contour_reverse(this: *mut msdfgen_Contour)` - Reverses the sequence of edges on the contour.
- `pub unsafe extern "C" fn msdfgen_Contour_winding(this: *const msdfgen_Contour) -> c_int` - Computes the winding of the contour. Returns 1 if positive, -1 if negative.

Functions msdf_sys::msdfgen_CubicSegment_*
===

- `pub unsafe extern "C" fn msdfgen_CubicSegment_CubicSegment(this: *mut msdfgen_CubicSegment, p0: msdfgen_Point2, p1: msdfgen_Point2, p2: msdfgen_Point2, p3: msdfgen_Point2, edgeColor: msdfgen_EdgeColor)`
- `pub unsafe extern "C" fn msdfgen_CubicSegment_bound(this: *mut c_void, l: *mut f64, b: *mut f64, r: *mut f64, t: *mut f64)`
- `pub unsafe extern "C" fn msdfgen_CubicSegment_clone(this: *mut c_void) -> *mut msdfgen_CubicSegment`
- `pub unsafe extern "C" fn msdfgen_CubicSegment_deconverge(this: *mut msdfgen_CubicSegment, param: c_int, amount: f64)`
- `pub unsafe extern "C" fn msdfgen_CubicSegment_direction(this: *mut c_void, param: f64) -> msdfgen_Vector2`
- `pub unsafe extern "C" fn msdfgen_CubicSegment_directionChange(this: *mut c_void, param: f64) -> msdfgen_Vector2`
- `pub unsafe extern "C" fn msdfgen_CubicSegment_moveEndPoint(this: *mut c_void, to: msdfgen_Point2)`
- `pub unsafe extern "C" fn msdfgen_CubicSegment_moveStartPoint(this: *mut c_void, to: msdfgen_Point2)`
- `pub unsafe extern "C" fn msdfgen_CubicSegment_point(this: *mut c_void, param: f64) -> msdfgen_Point2`
- `pub unsafe extern "C" fn msdfgen_CubicSegment_reverse(this: *mut c_void)`
- `pub unsafe extern "C" fn msdfgen_CubicSegment_scanlineIntersections(this: *mut c_void, x: *mut f64, dy: *mut c_int, y: f64) -> c_int`
- `pub unsafe extern "C" fn msdfgen_CubicSegment_signedDistance(this: *mut c_void, origin: msdfgen_Point2, param: *mut f64) -> msdfgen_SignedDistance`
- `pub unsafe extern "C" fn msdfgen_CubicSegment_splitInThirds(this: *mut c_void, part1: *mut *mut msdfgen_EdgeSegment, part2: *mut *mut msdfgen_EdgeSegment, part3: *mut *mut msdfgen_EdgeSegment)`

Functions msdf_sys::msdfgen_EdgeHolder_*
===

- `pub unsafe extern "C" fn msdfgen_EdgeHolder_EdgeHolder(this: *mut msdfgen_EdgeHolder)`
- `pub unsafe extern "C" fn msdfgen_EdgeHolder_EdgeHolder1(this: *mut msdfgen_EdgeHolder, segment: *mut msdfgen_EdgeSegment)`
- `pub unsafe extern "C" fn msdfgen_EdgeHolder_EdgeHolder2(this: *mut msdfgen_EdgeHolder, p0: msdfgen_Point2, p1: msdfgen_Point2, edgeColor: msdfgen_EdgeColor)`
- `pub unsafe extern "C" fn msdfgen_EdgeHolder_EdgeHolder3(this: *mut msdfgen_EdgeHolder, p0: msdfgen_Point2, p1: msdfgen_Point2, p2: msdfgen_Point2, edgeColor: msdfgen_EdgeColor)`
- `pub unsafe extern "C" fn msdfgen_EdgeHolder_EdgeHolder4(this: *mut msdfgen_EdgeHolder, p0: msdfgen_Point2, p1: msdfgen_Point2, p2: msdfgen_Point2, p3: msdfgen_Point2, edgeColor: msdfgen_EdgeColor)`
- `pub unsafe extern "C" fn msdfgen_EdgeHolder_EdgeHolder5(this: *mut msdfgen_EdgeHolder, orig: *const msdfgen_EdgeHolder)`
- `pub unsafe extern "C" fn msdfgen_EdgeHolder_EdgeHolder_destructor(this: *mut msdfgen_EdgeHolder)`
- `pub unsafe extern "C" fn msdfgen_EdgeHolder_swap(a: *mut msdfgen_EdgeHolder, b: *mut msdfgen_EdgeHolder)` - Swaps the edges held by a and b.

Function msdf_sys::msdfgen_EdgeSegment_distanceToPseudoDistance
===

- `pub unsafe extern "C" fn msdfgen_EdgeSegment_distanceToPseudoDistance(this: *mut c_void, distance: *mut msdfgen_SignedDistance, origin: msdfgen_Point2, param: f64)` - Converts a previously retrieved signed distance from origin to pseudo-distance.
Functions msdf_sys::msdfgen_LinearSegment_*
===

- `pub unsafe extern "C" fn msdfgen_LinearSegment_LinearSegment(this: *mut msdfgen_LinearSegment, p0: msdfgen_Point2, p1: msdfgen_Point2, edgeColor: msdfgen_EdgeColor)`
- `pub unsafe extern "C" fn msdfgen_LinearSegment_bound(this: *mut c_void, l: *mut f64, b: *mut f64, r: *mut f64, t: *mut f64)`
- `pub unsafe extern "C" fn msdfgen_LinearSegment_clone(this: *mut c_void) -> *mut msdfgen_LinearSegment`
- `pub unsafe extern "C" fn msdfgen_LinearSegment_direction(this: *mut c_void, param: f64) -> msdfgen_Vector2`
- `pub unsafe extern "C" fn msdfgen_LinearSegment_directionChange(this: *mut c_void, param: f64) -> msdfgen_Vector2`
- `pub unsafe extern "C" fn msdfgen_LinearSegment_length(this: *const msdfgen_LinearSegment) -> f64`
- `pub unsafe extern "C" fn msdfgen_LinearSegment_moveEndPoint(this: *mut c_void, to: msdfgen_Point2)`
- `pub unsafe extern "C" fn msdfgen_LinearSegment_moveStartPoint(this: *mut c_void, to: msdfgen_Point2)`
- `pub unsafe extern "C" fn msdfgen_LinearSegment_point(this: *mut c_void, param: f64) -> msdfgen_Point2`
- `pub unsafe extern "C" fn msdfgen_LinearSegment_reverse(this: *mut c_void)`
- `pub unsafe extern "C" fn msdfgen_LinearSegment_scanlineIntersections(this: *mut c_void, x: *mut f64, dy: *mut c_int, y: f64) -> c_int`
- `pub unsafe extern "C" fn msdfgen_LinearSegment_signedDistance(this: *mut c_void, origin: msdfgen_Point2, param: *mut f64) -> msdfgen_SignedDistance`
- `pub unsafe extern "C" fn msdfgen_LinearSegment_splitInThirds(this: *mut c_void, part1: *mut *mut msdfgen_EdgeSegment, part2: *mut *mut msdfgen_EdgeSegment, part3: *mut *mut msdfgen_EdgeSegment)`

Functions msdf_sys::msdfgen_Projection_*
===

- `pub unsafe extern "C" fn msdfgen_Projection_Projection(this: *mut msdfgen_Projection)`
- `pub unsafe extern "C" fn msdfgen_Projection_Projection1(this: *mut msdfgen_Projection, scale: *const msdfgen_Vector2, translate: *const msdfgen_Vector2)`
- `pub unsafe extern "C" fn msdfgen_Projection_project(this: *const msdfgen_Projection, coord: *const msdfgen_Point2) -> msdfgen_Point2` - Converts the shape coordinate to pixel coordinate.
- `pub unsafe extern "C" fn msdfgen_Projection_projectVector(this: *const msdfgen_Projection, vector: *const msdfgen_Vector2) -> msdfgen_Vector2` - Converts the vector to pixel coordinate space.
- `pub unsafe extern "C" fn msdfgen_Projection_projectX(this: *const msdfgen_Projection, x: f64) -> f64` - Converts the X-coordinate from shape to pixel coordinate space.
- `pub unsafe extern "C" fn msdfgen_Projection_projectY(this: *const msdfgen_Projection, y: f64) -> f64` - Converts the Y-coordinate from shape to pixel coordinate space.
- `pub unsafe extern "C" fn msdfgen_Projection_unproject(this: *const msdfgen_Projection, coord: *const msdfgen_Point2) -> msdfgen_Point2` - Converts the pixel coordinate to shape coordinate.
- `pub unsafe extern "C" fn msdfgen_Projection_unprojectVector(this: *const msdfgen_Projection, vector: *const msdfgen_Vector2) -> msdfgen_Vector2` - Converts the vector from pixel coordinate space.
- `pub unsafe extern "C" fn msdfgen_Projection_unprojectX(this: *const msdfgen_Projection, x: f64) -> f64` - Converts the X-coordinate from pixel to shape coordinate space.
- `pub unsafe extern "C" fn msdfgen_Projection_unprojectY(this: *const msdfgen_Projection, y: f64) -> f64` - Converts the Y-coordinate from pixel to shape coordinate space.

Functions msdf_sys::msdfgen_QuadraticSegment_*
===

- `pub unsafe extern "C" fn msdfgen_QuadraticSegment_QuadraticSegment(this: *mut msdfgen_QuadraticSegment, p0: msdfgen_Point2, p1: msdfgen_Point2, p2: msdfgen_Point2, edgeColor: msdfgen_EdgeColor)`
- `pub unsafe extern "C" fn msdfgen_QuadraticSegment_bound(this: *mut c_void, l: *mut f64, b: *mut f64, r: *mut f64, t: *mut f64)`
- `pub unsafe extern "C" fn msdfgen_QuadraticSegment_clone(this: *mut c_void) -> *mut msdfgen_QuadraticSegment`
- `pub unsafe extern "C" fn msdfgen_QuadraticSegment_convertToCubic(this: *const msdfgen_QuadraticSegment) -> *mut msdfgen_EdgeSegment`
- `pub unsafe extern "C" fn msdfgen_QuadraticSegment_direction(this: *mut c_void, param: f64) -> msdfgen_Vector2`
- `pub unsafe extern "C" fn msdfgen_QuadraticSegment_directionChange(this: *mut c_void, param: f64) -> msdfgen_Vector2`
- `pub unsafe extern "C" fn msdfgen_QuadraticSegment_length(this: *const msdfgen_QuadraticSegment) -> f64`
- `pub unsafe extern "C" fn msdfgen_QuadraticSegment_moveEndPoint(this: *mut c_void, to: msdfgen_Point2)`
- `pub unsafe extern "C" fn msdfgen_QuadraticSegment_moveStartPoint(this: *mut c_void, to: msdfgen_Point2)`
- `pub unsafe extern "C" fn msdfgen_QuadraticSegment_point(this: *mut c_void, param: f64) -> msdfgen_Point2`
- `pub unsafe extern "C" fn msdfgen_QuadraticSegment_reverse(this: *mut c_void)`
- `pub unsafe extern "C" fn msdfgen_QuadraticSegment_scanlineIntersections(this: *mut c_void, x: *mut f64, dy: *mut c_int, y: f64) -> c_int`
- `pub unsafe extern "C" fn msdfgen_QuadraticSegment_signedDistance(this: *mut c_void, origin: msdfgen_Point2, param: *mut f64) -> msdfgen_SignedDistance`
- `pub unsafe extern "C" fn msdfgen_QuadraticSegment_splitInThirds(this: *mut c_void, part1: *mut *mut msdfgen_EdgeSegment, part2: *mut *mut msdfgen_EdgeSegment, part3: *mut *mut msdfgen_EdgeSegment)`

Functions msdf_sys::msdfgen_Scanline_*
===

- `pub unsafe extern "C" fn msdfgen_Scanline_Scanline(this: *mut msdfgen_Scanline)`
- `pub unsafe extern "C" fn msdfgen_Scanline_countIntersections(this: *const msdfgen_Scanline, x: f64) -> c_int` - Returns the number of intersections left of x.
- `pub unsafe extern "C" fn msdfgen_Scanline_filled(this: *const msdfgen_Scanline, x: f64, fillRule: msdfgen_FillRule) -> bool` - Decides whether the scanline is filled at x based on the fill rule.
- `pub unsafe extern "C" fn msdfgen_Scanline_overlap(a: *const msdfgen_Scanline, b: *const msdfgen_Scanline, xFrom: f64, xTo: f64, fillRule: msdfgen_FillRule) -> f64`
- `pub unsafe extern "C" fn msdfgen_Scanline_setIntersections(this: *mut msdfgen_Scanline, intersections: *const [u64; 3])` - Populates the intersection list.
- `pub unsafe extern "C" fn msdfgen_Scanline_sumIntersections(this: *const msdfgen_Scanline, x: f64) -> c_int` - Returns the total sign of intersections left of x.
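Together with `msdfgen_Shape::scanline` (documented on the struct above), the scanline queries give a point-in-shape test that needs no distance field at all. A minimal sketch, assuming a valid, normalized shape:

```rust
use msdf_sys::*;

/// Tests whether (x, y) lies inside `shape` under the non-zero fill rule.
unsafe fn point_inside(shape: &msdfgen_Shape, x: f64, y: f64) -> bool {
    // Intersect the shape with the horizontal line at height y...
    let mut line = msdfgen_Scanline::new();
    shape.scanline(&mut line, y);
    // ...then evaluate the fill at x.
    line.filled(x, msdfgen_FillRule_FILL_NONZERO)
}
```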
Functions msdf_sys::msdfgen_Shape_*
===

- `pub unsafe extern "C" fn msdfgen_Shape_Shape(this: *mut msdfgen_Shape)`
- `pub unsafe extern "C" fn msdfgen_Shape_addContour(this: *mut msdfgen_Shape, contour: *const msdfgen_Contour)` - Adds a contour.
- `pub unsafe extern "C" fn msdfgen_Shape_addContour1(this: *mut msdfgen_Shape) -> *mut msdfgen_Contour` - Adds a blank contour and returns its reference.
- `pub unsafe extern "C" fn msdfgen_Shape_bound(this: *const msdfgen_Shape, l: *mut f64, b: *mut f64, r: *mut f64, t: *mut f64)` - Adjusts the bounding box to fit the shape.
- `pub unsafe extern "C" fn msdfgen_Shape_boundMiters(this: *const msdfgen_Shape, l: *mut f64, b: *mut f64, r: *mut f64, t: *mut f64, border: f64, miterLimit: f64, polarity: c_int)` - Adjusts the bounding box to fit the shape border's mitered corners.
- `pub unsafe extern "C" fn msdfgen_Shape_edgeCount(this: *const msdfgen_Shape) -> c_int` - Returns the total number of edge segments.
- `pub unsafe extern "C" fn msdfgen_Shape_getBounds(this: *const msdfgen_Shape, border: f64, miterLimit: f64, polarity: c_int) -> msdfgen_Shape_Bounds` - Computes the minimum bounding box that fits the shape, optionally with a (mitered) border.
- `pub unsafe extern "C" fn msdfgen_Shape_normalize(this: *mut msdfgen_Shape)` - Normalizes the shape geometry for distance field generation.
- `pub unsafe extern "C" fn msdfgen_Shape_orientContours(this: *mut msdfgen_Shape)` - Assumes the contours are unoriented (even-odd fill rule) and attempts to orient them to conform to the non-zero winding rule.
- `pub unsafe extern "C" fn msdfgen_Shape_scanline(this: *const msdfgen_Shape, line: *mut msdfgen_Scanline, y: f64)` - Outputs the scanline that intersects the shape at y.
- `pub unsafe extern "C" fn msdfgen_Shape_validate(this: *const msdfgen_Shape) -> bool` - Performs basic checks to determine if the object represents a valid shape.

Functions msdf_sys::msdfgen_SignedDistance_*
===

- `pub unsafe extern "C" fn msdfgen_SignedDistance_SignedDistance(this: *mut msdfgen_SignedDistance)`
- `pub unsafe extern "C" fn msdfgen_SignedDistance_SignedDistance1(this: *mut msdfgen_SignedDistance, dist: f64, d: f64)`

Functions msdf_sys::msdfgen_Vector2_*
===

- `pub unsafe extern "C" fn msdfgen_Vector2_Vector2(this: *mut msdfgen_Vector2, val: f64)`
- `pub unsafe extern "C" fn msdfgen_Vector2_Vector21(this: *mut msdfgen_Vector2, x: f64, y: f64)`
- `pub unsafe extern "C" fn msdfgen_Vector2_direction(this: *const msdfgen_Vector2) -> f64` - Returns the angle of the vector in radians (atan2).
- `pub unsafe extern "C" fn msdfgen_Vector2_getOrthogonal(this: *const msdfgen_Vector2, polarity: bool) -> msdfgen_Vector2` - Returns a vector with the same length that is orthogonal to this one.
- `pub unsafe extern "C" fn msdfgen_Vector2_getOrthonormal(this: *const msdfgen_Vector2, polarity: bool, allowZero: bool) -> msdfgen_Vector2` - Returns a vector with unit length that is orthogonal to this one.
- `pub unsafe extern "C" fn msdfgen_Vector2_length(this: *const msdfgen_Vector2) -> f64` - Returns the vector's length.
- `pub unsafe extern "C" fn msdfgen_Vector2_normalize(this: *const msdfgen_Vector2, allowZero: bool) -> msdfgen_Vector2` - Returns the normalized vector - one that has the same direction but unit length.
- `pub unsafe extern "C" fn msdfgen_Vector2_project(this: *const msdfgen_Vector2, vector: *const msdfgen_Vector2, positive: bool) -> msdfgen_Vector2` - Returns a vector projected along this one.
Function msdf_sys::msdfgen_Vector2_reset === ``` pub unsafe extern "C" fn msdfgen_Vector2_reset(     this: *mut msdfgen_Vector2 ) ``` Sets the vector to zero. Function msdf_sys::msdfgen_Vector2_set === ``` pub unsafe extern "C" fn msdfgen_Vector2_set(     this: *mut msdfgen_Vector2,     x: f64,     y: f64 ) ``` Sets individual elements of the vector. Function msdf_sys::msdfgen_distanceSignCorrection === ``` pub unsafe extern "C" fn msdfgen_distanceSignCorrection(     sdf: *const msdfgen_BitmapRef<f32>,     shape: *const msdfgen_Shape,     projection: *const msdfgen_Projection,     fillRule: msdfgen_FillRule ) ``` Fixes the sign of the input signed distance field, so that it matches the shape’s rasterized fill. Function msdf_sys::msdfgen_distanceSignCorrection1 === ``` pub unsafe extern "C" fn msdfgen_distanceSignCorrection1(     sdf: *const msdfgen_BitmapRef<f32>,     shape: *const msdfgen_Shape,     projection: *const msdfgen_Projection,     fillRule: msdfgen_FillRule ) ``` Function msdf_sys::msdfgen_distanceSignCorrection2 === ``` pub unsafe extern "C" fn msdfgen_distanceSignCorrection2(     sdf: *const msdfgen_BitmapRef<f32>,     shape: *const msdfgen_Shape,     projection: *const msdfgen_Projection,     fillRule: msdfgen_FillRule ) ``` Function msdf_sys::msdfgen_distanceSignCorrection3 === ``` pub unsafe extern "C" fn msdfgen_distanceSignCorrection3(     sdf: *const msdfgen_BitmapRef<f32>,     shape: *const msdfgen_Shape,     scale: *const msdfgen_Vector2,     translate: *const msdfgen_Vector2,     fillRule: msdfgen_FillRule ) ``` Function msdf_sys::msdfgen_distanceSignCorrection4 === ``` pub unsafe extern "C" fn msdfgen_distanceSignCorrection4(     sdf: *const msdfgen_BitmapRef<f32>,     shape: *const msdfgen_Shape,     scale: *const msdfgen_Vector2,     translate: *const msdfgen_Vector2,     fillRule: msdfgen_FillRule ) ``` Function msdf_sys::msdfgen_distanceSignCorrection5 === ``` pub unsafe extern "C" fn msdfgen_distanceSignCorrection5(     sdf: *const msdfgen_BitmapRef<f32>,     shape: *const msdfgen_Shape,     scale: *const msdfgen_Vector2,     translate: *const msdfgen_Vector2,     fillRule: msdfgen_FillRule ) ``` Function msdf_sys::msdfgen_edgeColoringByDistance === ``` pub unsafe extern "C" fn msdfgen_edgeColoringByDistance(     shape: *mut msdfgen_Shape,     angleThreshold: f64,     seed: c_ulonglong ) ``` The alternative coloring by distance tries to use different colors for edges that are close together. This should theoretically be the best strategy on average. However, since it needs to compute the distance between all pairs of edges, and perform a graph optimization task, it is much slower than the rest. Function msdf_sys::msdfgen_edgeColoringInkTrap === ``` pub unsafe extern "C" fn msdfgen_edgeColoringInkTrap(     shape: *mut msdfgen_Shape,     angleThreshold: f64,     seed: c_ulonglong ) ``` The alternative “ink trap” coloring strategy is designed for better results with typefaces that use ink traps as a design feature. It guarantees that even if all edges that are shorter than both their neighboring edges are removed, the coloring remains consistent with the established rules. Function msdf_sys::msdfgen_edgeColoringSimple === ``` pub unsafe extern "C" fn msdfgen_edgeColoringSimple(     shape: *mut msdfgen_Shape,     angleThreshold: f64,     seed: c_ulonglong ) ``` Assigns colors to edges of the shape in accordance to the multi-channel distance field technique. May split some edges if necessary. 
angleThreshold specifies the maximum angle (in radians) to be considered a corner, for example 3 (~172 degrees). Values below 1/2 PI will be treated as the external angle. Function msdf_sys::msdfgen_estimateSDFError === ``` pub unsafe extern "C" fn msdfgen_estimateSDFError(     sdf: *const msdfgen_BitmapConstRef<f32>,     shape: *const msdfgen_Shape,     projection: *const msdfgen_Projection,     scanlinesPerRow: c_int,     fillRule: msdfgen_FillRule ) -> f64 ``` Estimates the portion of the area that will be filled incorrectly when rendering using the SDF. Function msdf_sys::msdfgen_estimateSDFError1 === ``` pub unsafe extern "C" fn msdfgen_estimateSDFError1(     sdf: *const msdfgen_BitmapConstRef<f32>,     shape: *const msdfgen_Shape,     projection: *const msdfgen_Projection,     scanlinesPerRow: c_int,     fillRule: msdfgen_FillRule ) -> f64 ``` Function msdf_sys::msdfgen_estimateSDFError2 === ``` pub unsafe extern "C" fn msdfgen_estimateSDFError2(     sdf: *const msdfgen_BitmapConstRef<f32>,     shape: *const msdfgen_Shape,     projection: *const msdfgen_Projection,     scanlinesPerRow: c_int,     fillRule: msdfgen_FillRule ) -> f64 ``` Function msdf_sys::msdfgen_estimateSDFError3 === ``` pub unsafe extern "C" fn msdfgen_estimateSDFError3(     sdf: *const msdfgen_BitmapConstRef<f32>,     shape: *const msdfgen_Shape,     scale: *const msdfgen_Vector2,     translate: *const msdfgen_Vector2,     scanlinesPerRow: c_int,     fillRule: msdfgen_FillRule ) -> f64 ``` Function msdf_sys::msdfgen_estimateSDFError4 === ``` pub unsafe extern "C" fn msdfgen_estimateSDFError4(     sdf: *const msdfgen_BitmapConstRef<f32>,     shape: *const msdfgen_Shape,     scale: *const msdfgen_Vector2,     translate: *const msdfgen_Vector2,     scanlinesPerRow: c_int,     fillRule: msdfgen_FillRule ) -> f64 ``` Function msdf_sys::msdfgen_estimateSDFError5 === ``` pub unsafe extern "C" fn msdfgen_estimateSDFError5(     sdf: *const msdfgen_BitmapConstRef<f32>,     shape: *const msdfgen_Shape,     scale: *const msdfgen_Vector2,     translate: *const msdfgen_Vector2,     scanlinesPerRow: c_int,     fillRule: msdfgen_FillRule ) -> f64 ``` Function msdf_sys::msdfgen_generateMSDF === ``` pub unsafe extern "C" fn msdfgen_generateMSDF(     output: *const msdfgen_BitmapRef<f32>,     shape: *const msdfgen_Shape,     projection: *const msdfgen_Projection,     range: f64,     config: *const msdfgen_MSDFGeneratorConfig ) ``` Generates a multi-channel signed distance field. Edge colors must be assigned first! 
(See edgeColoringSimple) Function msdf_sys::msdfgen_generateMSDF1 === ``` pub unsafe extern "C" fn msdfgen_generateMSDF1(     output: *const msdfgen_BitmapRef<f32>,     shape: *const msdfgen_Shape,     range: f64,     scale: *const msdfgen_Vector2,     translate: *const msdfgen_Vector2,     errorCorrectionConfig: *const msdfgen_ErrorCorrectionConfig,     overlapSupport: bool ) ``` Function msdf_sys::msdfgen_generateMSDF_legacy === ``` pub unsafe extern "C" fn msdfgen_generateMSDF_legacy(     output: *const msdfgen_BitmapRef<f32>,     shape: *const msdfgen_Shape,     range: f64,     scale: *const msdfgen_Vector2,     translate: *const msdfgen_Vector2,     errorCorrectionConfig: msdfgen_ErrorCorrectionConfig ) ``` Function msdf_sys::msdfgen_generateMTSDF === ``` pub unsafe extern "C" fn msdfgen_generateMTSDF(     output: *const msdfgen_BitmapRef<f32>,     shape: *const msdfgen_Shape,     projection: *const msdfgen_Projection,     range: f64,     config: *const msdfgen_MSDFGeneratorConfig ) ``` Generates a multi-channel signed distance field with true distance in the alpha channel. Edge colors must be assigned first. Function msdf_sys::msdfgen_generateMTSDF1 === ``` pub unsafe extern "C" fn msdfgen_generateMTSDF1(     output: *const msdfgen_BitmapRef<f32>,     shape: *const msdfgen_Shape,     range: f64,     scale: *const msdfgen_Vector2,     translate: *const msdfgen_Vector2,     errorCorrectionConfig: *const msdfgen_ErrorCorrectionConfig,     overlapSupport: bool ) ``` Function msdf_sys::msdfgen_generateMTSDF_legacy === ``` pub unsafe extern "C" fn msdfgen_generateMTSDF_legacy(     output: *const msdfgen_BitmapRef<f32>,     shape: *const msdfgen_Shape,     range: f64,     scale: *const msdfgen_Vector2,     translate: *const msdfgen_Vector2,     errorCorrectionConfig: msdfgen_ErrorCorrectionConfig ) ``` Function msdf_sys::msdfgen_generatePseudoSDF === ``` pub unsafe extern "C" fn msdfgen_generatePseudoSDF(     output: *const msdfgen_BitmapRef<f32>,     shape: *const msdfgen_Shape,     projection: *const msdfgen_Projection,     range: f64,     config: *const msdfgen_GeneratorConfig ) ``` Generates a single-channel signed pseudo-distance field. Function msdf_sys::msdfgen_generatePseudoSDF1 === ``` pub unsafe extern "C" fn msdfgen_generatePseudoSDF1(     output: *const msdfgen_BitmapRef<f32>,     shape: *const msdfgen_Shape,     range: f64,     scale: *const msdfgen_Vector2,     translate: *const msdfgen_Vector2,     overlapSupport: bool ) ``` Function msdf_sys::msdfgen_generatePseudoSDF_legacy === ``` pub unsafe extern "C" fn msdfgen_generatePseudoSDF_legacy(     output: *const msdfgen_BitmapRef<f32>,     shape: *const msdfgen_Shape,     range: f64,     scale: *const msdfgen_Vector2,     translate: *const msdfgen_Vector2 ) ``` Function msdf_sys::msdfgen_generateSDF === ``` pub unsafe extern "C" fn msdfgen_generateSDF(     output: *const msdfgen_BitmapRef<f32>,     shape: *const msdfgen_Shape,     projection: *const msdfgen_Projection,     range: f64,     config: *const msdfgen_GeneratorConfig ) ``` Generates a conventional single-channel signed distance field. 
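A minimal sketch of the single-channel pipeline, using the scale/translate variant msdfgen_generateSDF1 (listed next). It assumes a `shape` built and parsed as in the scanline sketch earlier, that msdfgen_BitmapRef<f32> is a plain { pixels, width, height } view (the field names are an assumption about the generated bindings), and that msdfgen_Vector2 exposes public x/y fields. For a multi-channel field you would first assign edge colors, e.g. with msdfgen_edgeColoringSimple, per the note above.

```rust
use msdf_sys::*;
use std::os::raw::c_int;

// Renders a 64x64 single-channel SDF for an already-constructed shape and
// returns the pixel buffer. Hedged sketch, not a definitive API usage.
unsafe fn render_sdf(shape: &mut msdfgen_Shape) -> Vec<f32> {
    msdfgen_Shape_normalize(shape);

    let (w, h) = (64usize, 64usize);
    let mut pixels = vec![0f32; w * h];
    // Assumption: BitmapRef is a borrowed view over caller-owned pixels.
    let output = msdfgen_BitmapRef::<f32> {
        pixels: pixels.as_mut_ptr(),
        width: w as c_int,
        height: h as c_int,
    };

    let scale = msdfgen_Vector2 { x: 1.0, y: 1.0 };
    let translate = msdfgen_Vector2 { x: 16.0, y: 16.0 };
    // range = 4 shape units of signed distance encoded around the 0.5 isoline;
    // final bool enables overlap support.
    msdfgen_generateSDF1(&output, shape, 4.0, &scale, &translate, true);
    pixels
}
```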
Function msdf_sys::msdfgen_generateSDF1 === ``` pub unsafe extern "C" fn msdfgen_generateSDF1(     output: *const msdfgen_BitmapRef<f32>,     shape: *const msdfgen_Shape,     range: f64,     scale: *const msdfgen_Vector2,     translate: *const msdfgen_Vector2,     overlapSupport: bool ) ``` Function msdf_sys::msdfgen_generateSDF_legacy === ``` pub unsafe extern "C" fn msdfgen_generateSDF_legacy(     output: *const msdfgen_BitmapRef<f32>,     shape: *const msdfgen_Shape,     range: f64,     scale: *const msdfgen_Vector2,     translate: *const msdfgen_Vector2 ) ``` Function msdf_sys::msdfgen_interpretFillRule === ``` pub unsafe extern "C" fn msdfgen_interpretFillRule(     intersections: c_int,     fillRule: msdfgen_FillRule ) -> bool ``` Resolves the number of intersections into a binary fill value based on fill rule. Function msdf_sys::msdfgen_msdfErrorCorrection === ``` pub unsafe extern "C" fn msdfgen_msdfErrorCorrection(     sdf: *const msdfgen_BitmapRef<f32>,     shape: *const msdfgen_Shape,     projection: *const msdfgen_Projection,     range: f64,     config: *const msdfgen_MSDFGeneratorConfig ) ``` Predicts potential artifacts caused by the interpolation of the MSDF and corrects them by converting nearby texels to single-channel. Function msdf_sys::msdfgen_msdfErrorCorrection1 === ``` pub unsafe extern "C" fn msdfgen_msdfErrorCorrection1(     sdf: *const msdfgen_BitmapRef<f32>,     shape: *const msdfgen_Shape,     projection: *const msdfgen_Projection,     range: f64,     config: *const msdfgen_MSDFGeneratorConfig ) ``` Function msdf_sys::msdfgen_msdfErrorCorrection_legacy === ``` pub unsafe extern "C" fn msdfgen_msdfErrorCorrection_legacy(     output: *const msdfgen_BitmapRef<f32>,     threshold: *const msdfgen_Vector2 ) ``` The original version of the error correction algorithm. Function msdf_sys::msdfgen_msdfErrorCorrection_legacy1 === ``` pub unsafe extern "C" fn msdfgen_msdfErrorCorrection_legacy1(     output: *const msdfgen_BitmapRef<f32>,     threshold: *const msdfgen_Vector2 ) ``` Function msdf_sys::msdfgen_msdfFastDistanceErrorCorrection === ``` pub unsafe extern "C" fn msdfgen_msdfFastDistanceErrorCorrection(     sdf: *const msdfgen_BitmapRef<f32>,     projection: *const msdfgen_Projection,     range: f64,     minDeviationRatio: f64 ) ``` Applies the simplified error correction to all discontinuous distances (INDISCRIMINATE mode). Does not need shape or translation. Function msdf_sys::msdfgen_msdfFastDistanceErrorCorrection1 === ``` pub unsafe extern "C" fn msdfgen_msdfFastDistanceErrorCorrection1(     sdf: *const msdfgen_BitmapRef<f32>,     projection: *const msdfgen_Projection,     range: f64,     minDeviationRatio: f64 ) ``` Function msdf_sys::msdfgen_msdfFastEdgeErrorCorrection === ``` pub unsafe extern "C" fn msdfgen_msdfFastEdgeErrorCorrection(     sdf: *const msdfgen_BitmapRef<f32>,     projection: *const msdfgen_Projection,     range: f64,     minDeviationRatio: f64 ) ``` Applies the simplified error correction to edges only (EDGE_ONLY mode). Does not need shape or translation. 
Function msdf_sys::msdfgen_msdfFastEdgeErrorCorrection1 === ``` pub unsafe extern "C" fn msdfgen_msdfFastEdgeErrorCorrection1(     sdf: *const msdfgen_BitmapRef<f32>,     projection: *const msdfgen_Projection,     range: f64,     minDeviationRatio: f64 ) ``` Function msdf_sys::msdfgen_rasterize === ``` pub unsafe extern "C" fn msdfgen_rasterize(     output: *const msdfgen_BitmapRef<f32>,     shape: *const msdfgen_Shape,     projection: *const msdfgen_Projection,     fillRule: msdfgen_FillRule ) ``` Rasterizes the shape into a monochrome bitmap. Function msdf_sys::msdfgen_rasterize1 === ``` pub unsafe extern "C" fn msdfgen_rasterize1(     output: *const msdfgen_BitmapRef<f32>,     shape: *const msdfgen_Shape,     scale: *const msdfgen_Vector2,     translate: *const msdfgen_Vector2,     fillRule: msdfgen_FillRule ) ``` Function msdf_sys::msdfgen_readShapeDescription === ``` pub unsafe extern "C" fn msdfgen_readShapeDescription(     input: *mut FILE,     output: *mut msdfgen_Shape,     colorsSpecified: *mut bool ) -> bool ``` Deserializes a text description of a vector shape into output. Function msdf_sys::msdfgen_readShapeDescription1 === ``` pub unsafe extern "C" fn msdfgen_readShapeDescription1(     input: *const c_char,     output: *mut msdfgen_Shape,     colorsSpecified: *mut bool ) -> bool ``` Function msdf_sys::msdfgen_renderSDF === ``` pub unsafe extern "C" fn msdfgen_renderSDF(     output: *const msdfgen_BitmapRef<f32>,     sdf: *const msdfgen_BitmapConstRef<f32>,     pxRange: f64,     midValue: f32 ) ``` Reconstructs the shape’s appearance into output from the distance field sdf. Function msdf_sys::msdfgen_renderSDF1 === ``` pub unsafe extern "C" fn msdfgen_renderSDF1(     output: *const msdfgen_BitmapRef<f32>,     sdf: *const msdfgen_BitmapConstRef<f32>,     pxRange: f64,     midValue: f32 ) ``` Function msdf_sys::msdfgen_renderSDF2 === ``` pub unsafe extern "C" fn msdfgen_renderSDF2(     output: *const msdfgen_BitmapRef<f32>,     sdf: *const msdfgen_BitmapConstRef<f32>,     pxRange: f64,     midValue: f32 ) ``` Function msdf_sys::msdfgen_renderSDF3 === ``` pub unsafe extern "C" fn msdfgen_renderSDF3(     output: *const msdfgen_BitmapRef<f32>,     sdf: *const msdfgen_BitmapConstRef<f32>,     pxRange: f64,     midValue: f32 ) ``` Function msdf_sys::msdfgen_renderSDF4 === ``` pub unsafe extern "C" fn msdfgen_renderSDF4(     output: *const msdfgen_BitmapRef<f32>,     sdf: *const msdfgen_BitmapConstRef<f32>,     pxRange: f64,     midValue: f32 ) ``` Function msdf_sys::msdfgen_renderSDF5 === ``` pub unsafe extern "C" fn msdfgen_renderSDF5(     output: *const msdfgen_BitmapRef<f32>,     sdf: *const msdfgen_BitmapConstRef<f32>,     pxRange: f64,     midValue: f32 ) ``` Function msdf_sys::msdfgen_saveBmp === ``` pub unsafe extern "C" fn msdfgen_saveBmp(     bitmap: *const msdfgen_BitmapConstRef<msdfgen_byte>,     filename: *const c_char ) -> bool ``` Saves the bitmap as a BMP file. 
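Continuing the sketch from above: once a distance field has been generated, msdfgen_renderSDF can reconstruct a preview image from it, and one of the saveBmp overloads can write the result to disk. As before this is a hedged illustration: it assumes msdfgen_BitmapConstRef is a plain { pixels, width, height } view with those field names, and it uses msdfgen_saveBmp3 on the assumption that it is one of the f32 overloads listed below.

```rust
use msdf_sys::*;
use std::ffi::CString;
use std::os::raw::c_int;

// Renders a preview of an existing SDF and saves it as "preview.bmp".
// `sdf_pixels` is assumed to come from a generator call such as render_sdf().
unsafe fn preview_and_save(sdf_pixels: &[f32], w: c_int, h: c_int) {
    let sdf = msdfgen_BitmapConstRef::<f32> { pixels: sdf_pixels.as_ptr(), width: w, height: h };

    let mut preview = vec![0f32; (w * h) as usize];
    let out = msdfgen_BitmapRef::<f32> { pixels: preview.as_mut_ptr(), width: w, height: h };
    // pxRange maps the encoded distance range to pixels; 0.5 is the isoline.
    msdfgen_renderSDF(&out, &sdf, 4.0, 0.5);

    // Wrap the preview in a const view and write it out (f32 overload assumed).
    let preview_ref = msdfgen_BitmapConstRef::<f32> { pixels: preview.as_ptr(), width: w, height: h };
    let path = CString::new("preview.bmp").unwrap();
    assert!(msdfgen_saveBmp3(&preview_ref, path.as_ptr()));
}
```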
Function msdf_sys::msdfgen_saveBmp1 === ``` pub unsafe extern "C" fn msdfgen_saveBmp1(     bitmap: *const msdfgen_BitmapConstRef<msdfgen_byte>,     filename: *const c_char ) -> bool ``` Function msdf_sys::msdfgen_saveBmp2 === ``` pub unsafe extern "C" fn msdfgen_saveBmp2(     bitmap: *const msdfgen_BitmapConstRef<msdfgen_byte>,     filename: *const c_char ) -> bool ``` Function msdf_sys::msdfgen_saveBmp3 === ``` pub unsafe extern "C" fn msdfgen_saveBmp3(     bitmap: *const msdfgen_BitmapConstRef<f32>,     filename: *const c_char ) -> bool ``` Function msdf_sys::msdfgen_saveBmp4 === ``` pub unsafe extern "C" fn msdfgen_saveBmp4(     bitmap: *const msdfgen_BitmapConstRef<f32>,     filename: *const c_char ) -> bool ``` Function msdf_sys::msdfgen_saveBmp5 === ``` pub unsafe extern "C" fn msdfgen_saveBmp5(     bitmap: *const msdfgen_BitmapConstRef<f32>,     filename: *const c_char ) -> bool ``` Function msdf_sys::msdfgen_saveTiff === ``` pub unsafe extern "C" fn msdfgen_saveTiff(     bitmap: *const msdfgen_BitmapConstRef<f32>,     filename: *const c_char ) -> bool ``` Saves the bitmap as an uncompressed floating-point TIFF file. Function msdf_sys::msdfgen_saveTiff1 === ``` pub unsafe extern "C" fn msdfgen_saveTiff1(     bitmap: *const msdfgen_BitmapConstRef<f32>,     filename: *const c_char ) -> bool ``` Function msdf_sys::msdfgen_saveTiff2 === ``` pub unsafe extern "C" fn msdfgen_saveTiff2(     bitmap: *const msdfgen_BitmapConstRef<f32>,     filename: *const c_char ) -> bool ``` Function msdf_sys::msdfgen_scanlineSDF === ``` pub unsafe extern "C" fn msdfgen_scanlineSDF(     line: *mut msdfgen_Scanline,     sdf: *const msdfgen_BitmapConstRef<f32>,     projection: *const msdfgen_Projection,     y: f64,     inverseYAxis: bool ) ``` Analytically constructs a scanline at y evaluating fill by linear interpolation of the SDF. Function msdf_sys::msdfgen_scanlineSDF1 === ``` pub unsafe extern "C" fn msdfgen_scanlineSDF1(     line: *mut msdfgen_Scanline,     sdf: *const msdfgen_BitmapConstRef<f32>,     projection: *const msdfgen_Projection,     y: f64,     inverseYAxis: bool ) ``` Function msdf_sys::msdfgen_scanlineSDF2 === ``` pub unsafe extern "C" fn msdfgen_scanlineSDF2(     line: *mut msdfgen_Scanline,     sdf: *const msdfgen_BitmapConstRef<f32>,     projection: *const msdfgen_Projection,     y: f64,     inverseYAxis: bool ) ``` Function msdf_sys::msdfgen_scanlineSDF3 === ``` pub unsafe extern "C" fn msdfgen_scanlineSDF3(     line: *mut msdfgen_Scanline,     sdf: *const msdfgen_BitmapConstRef<f32>,     scale: *const msdfgen_Vector2,     translate: *const msdfgen_Vector2,     inverseYAxis: bool,     y: f64 ) ``` Function msdf_sys::msdfgen_scanlineSDF4 === ``` pub unsafe extern "C" fn msdfgen_scanlineSDF4(     line: *mut msdfgen_Scanline,     sdf: *const msdfgen_BitmapConstRef<f32>,     scale: *const msdfgen_Vector2,     translate: *const msdfgen_Vector2,     inverseYAxis: bool,     y: f64 ) ``` Function msdf_sys::msdfgen_scanlineSDF5 === ``` pub unsafe extern "C" fn msdfgen_scanlineSDF5(     line: *mut msdfgen_Scanline,     sdf: *const msdfgen_BitmapConstRef<f32>,     scale: *const msdfgen_Vector2,     translate: *const msdfgen_Vector2,     inverseYAxis: bool,     y: f64 ) ``` Function msdf_sys::msdfgen_simulate8bit === ``` pub unsafe extern "C" fn msdfgen_simulate8bit(     bitmap: *const msdfgen_BitmapRef<f32> ) ``` Snaps the values of the floating-point bitmaps into one of the 256 values representable in a standard 8-bit bitmap. 
Function msdf_sys::msdfgen_simulate8bit1 === ``` pub unsafe extern "C" fn msdfgen_simulate8bit1(     bitmap: *const msdfgen_BitmapRef<f32> ) ``` Function msdf_sys::msdfgen_simulate8bit2 === ``` pub unsafe extern "C" fn msdfgen_simulate8bit2(     bitmap: *const msdfgen_BitmapRef<f32> ) ``` Function msdf_sys::msdfgen_writeShapeDescription === ``` pub unsafe extern "C" fn msdfgen_writeShapeDescription(     output: *mut FILE,     shape: *const msdfgen_Shape ) -> bool ``` Serializes a shape object into a text description. Type Definition msdf_sys::FILE === ``` pub type FILE = _IO_FILE; ``` Type Definition msdf_sys::_IO_lock_t === ``` pub type _IO_lock_t = c_void; ``` Type Definition msdf_sys::__off64_t === ``` pub type __off64_t = c_long; ``` Type Definition msdf_sys::__off_t === ``` pub type __off_t = c_long; ``` Type Definition msdf_sys::msdfgen_EdgeColor === ``` pub type msdfgen_EdgeColor = c_uint; ``` Edge color specifies which color channels an edge belongs to. Type Definition msdf_sys::msdfgen_ErrorCorrectionConfig_DistanceCheckMode === ``` pub type msdfgen_ErrorCorrectionConfig_DistanceCheckMode = c_uint; ``` Configuration of whether to use an algorithm that computes the exact shape distance at the positions of suspected artifacts. This algorithm can be much slower. Type Definition msdf_sys::msdfgen_ErrorCorrectionConfig_Mode === ``` pub type msdfgen_ErrorCorrectionConfig_Mode = c_uint; ``` Mode of operation. Type Definition msdf_sys::msdfgen_FillRule === ``` pub type msdfgen_FillRule = c_uint; ``` Fill rule dictates how intersection total is interpreted during rasterization. Type Definition msdf_sys::msdfgen_Point2 === ``` pub type msdfgen_Point2 = msdfgen_Vector2; ``` A 2-dimensional euclidean vector with double precision. Implementation based on the Vector2 template from Artery Engine. 
@author <NAME> Type Definition msdf_sys::msdfgen_byte === ``` pub type msdfgen_byte = c_uchar; ``` Type Definition msdf_sys::size_t === ``` pub type size_t = c_ulong; ``` Type Definition msdf_sys::std_allocator_const_pointer === ``` pub type std_allocator_const_pointer = u8; ``` Type Definition msdf_sys::std_allocator_const_reference === ``` pub type std_allocator_const_reference = u8; ``` Type Definition msdf_sys::std_allocator_difference_type === ``` pub type std_allocator_difference_type = u64; ``` Type Definition msdf_sys::std_allocator_is_always_equal === ``` pub type std_allocator_is_always_equal = u8; ``` Type Definition msdf_sys::std_allocator_pointer === ``` pub type std_allocator_pointer = u8; ``` Type Definition msdf_sys::std_allocator_propagate_on_container_move_assignment === ``` pub type std_allocator_propagate_on_container_move_assignment = u8; ``` Type Definition msdf_sys::std_allocator_rebind_other === ``` pub type std_allocator_rebind_other = u8; ``` Type Definition msdf_sys::std_allocator_reference === ``` pub type std_allocator_reference = u8; ``` Type Definition msdf_sys::std_allocator_size_type === ``` pub type std_allocator_size_type = u64; ``` Type Definition msdf_sys::std_allocator_value_type === ``` pub type std_allocator_value_type = u8; ``` Type Definition msdf_sys::std_vector__Alloc_traits === ``` pub type std_vector__Alloc_traits = u8; ``` Type Definition msdf_sys::std_vector__Base === ``` pub type std_vector__Base = u8; ``` Type Definition msdf_sys::std_vector__Tp_alloc_type === ``` pub type std_vector__Tp_alloc_type = u8; ``` Type Definition msdf_sys::std_vector_allocator_type === ``` pub type std_vector_allocator_type = u8; ``` Type Definition msdf_sys::std_vector_const_iterator === ``` pub type std_vector_const_iterator = u8; ``` Type Definition msdf_sys::std_vector_const_pointer === ``` pub type std_vector_const_pointer = u8; ``` Type Definition msdf_sys::std_vector_const_reference === ``` pub type std_vector_const_reference = u8; ``` Type Definition msdf_sys::std_vector_const_reverse_iterator === ``` pub type std_vector_const_reverse_iterator = u8; ``` Type Definition msdf_sys::std_vector_difference_type === ``` pub type std_vector_difference_type = u64; ``` Type Definition msdf_sys::std_vector_iterator === ``` pub type std_vector_iterator = u8; ``` Type Definition msdf_sys::std_vector_pointer === ``` pub type std_vector_pointer = u8; ``` Type Definition msdf_sys::std_vector_reference === ``` pub type std_vector_reference = u8; ``` Type Definition msdf_sys::std_vector_reverse_iterator === ``` pub type std_vector_reverse_iterator = u8; ``` Type Definition msdf_sys::std_vector_size_type === ``` pub type std_vector_size_type = u64; ``` Type Definition msdf_sys::std_vector_value_type === ``` pub type std_vector_value_type = u8; ```
statprograms
cran
R
Package ‘statprograms’ October 14, 2022 Title Graduate Statistics Program Datasets Version 0.2.0 Description A small collection of data on graduate statistics programs from the United States. URL http://brettklamer.com/work/statprograms/ License MIT + file LICENSE Depends R (>= 2.10.0) LazyData TRUE RoxygenNote 6.0.1 NeedsCompilation no Author <NAME> [aut, cre] Maintainer <NAME> <<EMAIL>> Repository CRAN Date/Publication 2018-06-17 04:10:58 UTC R topics documented: degreesawarde... 1 statprogram... 2 degreesawarded Degrees Awarded by Year Description This dataset contains the number of degrees awarded per year. It’s based on data from the National Center for Education Statistics as retrieved by <NAME>. See http://community.amstat.org/blogs/steve-pierson/2014/07/28/categorization-of-statistics-degrees for more information. Usage degreesawarded Format A data.frame with 4606 observations and 5 columns. The columns are defined as follows: school The college program_category The program type categorized as either "Statistics" or "Biostatistics" degree_category The degree categorized as either "Master" or "Doctorate" year The year the degrees were awarded count The number of degrees awarded Source "Statistics and Biostatistics Degree Data.", www.amstat.org/asa/education/Statistics-and-Biostatistics-Degree-Data.aspx Examples ## Not run: data(degreesawarded) summary(degreesawarded) # In wide format as provided by Steve Pierson library(tidyr) spread(degreesawarded, key = year, value = count) ## End(Not run) statprograms Graduate Statistics Program Data Description This dataset contains various information from the majority of graduate statistics programs in the United States. Usage statprograms Format A data.frame with 490 observations and 16 columns. The columns are defined as follows: school The college program The program type as advertised by the department program_category The program type categorized as either "Statistics" or "Biostatistics" degree The degree given by the department degree_category The degree categorized as either "Master" or "Doctorate" state The state city The city square_miles The square miles of the city (or region) from https://www.wikipedia.org/ population The population of the city (or region) from https://www.wikipedia.org/ or https://www.census.gov/programs-surveys/popest/data/data-sets.html. Most are estimates from 2010 to 2014. density The population density average_winter The average winter temperature from http://weatherdb.com average_summer The average summer temperature from http://weatherdb.com latitude The latitude of the department’s building (or as close as possible) from http://www.gps-coordinates.net longitude The longitude of the department’s building (or as close as possible) from http://www.
gps-coordinates.net link The URL of the department’s website date_collected The date the information was recorded Author(s) <NAME> Examples ## Not run: data(statprograms) summary(statprograms) #---------------------------------------------------------------------------- # Plot locations on a map #---------------------------------------------------------------------------- library(maps) library(ggplot2) library(mapproj) us_states <- map_data("state") ggplot( data = statprograms[statprograms$state != "Alaska", ], mapping = aes(x = longitude, y = latitude) ) + geom_polygon( data = us_states, aes(x = long, y = lat, group = group), fill = "white", color = "gray50", size = 0.5 ) + geom_point() + guides(fill = FALSE) + coord_map( projection = "albers", lat0 = 39, lat1 = 45 ) + theme_bw() ## End(Not run)
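Since both datasets share the school and program/degree category columns described above, they can be summarized together. A small illustrative example in the manual's own style (base R plus the documented columns; nothing beyond the two datasets is assumed):

```r
## Not run:
data(degreesawarded)
data(statprograms)

# Total degrees awarded per program and degree category, across all years
aggregate(count ~ program_category + degree_category,
          data = degreesawarded, FUN = sum)

# How many distinct schools appear in each dataset?
length(unique(statprograms$school))
length(unique(degreesawarded$school))
## End(Not run)
```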
github.com/clickvisual/clickvisual
go
Go
README --- ### ClickVisual [![GitHub stars](https://img.shields.io/github/stars/clickvisual/clickvisual)](https://github.com/clickvisual/clickvisual/stargazers) [![GitHub issues](https://img.shields.io/github/issues/clickvisual/clickvisual)](https://github.com/clickvisual/clickvisual/issues) [![GitHub license](https://img.shields.io/github/license/clickvisual/clickvisual)](https://github.com/clickvisual/clickvisual/raw/master/LICENSE) [![Release](https://img.shields.io/github/v/release/clickvisual/clickvisual.svg)](https://github.com/clickvisual/clickvisual) [![Go Report Card](https://goreportcard.com/badge/github.com/clickvisual/clickvisual)](https://goreportcard.com/report/github.com/clickvisual/clickvisual) [![go.dev reference](https://img.shields.io/badge/go.dev-reference-007d9c?logo=go&logoColor=white&style=flat-square)](https://pkg.go.dev/github.com/clickvisual/clickvisual?tab=doc) [![All Contributors](https://img.shields.io/badge/all_contributors-9-orange.svg?style=flat-square)](#readme-contributors-) [English](https://github.com/clickvisual/clickvisual/blob/master/README.md) | [中文](https://github.com/clickvisual/clickvisual/blob/master/README-CN.md) ClickVisual is a lightweight browser-based logs analytics and logs search platform for ClickHouse. ##### Documentation See <https://clickvisual.gocn.vip> ##### Log Query Demonstration ![log-search](https://cdn.gocn.vip/clickvisual/assets/img/logs.b24e990e.gif) ##### Alarm Process Demonstration ![log-search](https://cdn.gocn.vip/clickvisual/assets/img/alarm.c7d6042a.gif) ##### DAG Workflow ![log-search](https://cdn.gocn.vip/clickvisual/assets/img/dag.f8977497.png) ##### Configuration Page ![log-search](https://cdn.gocn.vip/clickvisual/assets/img/visual-configuration.62ebf9ad.png) #### Features * Support visual query dashboard, query histogram and raw logs for SQL. * Support showing percentage for specified fields. * Support vscode style configuration board, you can easily emit your fluent-bit configuration to Kubernetes ConfigMap. * Out of the box, easily deployment with `kubectl`. * Support for GitHub and GitLab Authentication. #### Architecture ![image](https://cdn.gocn.vip/clickvisual/assets/img/technical-architecture.f3cf8d04.png) #### Installation * For Docker ``` # clone clickvisual source code. git clone https://github.com/clickvisual/clickvisual.git # you may need to set docker image mirror, visit <https://github.com/yeasy/docker_practice/blob/master/install/mirror.md> for details. docker-compose up # then go to browser and visit http://localhost:19001. # login username: clickvisual # login password: clickvisual ``` * For host ``` # download release. # get latest version. latest=$(curl -sL https://api.github.com/repos/clickvisual/clickvisual/releases/latest | grep ".tag_name" | sed -E 's/.*"([^"]+)".*/\1/') # for MacOS amd64. wget "https://github.com/clickvisual/clickvisual/releases/download/${latest}/clickvisual-${latest}-darwin-amd64.tar.gz" -O clickvisual-${latest}.tar.gz # for Linux amd64. wget "https://github.com/clickvisual/clickvisual/releases/download/${latest}/clickvisual-${latest}-linux-amd64.tar.gz" -O clickvisual-${latest}.tar.gz # extract zip file to current directory. 
mkdir -p ./clickvisual-${latest} && tar -zxvf clickvisual-${latest}.tar.gz -C ./clickvisual-${latest} # open config/default.toml, then change database and redis or other section configuration # execute migration latest sql script in scripts/migration directory # start clickvisual cd ./clickvisual-${latest} && ./clickvisual -config config/default.toml # then go to browser and visit http://localhost:19001 # login username: clickvisual # login password: clickvisual ``` #### Document Contribution If you want to participate in <https://clickvisual.gocn.vip> document updating activities, please refer to this document: <https://github.com/clickvisual/clickvisual/tree/master/docs> #### Join Us Join us, please add the "cv" keyword in the verification information. ![](https://helpcenter.shimonote.com/uploads/0LNQ550801CF2.png) Wechat id is "MEXES_" #### Contributors Thanks to these wonderful people: | | | | | | | | | --- | --- | --- | --- | --- | --- | --- | | [**MEX7**](https://kl7sn.github.io) | [**m1666**](https://m1666.github.io) | [**askuy**](https://github.com/askuy) | [**sevennt**](https://github.com/sevennt) | [**LincolnZhou**](http://blog.lincolnzhou.com/) | [**Link Duan**](https://www.duanlv.ltd) | [**梁桂锋**](https://findcat.cn/) | | [**qingbozhang**](https://github.com/qingbozhang) | [**qianque7**](https://github.com/qianque7) | [**<NAME>**](https://github.com/rotk2022) | [**antony**](https://github.com/antonyaz) | [**ArthurQ**](https://github.com/ArthurQiuys) | [**<NAME>**](http://laojianzi.github.io) | [**<NAME>**](http://www.asarea.cn) | | [**Jeremy**](https://cloudsjhan.github.io/) | [**csy**](https://github.com/pigcsy) | [**zackzhangkai**](https://github.com/zackzhangkai) | [**kl**](http://www.kailing.pub/) | #### Thank You * [Jetbrains](https://www.jetbrains.com) * [腾源会/WeOpen](https://cloud.tencent.com/act/pro/weopen-home) #### Friends * [DBM - An awesome database management tool specified for ClickHouse](https://github.com/EdurtIO/dbm)
snarkos-node
rust
Rust
Struct snarkos_node::Validator === ``` pub struct Validator<N: Network, C: ConsensusStorage<N>> { /* private fields */ } ``` A validator is a full node, capable of validating blocks. Implementations --- ### impl<N: Network, C: ConsensusStorage<N>> Validator<N, C> #### pub async fn new( node_ip: SocketAddr, rest_ip: Option<SocketAddr>, bft_ip: Option<SocketAddr>, account: Account<N>, trusted_peers: &[SocketAddr], trusted_validators: &[SocketAddr], genesis: Block<N>, cdn: Option<String>, dev: Option<u16> ) -> Result<Self> Initializes a new validator node. #### pub fn ledger(&self) -> &Ledger<N, C> Returns the ledger. #### pub fn rest(&self) -> &Option<Rest<N, C, Self>> Returns the REST server. ### impl<N: Network, C: ConsensusStorage<N>> Validator<N, C> #### pub fn spawn<T: Future<Output = ()> + Send + 'static>(&self, future: T) Spawns a task with the given future; it should only be used for long-running tasks. Trait Implementations --- ### impl<N: Clone + Network, C: Clone + ConsensusStorage<N>> Clone for Validator<N, C> #### fn clone(&self) -> Validator<N, C> Returns a copy of the value. #### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. ### impl<N: Network, C: ConsensusStorage<N>> Disconnect for Validator<N, C> #### fn handle_disconnect<'life0, 'async_trait>( &'life0 self, peer_addr: SocketAddr ) -> Pin<Box<dyn Future<Output = ()> + Send + 'async_trait>>where Self: 'async_trait, 'life0: 'async_trait, Any extra operations to be performed during a disconnect. #### fn enable_disconnect<'life0, 'async_trait>( &'life0 self ) -> Pin<Box<dyn Future<Output = ()> + Send + 'async_trait, Global>>where 'life0: 'async_trait, Self: Sync + 'async_trait, Attaches the behavior specified in `Disconnect::handle_disconnect` to every occurrence of the node disconnecting from a peer. ### impl<N: Network, C: ConsensusStorage<N>> Handshake for Validator<N, C> #### fn perform_handshake<'life0, 'async_trait>( &'life0 self, connection: Connection ) -> Pin<Box<dyn Future<Output = Result<Connection>> + Send + 'async_trait>>where Self: 'async_trait, 'life0: 'async_trait, Performs the handshake protocol. #### const TIMEOUT_MS: u64 = 3_000u64 The maximum time allowed for a connection to perform a handshake before it is rejected. #### fn enable_handshake<'life0, 'async_trait>( &'life0 self ) -> Pin<Box<dyn Future<Output = ()> + Send + 'async_trait, Global>>where 'life0: 'async_trait, Self: Sync + 'async_trait, Prepares the node to perform specified network handshakes. #### fn borrow_stream<'a>(&self, conn: &'a mut Connection) -> &'a mut TcpStream Borrows the full connection stream to be used in the implementation of `Handshake::perform_handshake`. #### fn take_stream(&self, conn: &mut Connection) -> TcpStream Assumes full control of a connection’s stream in the implementation of `Handshake::perform_handshake`, by the end of which it *must* be followed by `Handshake::return_stream`. #### fn return_stream<T>(&self, conn: &mut Connection, stream: T)where T: AsyncRead + AsyncWrite + Send + Sync + 'static, This method only needs to be called if `Handshake::take_stream` had been called before; it is used to return a (potentially modified) stream back to the applicable connection. ### impl<N: Network, C: ConsensusStorage<N>> Heartbeat<N> for Validator<N, C> #### const MAXIMUM_NUMBER_OF_PEERS: usize = 200usize The maximum number of peers permitted to maintain connections with. 
#### const HEARTBEAT_IN_SECS: u64 = 15u64 The duration in seconds to sleep in between heartbeat executions. #### const MINIMUM_NUMBER_OF_PEERS: usize = 3usize The minimum number of peers required to maintain connections with. #### const MEDIAN_NUMBER_OF_PEERS: usize = _ The median number of peers to maintain connections with. #### fn heartbeat(&self) Handles the heartbeat request. #### fn safety_check_minimum_number_of_peers(&self) TODO (howardwu): Consider checking minimum number of validators, to exclude clients and provers. This function performs safety checks on the setting for the minimum number of peers. #### fn log_connected_peers(&self) This function logs the connected peers. #### fn remove_stale_connected_peers(&self) This function removes any connected peers that have not communicated within the predefined time. #### fn remove_oldest_connected_peer(&self) This function removes the oldest connected peer, to keep the connections fresh. This function only triggers if the router is above the minimum number of connected peers. #### fn handle_connected_peers(&self) TODO (howardwu): If the node is a validator, keep the validator. This function keeps the number of connected peers within the allowed range. #### fn handle_bootstrap_peers(&self) This function keeps the number of bootstrap peers within the allowed range. #### fn handle_trusted_peers(&self) This function attempts to connect to any disconnected trusted peers. #### fn handle_puzzle_request(&self) This function updates the coinbase puzzle if the network has updated. ### impl<N: Network, C: ConsensusStorage<N>> Inbound<N> for Validator<N, C> #### fn block_request(&self, peer_ip: SocketAddr, message: BlockRequest) -> bool Retrieves the blocks within the block request range, and returns the block response to the peer. #### fn block_response(&self, peer_ip: SocketAddr, blocks: Vec<Block<N>>) -> bool Handles a `BlockResponse` message. #### fn ping(&self, peer_ip: SocketAddr, message: Ping<N>) -> bool Processes the block locators and sends back a `Pong` message. #### fn pong(&self, peer_ip: SocketAddr, _message: Pong) -> bool Sleeps for a period and then sends a `Ping` message to the peer. #### fn puzzle_request(&self, peer_ip: SocketAddr) -> bool Retrieves the latest epoch challenge and latest block header, and returns the puzzle response to the peer. #### fn puzzle_response( &self, peer_ip: SocketAddr, _epoch_challenge: EpochChallenge<N>, _header: Header<N> ) -> bool Disconnects on receipt of a `PuzzleResponse` message. #### fn unconfirmed_solution<'life0, 'async_trait>( &'life0 self, peer_ip: SocketAddr, serialized: UnconfirmedSolution<N>, solution: ProverSolution<N> ) -> Pin<Box<dyn Future<Output = bool> + Send + 'async_trait>>where Self: 'async_trait, 'life0: 'async_trait, Propagates the unconfirmed solution to all connected validators. #### fn unconfirmed_transaction<'life0, 'async_trait>( &'life0 self, peer_ip: SocketAddr, serialized: UnconfirmedTransaction<N>, transaction: Transaction<N> ) -> Pin<Box<dyn Future<Output = bool> + Send + 'async_trait>>where Self: 'async_trait, 'life0: 'async_trait, Handles an `UnconfirmedTransaction` message. 
#### const MAXIMUM_PUZZLE_REQUESTS_PER_INTERVAL: usize = 5usize The maximum number of puzzle requests per interval. #### const PING_SLEEP_IN_SECS: u64 = 9u64 The duration in seconds to sleep in between ping requests with a connected peer. #### fn inbound<'life0, 'async_trait>( &'life0 self, peer_addr: SocketAddr, message: Message<N> ) -> Pin<Box<dyn Future<Output = Result<(), Error>> + Send + 'async_trait, Global>>where 'life0: 'async_trait, Self: Sync + 'async_trait, Handles the inbound message from the peer. #### fn peer_request(&self, peer_ip: SocketAddr) -> bool Handles a `PeerRequest` message. #### fn peer_response(&self, _peer_ip: SocketAddr, peers: &[SocketAddr]) -> bool Handles a `PeerResponse` message. ### impl<N: Network, C: ConsensusStorage<N>> NodeInterface<N> for Validator<N, C> #### fn shut_down<'life0, 'async_trait>( &'life0 self ) -> Pin<Box<dyn Future<Output = ()> + Send + 'async_trait>>where Self: 'async_trait, 'life0: 'async_trait, Shuts down the node. #### fn node_type(&self) -> NodeType Returns the node type. #### fn private_key(&self) -> &PrivateKey<N> Returns the account private key of the node. #### fn view_key(&self) -> &ViewKey<N> Returns the account view key of the node. #### fn address(&self) -> Address<N> Returns the account address of the node. #### fn is_dev(&self) -> bool Returns `true` if the node is in development mode. #### fn handle_signals() -> Arc<OnceCell<Self>> Handles OS signals for the node to intercept and perform a clean shutdown. Note: Only Ctrl-C is supported; it should work on both Unix-family systems and Windows. ### impl<N: Network, C: ConsensusStorage<N>> OnConnect for Validator<N, C> where Self: Outbound<N>, #### fn on_connect<'life0, 'async_trait>( &'life0 self, peer_addr: SocketAddr ) -> Pin<Box<dyn Future<Output = ()> + Send + 'async_trait>>where Self: 'async_trait, 'life0: 'async_trait, Any initial actions to be executed after the handshake is concluded; in order to be able to communicate with the peer in the usual manner (i.e. via [`Writing`]), only its `SocketAddr` (as opposed to the related [`Connection`] object) is provided as an argument. #### fn enable_on_connect<'life0, 'async_trait>( &'life0 self ) -> Pin<Box<dyn Future<Output = ()> + Send + 'async_trait, Global>>where 'life0: 'async_trait, Self: Sync + 'async_trait, Attaches the behavior specified in `OnConnect::on_connect` right after every successful handshake. ### impl<N: Network, C: ConsensusStorage<N>> Outbound<N> for Validator<N, C> #### fn router(&self) -> &Router<N> Returns a reference to the router. #### fn send_ping( &self, peer_ip: SocketAddr, block_locators: Option<BlockLocators<N>> ) Sends a “Ping” message to the given peer. #### fn send( &self, peer_ip: SocketAddr, message: Message<N> ) -> Option<Receiver<Result<(), Error>>> Sends the given message to the specified peer. #### fn propagate( &self, message: Message<N>, excluded_peers: &[SocketAddr] ) Sends the given message to every connected peer, excluding the sender and any specified peer IPs. #### fn propagate_to_validators( &self, message: Message<N>, excluded_peers: &[SocketAddr] ) Sends the given message to every connected validator, excluding the sender and any specified IPs. #### fn can_send(&self, peer_ip: SocketAddr, message: &Message<N>) -> bool Returns `true` if the message can be sent. ### impl<N: Network, C: ConsensusStorage<N>> P2P for Validator<N, C> #### fn tcp(&self) -> &Tcp Returns a reference to the TCP instance. 
### impl<N: Network, C: ConsensusStorage<N>> Reading for Validator<N, C> #### fn codec(&self, _peer_addr: SocketAddr, _side: ConnectionSide) -> Self::Codec Creates a [`Decoder`] used to interpret messages from the network. The `side` param indicates the connection side **from the node’s perspective**. #### fn process_message<'life0, 'async_trait>( &'life0 self, peer_addr: SocketAddr, message: Self::Message ) -> Pin<Box<dyn Future<Output = Result<()>> + Send + 'async_trait>>where Self: 'async_trait, 'life0: 'async_trait, Processes a message received from the network. #### type Codec = MessageCodec<N> The user-supplied `Decoder` used to interpret inbound messages. #### type Message = Message<N> The final (deserialized) type of inbound messages. #### const MESSAGE_QUEUE_DEPTH: usize = 1_024usize The depth of per-connection queues used to process inbound messages; the greater it is, the more inbound messages the node can enqueue, but setting it to a large value can make the node more susceptible to DoS attacks. #### const INITIAL_BUFFER_SIZE: usize = _ The initial size of a per-connection buffer for reading inbound messages. Can be set to the maximum expected size of the inbound message in order to only allocate it once. #### fn enable_reading<'life0, 'async_trait>( &'life0 self ) -> Pin<Box<dyn Future<Output = ()> + Send + 'async_trait, Global>>where 'life0: 'async_trait, Self: Sync + 'async_trait, Prepares the node to receive messages. ### impl<N: Network, C: ConsensusStorage<N>> Routing<N> for Validator<N, C> #### fn initialize_routing<'life0, 'async_trait>( &'life0 self ) -> Pin<Box<dyn Future<Output = ()> + Send + 'async_trait, Global>>where 'life0: 'async_trait, Self: Sync + 'async_trait, Initialize the routing. #### fn enable_listener<'life0, 'async_trait>( &'life0 self ) -> Pin<Box<dyn Future<Output = ()> + Send + 'async_trait, Global>>where 'life0: 'async_trait, Self: Sync + 'async_trait, #### fn initialize_heartbeat(&self) Initialize a new instance of the heartbeat. #### fn initialize_report(&self) Initialize a new instance of the report. ### impl<N: Network, C: ConsensusStorage<N>> Writing for Validator<N, C> #### fn codec(&self, _addr: SocketAddr, _side: ConnectionSide) -> Self::Codec Creates an [`Encoder`] used to write the outbound messages to the target stream. The `side` parameter indicates the connection side **from the node’s perspective**. #### type Codec = MessageCodec<N> The user-supplied `Encoder` used to write outbound messages to the target stream. #### type Message = Message<N> The type of the outbound messages; unless their serialization is expensive and the message is broadcasted (in which case it would get serialized multiple times), serialization should be done in the implementation of `Self::Codec`. #### const MESSAGE_QUEUE_DEPTH: usize = 1_024usize The depth of per-connection queues used to send outbound messages; the greater it is, the more outbound messages the node can enqueue. Setting it to a large value is not recommended, as doing it might obscure potential issues with your implementation (like slow serialization) or network. #### fn enable_writing<'life0, 'async_trait>( &'life0 self ) -> Pin<Box<dyn Future<Output = ()> + Send + 'async_trait, Global>>where 'life0: 'async_trait, Self: Sync + 'async_trait, Prepares the node to send messages. #### fn unicast( &self, addr: SocketAddr, message: Self::Message ) -> Result<Receiver<Result<(), Error>>, Error> Sends the provided message to the specified `SocketAddr`. 
Returns as soon as the message is queued to be sent, without waiting for the actual delivery; instead, the caller is provided with a `oneshot::Receiver` which can be used to determine when and whether the message has been delivered. Auto Trait Implementations --- ### impl<N, C> !RefUnwindSafe for Validator<N, C> ### impl<N, C> Send for Validator<N, C> ### impl<N, C> Sync for Validator<N, C> ### impl<N, C> Unpin for Validator<N, C> where C: Unpin, N: Unpin, <N as Network>::BlockHash: Unpin, <N as Environment>::Field: Unpin, <<N as Environment>::PairingCurve as PairingEngine>::G1Affine: Unpin, <N as Environment>::Projective: Unpin, <N as Network>::RatificationID: Unpin, <N as Environment>::Scalar: Unpin, <N as Network>::StateRoot: Unpin, <N as Network>::TransactionID: Unpin, <N as Network>::TransitionID: Unpin, ### impl<N, C> !UnwindSafe for Validator<N, C> Blanket Implementations --- ### impl<T> Any for T where T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. ### impl<T> Borrow<T> for T where T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. ### impl<T> BorrowMut<T> for T where T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. ### impl<T> From<T> for T #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T> FromRef<T> for T where T: Clone, #### fn from_ref(input: &T) -> T Converts to this type from a reference to the input type. ### impl<T> Instrument for T #### fn instrument(self, span: Span) -> Instrumented<Self> Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. #### fn in_current_span(self) -> Instrumented<Self> Instruments this type with the current `Span`, returning an `Instrumented` wrapper. ### impl<T, U> Into<U> for T where U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T> Pointable for T #### const ALIGN: usize = _ The alignment of pointer. #### type Init = T The type for initializers. #### unsafe fn init(init: <T as Pointable>::Init) -> usize Initializes a with the given initializer. #### unsafe fn deref<'a>(ptr: usize) -> &'a T Dereferences the given pointer. #### unsafe fn deref_mut<'a>(ptr: usize) -> &'a mut T Mutably dereferences the given pointer. #### unsafe fn drop(ptr: usize) Drops the object pointed to by the given pointer. ### impl<T> Same<T> for T #### type Output = T Should always be `Self` ### impl<T> ToOwned for T where T: Clone, #### type Owned = T The resulting type after obtaining ownership. #### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. #### fn clone_into(&self, target: &mut T) Uses borrowed data to replace owned data, usually by cloning. ### impl<T, U> TryFrom<U> for T where U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error. #### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error> Performs the conversion. ### impl<T, U> TryInto<U> for T where U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error. #### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error> Performs the conversion. ### impl<V, T> VZip<V> for T where V: MultiLane<T>, #### fn vzip(self) -> V ### impl<T> WithSubscriber for T #### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where S: Into<Dispatch>, Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. #### fn with_current_subscriber(self) -> WithDispatch<Self> Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Trait snarkos_node::NodeInterface === ``` pub trait NodeInterface<N: Network>: Routing<N> { // Required method fn shut_down<'life0, 'async_trait>( &'life0 self ) -> Pin<Box<dyn Future<Output = ()> + Send + 'async_trait>> where Self: 'async_trait, 'life0: 'async_trait; // Provided methods fn node_type(&self) -> NodeType { ... } fn private_key(&self) -> &PrivateKey<N> { ... } fn view_key(&self) -> &ViewKey<N> { ... } fn address(&self) -> Address<N> { ... 
} fn is_dev(&self) -> bool { ... } fn handle_signals() -> Arc<OnceCell<Self>> { ... } } ``` Required Methods --- #### fn shut_down<'life0, 'async_trait>( &'life0 self ) -> Pin<Box<dyn Future<Output = ()> + Send + 'async_trait>>where Self: 'async_trait, 'life0: 'async_trait, Shuts down the node. Provided Methods --- #### fn node_type(&self) -> NodeType Returns the node type. #### fn private_key(&self) -> &PrivateKey<N> Returns the account private key of the node. #### fn view_key(&self) -> &ViewKey<N> Returns the account view key of the node. #### fn address(&self) -> Address<N> Returns the account address of the node. #### fn is_dev(&self) -> bool Returns `true` if the node is in development mode. #### fn handle_signals() -> Arc<OnceCell<Self>> Handles OS signals for the node to intercept and perform a clean shutdown. Note: Only Ctrl-C is supported; it should work on both Unix-family systems and Windows. Implementors --- ### impl<N: Network, C: ConsensusStorage<N>> NodeInterface<N> for Client<N, C> ### impl<N: Network, C: ConsensusStorage<N>> NodeInterface<N> for Prover<N, C> ### impl<N: Network, C: ConsensusStorage<N>> NodeInterface<N> for Validator<N, C> Function snarkos_node::log_clean_error === ``` pub fn log_clean_error(dev: Option<u16>) ``` A helper to log instructions to recover. Function snarkos_node::notification_message === ``` pub fn notification_message() -> String ``` Returns the notification message as a string. Function snarkos_node::start_notification_message_loop === ``` pub fn start_notification_message_loop() -> JoinHandle<()> ``` Starts the notification message loop.
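The three free functions above are the simplest entry points in the crate. A hedged sketch of how they might be combined; it assumes the returned `JoinHandle<()>` is tokio's (consistent with an async node) and that a tokio runtime is available via the `#[tokio::main]` attribute:

```rust
use snarkos_node::{log_clean_error, notification_message, start_notification_message_loop};

#[tokio::main]
async fn main() {
    // Print the current notification banner once.
    println!("{}", notification_message());

    // Or keep printing it on the crate's schedule in the background.
    let handle = start_notification_message_loop();

    // On a failure path, log recovery instructions (`None` = not in dev mode).
    log_clean_error(None);

    handle.abort(); // stop the background loop before exiting
}
```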
github.com/onsi/ginkgo/reporters/stenographer/support/go-isatty
go
Go
README --- ### go-isatty isatty for golang #### Usage ``` package main import ( "fmt" "github.com/mattn/go-isatty" "os" ) func main() { if isatty.IsTerminal(os.Stdout.Fd()) { fmt.Println("Is Terminal") } else { fmt.Println("Is Not Terminal") } } ``` #### Installation ``` $ go get github.com/mattn/go-isatty ``` ### License MIT ### Author <NAME> (a.k.a mattn) Documentation --- Rendered for linux/amd64 windows/amd64 darwin/amd64 js/wasm ### Overview Package isatty implements interface to isatty ### Index * func IsTerminal(fd uintptr) bool ### Constants This section is empty. ### Variables This section is empty. ### Functions #### func [IsTerminal](https://github.com/onsi/ginkgo/blob/v1.16.5/reporters/stenographer/support/go-isatty/isatty_linux.go#L14) ``` func IsTerminal(fd uintptr) bool ``` IsTerminal returns true if the file descriptor is a terminal. ### Types This section is empty.
campsis
cran
R
Package ‘campsis’ April 24, 2023 Type Package Title Generic PK/PD Simulation Platform CAMPSIS Version 1.4.1 Description A generic, easy-to-use and intuitive pharmacokinetic/pharmacodynamic (PK/PD) simulation platform based on R packages 'rxode2', 'RxODE' and 'mrgsolve'. CAMPSIS provides an abstraction layer over the underlying processes of writing a PK/PD model, assembling a custom dataset and running a simulation. CAMPSIS has a strong dependency to the R package 'campsismod', which allows to read/write a model from/to files and adapt it further on the fly in the R environment. Package 'campsis' allows the user to assemble a dataset in an intuitive manner. Once the user’s dataset is ready, the package is in charge of preparing the simulation, calling 'rxode2', 'RxODE' or 'mrgsolve' (at the user's choice) and returning the results, for the given model, dataset and desired simulation settings. License GPL (>= 3) URL https://github.com/Calvagone/campsis, https://calvagone.github.io/ BugReports https://github.com/Calvagone/campsis/issues Depends campsismod (>= 1.0.0), R (>= 4.0.0) Imports assertthat, digest, dplyr, ggplot2, furrr, future, MASS, methods, plyr, progressr, purrr, rlang, stats, tibble, tidyr Suggests bookdown, devtools, gridExtra, knitr, mrgsolve, pkgdown, rmarkdown, roxygen2, rxode2, stringr, testthat, tictoc VignetteBuilder knitr Encoding UTF-8 Language en-US RoxygenNote 7.1.2 Collate 'global.R' 'utilities.R' 'check.R' 'generic.R' 'seed.R' 'distribution.R' 'dataset_config.R' 'time_entry.R' 'occasion.R' 'occasions.R' 'treatment_iov.R' 'treatment_iovs.R' 'dose_adaptation.R' 'dose_adaptations.R' 'treatment_entry.R' 'treatment.R' 'observations.R' 'observations_set.R' 'covariate.R' 'covariates.R' 'bootstrap.R' 'protocol.R' 'arm.R' 'arms.R' 'event.R' 'events.R' 'scenario.R' 'scenarios.R' 'simulation_engine.R' 'dataset.R' 'parameter_uncertainty.R' 'event_logic.R' 'dataset_summary.R' 'hardware_settings.R' 'simulation_progress.R' 'solver_settings.R' 'nocb_settings.R' 'declare_settings.R' 'internal_settings.R' 'simulation_settings.R' 'plan_setup.R' 'simulate_preprocess.R' 'simulate.R' 'results_processing.R' 'default_plot.R' NeedsCompilation no Author <NAME> [aut, cre] Maintainer <NAME> <<EMAIL>> Repository CRAN Date/Publication 2023-04-24 12:40:09 UTC R topics documented: applyCompartmentCharacteristic... 4 Ar... 5 arm-clas... 5 arms-clas... 6 Bolu... 6 bolus-clas... 7 Bootstra... 7 bootstrap-clas... 8 BootstrapDistributio... 8 bootstrap_distribution-clas... 9 campsis_handle... 9 ConstantDistributio... 9 constant_distribution-clas... 10 Covariat... 10 covariate-clas... 10 covariates-clas... 11 Datase... 11 dataset-clas... 11 DatasetConfi... 12 dataset_config-clas... 12 Declar... 13 declare_settings-clas... 13 DiscreteDistributio... 13 distribution-clas... 14 DoseAdaptatio... 14 dose_adaptation-clas... 15 dose_adaptations-clas... 15 dosingOnl... 15 EtaDistributio... 16 Even... 16 event-clas... 17 EventCovariat... 17 Event... 18 events-clas... 18 event_covariate-clas... 18 FixedDistributio... 19 fixed_covariate-clas... 19 fixed_distribution-clas... 19 FunctionDistributio... 20 function_distribution-clas... 20 generateII... 21 generateIIV... 21 getCovariate... 22 getEventCovariate... 22 getFixedCovariate... 23 getIOV... 24 getOccasion... 24 getSeedForDatasetExpor... 25 getSeedForIteratio... 25 getSeedForParametersSamplin... 26 getSplittingConfiguratio... 26 getTime... 27 getTimeVaryingCovariate... 27 Hardwar... 28 hardware_settings-clas... 29 Infusio... 30 infusion-clas... 
31 internal_settings-clas... 31 IO... 31 leftJoinII... 32 length,arm-metho... 32 length,dataset-metho... 33 LogNormalDistributio... 33 mrgsolve_engine-clas... 34 NOC... 34 nocb_settings-clas... 34 NormalDistributio... 35 Observation... 35 observations-clas... 36 observations_set-clas... 36 obsOnl... 36 Occasio... 37 occasion-clas... 37 occasions-clas... 37 ParameterDistributio... 38 P... 38 protocol-clas... 39 retrieveParameterValue... 39 rxode_engine-class... 39 sample... 40 scatterPlo... 41 Scenario... 41 scenario-class... 42 Scenarios... 42 scenarios-class... 42 setLabel... 43 setSubjects... 43 Settings... 44 setupPlanDefault... 44 setupPlanSequential... 45 shadedPlot... 45 simulate... 46 SimulationProgress... 50 simulation_engine-class... 50 simulation_progress-class... 51 simulation_settings-class... 51 Solver... 52 solver_settings-class... 52 spaghettiPlot... 53 TimeVaryingCovariate... 53 time_varying_covariate-class... 54 treatment-class... 54 treatment_iov-class... 54 treatment_iovs-class... 55 undefined_distribution-class... 55 UniformDistribution... 55 VPC... 56 vpcPlot... 56 applyCompartmentCharacteristics Apply compartment characteristics from model. In practice, only compartment infusion duration needs to be applied. Description Apply compartment characteristics from model. 
Usage
applyCompartmentCharacteristics(table, properties)

Arguments
table current dataset
properties compartment properties from model

Value
updated dataset

Arm
Create a treatment arm.

Description
Create a treatment arm.

Usage
Arm(id = as.integer(NA), subjects = 1, label = as.character(NA))

Arguments
id unique identifier for this arm (available through the dataset), integer. If NA (default), this identifier is auto-incremented.
subjects number of subjects in arm, integer
label arm label, single character string. If set, this label will be output in the ARM column of CAMPSIS instead of the identifier.

Value
an arm

arm-class
Arm class.

Description
Arm class.

Slots
id arm unique ID, integer
subjects number of subjects in arm, integer
label arm label, single character string
protocol protocol
covariates covariates
bootstrap covariates to be bootstrapped

arms-class
Arms class.

Description
Arms class.

Bolus
Create one or several bolus(es).

Description
Create one or several bolus(es).

Usage
Bolus(
  time,
  amount,
  compartment = NA,
  f = NULL,
  lag = NULL,
  ii = NULL,
  addl = NULL
)

Arguments
time treatment time(s), numeric value or vector. First treatment time if used together with ii and addl.
amount amount to give as bolus, single numeric value
compartment compartment index, single integer value
f fraction of dose amount, distribution
lag dose lag time, distribution
ii interdose interval, requires argument 'time' to be a single numeric value
addl number of additional doses, requires argument 'time' to be a single integer value

Value
a single bolus or a list of boluses

bolus-class
Bolus class.

Description
Bolus class.

Bootstrap
Create a bootstrap object.

Description
Create a bootstrap object.

Usage
Bootstrap(
  data,
  id = "BS_ID",
  replacement = FALSE,
  random = FALSE,
  export_id = FALSE
)

Arguments
data data frame to be bootstrapped. It must have a unique identifier column named according to the specified argument 'id' (default value is 'BS_ID'). The other columns are covariates to bootstrap. They must all be numeric. Whatever the configuration of the bootstrap, these covariates are always read row by row and belong to the same individual.
id unique identifier column name in data
replacement values can be reused or not when drawn, logical
random values are drawn randomly, logical
export_id tell CAMPSIS if the identifier 'BS_ID' must be output or not, logical

Value
a bootstrap object

bootstrap-class
Bootstrap class.

Description
Bootstrap class.

Slots
data data frame to be bootstrapped. Column 'BS_ID' is mandatory and corresponds to the original row ID from the bootstrap. It must be numeric and unique. The other columns are covariates to be bootstrapped (row by row).
replacement values can be reused or not, logical
random values are drawn randomly, logical
export_id tell CAMPSIS if 'BS_ID' must be exported into the dataset, logical

BootstrapDistribution
Create a bootstrap distribution.

Description
Create a bootstrap distribution. During function sampling, CAMPSIS will generate values depending on the given data and arguments.

Usage
BootstrapDistribution(data, replacement = FALSE, random = FALSE)

Arguments
data values to draw, numeric vector
replacement values can be reused or not, logical
random values are drawn randomly, logical

Value
a bootstrap distribution
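The constructors above compose into a simulation dataset. A minimal sketch, assuming the add() generic re-exported from package 'campsismod' (a dependency of 'campsis') and the pipe operator from 'dplyr' are used to attach arms, treatments and observations:

library(campsis)

# Arm 1: single 1000 mg bolus; Arm 2: 500 mg every 12 h (2 doses in total)
arm1 <- Arm(subjects = 20, label = "1000 mg SD") %>%
  add(Bolus(time = 0, amount = 1000, compartment = 1)) %>%
  add(Observations(times = seq(0, 24, by = 0.5)))
arm2 <- Arm(subjects = 20, label = "500 mg BID") %>%
  add(Bolus(time = 0, amount = 500, compartment = 1, ii = 12, addl = 1)) %>%
  add(Observations(times = seq(0, 24, by = 0.5)))

dataset <- Dataset() %>% add(arm1) %>% add(arm2)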
bootstrap_distribution-class
Bootstrap distribution class.

Description
Bootstrap distribution class.

Slots
data values to draw, numeric vector
replacement values can be reused or not, logical
random values are drawn randomly, logical

campsis_handler
Suggested Campsis handler for showing the progress bar.

Description
Suggested Campsis handler for showing the progress bar.

Usage
campsis_handler()

Value
a progressr handler list

ConstantDistribution
Create a constant distribution.

Description
Create a constant distribution. Its value will be constant across all generated samples.

Usage
ConstantDistribution(value)

Arguments
value covariate value, single numeric value

Value
a constant distribution (same value for all samples)

constant_distribution-class
Constant distribution class.

Description
Constant distribution class.

Slots
value covariate value, single numeric value

Covariate
Create a non time-varying (fixed) covariate.

Description
Create a non time-varying (fixed) covariate.

Usage
Covariate(name, distribution)

Arguments
name covariate name, single character value
distribution covariate distribution

Value
a fixed covariate

covariate-class
Covariate class.

Description
Covariate class.

Slots
name covariate name, single character value
distribution covariate distribution

covariates-class
Covariates class.

Description
Covariates class.

Dataset
Create a dataset.

Description
Create a dataset.

Usage
Dataset(subjects = NULL)

Arguments
subjects number of subjects in the default arm

Value
a dataset

dataset-class
Dataset class.

Description
Dataset class.

Slots
arms a list of treatment arms
config dataset configuration for export
iiv data frame containing the inter-individual variability (all ETAs) for the export

DatasetConfig
Create a dataset configuration.

Description
Create a dataset configuration. This configuration tells CAMPSIS which compartments are the default depot and observation compartments.

Usage
DatasetConfig(
  defDepotCmt = 1,
  defObsCmt = 1,
  exportTSLD = FALSE,
  exportTDOS = FALSE
)

Arguments
defDepotCmt default depot compartment, integer
defObsCmt default observation compartment, integer
exportTSLD export column TSLD (time since last dose), logical
exportTDOS export column TDOS (time of last dose), logical

Value
a dataset configuration

dataset_config-class
Dataset configuration class.

Description
Dataset configuration class.

Slots
def_depot_cmt default depot compartment, integer
def_obs_cmt default observation compartment, integer
export_tsld export column TSLD, logical
export_tdos export column TDOS, logical

Declare
Create declare settings.

Description
Create declare settings.

Usage
Declare(variables = character(0))

Arguments
variables uninitialized variables to be declared, only needed with mrgsolve

Value
Declare settings

declare_settings-class
Declare settings class.

Description
Declare settings class.

Slots
variables uninitialized variables to be declared, only needed with mrgsolve

DiscreteDistribution
Discrete distribution.

Description
Discrete distribution.

Usage
DiscreteDistribution(x, prob, replace = TRUE)

Arguments
x vector of one or more integers from which to choose
prob a vector of probability weights for obtaining the elements of the vector being sampled
replace should sampling be with replacement, default is TRUE

Value
a discrete distribution

distribution-class
Distribution class. See this class as an interface.

Description
Distribution class. See this class as an interface.
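Distributions plug into covariates. A minimal sketch using only the constructors documented above, again assuming the add() generic from 'campsismod':

# A constant dose covariate and a discrete sex covariate (50/50)
dataset <- Dataset(10) %>%
  add(Covariate("DOSE", ConstantDistribution(1000))) %>%
  add(Covariate("SEX", DiscreteDistribution(x = c(0, 1), prob = c(0.5, 0.5))))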
DoseAdaptation
Create a dose adaptation.

Description
Create a dose adaptation.

Usage
DoseAdaptation(formula, compartments = integer(0))

Arguments
formula formula to apply, single character string, e.g. "AMT*WT"
compartments compartment numbers where the formula needs to be applied, integer vector. Default is integer(0) (formula applied on all compartments)

Value
a dose adaptation

dose_adaptation-class
Dose adaptation class.

Description
Dose adaptation class.

Slots
formula formula to apply, single character string, e.g. "AMT*WT"
compartments compartment numbers where the formula needs to be applied

dose_adaptations-class
Dose adaptations class.

Description
Dose adaptations class.

dosingOnly
Filter CAMPSIS output on dosing rows.

Description
Filter CAMPSIS output on dosing rows.

Usage
dosingOnly(x)

Arguments
x data frame, CAMPSIS output

Value
a data frame with the dosing rows

EtaDistribution
Create an ETA distribution.

Description
Create an ETA distribution. The resulting distribution is a normal distribution, with mean=0 and sd=sqrt(OMEGA).

Usage
EtaDistribution(model, omega)

Arguments
model model
omega corresponding OMEGA name, character

Value
an ETA distribution

Event
Create an interruption event.

Description
Create an interruption event.

Usage
Event(name = NULL, times, fun, debug = FALSE)

Arguments
name event name, character value
times interruption times, numeric vector
fun event function to apply at each interruption
debug output the variables that were changed through this event

Value
an event definition

event-class
Event class.

Description
Event class.

Slots
name event name, character value
times interruption times, numeric vector
fun event function to apply at each interruption
debug output the variables that were changed through this event

EventCovariate
Create an event covariate.

Description
Create an event covariate. These covariates can be modified further in interruption events.

Usage
EventCovariate(name, distribution)

Arguments
name covariate name, character
distribution covariate distribution at time 0

Value
a time-varying covariate

Events
Create a list of interruption events.

Description
Create a list of interruption events.

Usage
Events()

Value
an events object

events-class
Events class.

Description
Events class.

event_covariate-class
Event covariate class.

Description
Event covariate class.

FixedDistribution
Create a fixed distribution.

Description
Create a fixed distribution. Each sample will be assigned a fixed value coming from vector 'values'.

Usage
FixedDistribution(values)

Arguments
values covariate values, numeric vector (1 value per sample)

Value
a fixed distribution (1 value per sample)

fixed_covariate-class
Fixed covariate class.

Description
Fixed covariate class.

fixed_distribution-class
Fixed distribution class.

Description
Fixed distribution class.

Slots
values covariate values, numeric vector (1 value per sample)

FunctionDistribution
Create a function distribution.

Description
Create a function distribution. During distribution sampling, the provided function will be responsible for generating values for each sample. If the first argument of this function is not the size (n), indicate which argument corresponds to the size 'n' (e.g. list(size="n")).

Usage
FunctionDistribution(fun, args)

Arguments
fun function name, character (e.g. 'rnorm')
args list of arguments (e.g. list(mean=70, sd=10))

Value
a function distribution

function_distribution-class
Function distribution class.

Description
Function distribution class.

Slots
fun function name, character (e.g. 'rnorm')
args list of arguments (e.g. list(mean=70, sd=10))
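A minimal FunctionDistribution sketch; the sample() generic documented further below is declared for function distributions, so it can be used to draw values directly (the weight distribution shown is illustrative):

# Body weight drawn via rnorm(n, mean = 70, sd = 10); 'rnorm' takes the
# sample size as its first argument, so no size mapping is needed
wtDist <- FunctionDistribution(fun = "rnorm", args = list(mean = 70, sd = 10))
sample(wtDist, 10L)

# If the sampler's size argument is not first, map it per the note above,
# e.g. args = list(size = "n", shape = 2) for a hypothetical sampler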
generateIIV
Generate IIV matrix for the given Campsis model.

Description
Generate IIV matrix for the given Campsis model.

Usage
generateIIV(model, n, offset = 0)

Arguments
model Campsis model
n number of subjects
offset if specified, resulting ID will be ID + offset

Value
IIV data frame with ID column

generateIIV_
Generate IIV matrix for the given OMEGA matrix.

Description
Generate IIV matrix for the given OMEGA matrix.

Usage
generateIIV_(omega, n)

Arguments
omega omega matrix
n number of subjects

Value
IIV data frame

getCovariates
Get all covariates (fixed / time-varying / event covariates).

Description
Get all covariates (fixed / time-varying / event covariates).

Usage
getCovariates(object)
## S4 method for signature 'covariates'
getCovariates(object)
## S4 method for signature 'arm'
getCovariates(object)
## S4 method for signature 'arms'
getCovariates(object)
## S4 method for signature 'dataset'
getCovariates(object)

Arguments
object any object

Value
all covariates from object

getEventCovariates
Get all event-related covariates.

Description
Get all event-related covariates.

Usage
getEventCovariates(object)
## S4 method for signature 'covariates'
getEventCovariates(object)
## S4 method for signature 'arm'
getEventCovariates(object)
## S4 method for signature 'arms'
getEventCovariates(object)
## S4 method for signature 'dataset'
getEventCovariates(object)

Arguments
object any object

Value
all event-related covariates from object

getFixedCovariates
Get all fixed covariates.

Description
Get all fixed covariates.

Usage
getFixedCovariates(object)
## S4 method for signature 'covariates'
getFixedCovariates(object)
## S4 method for signature 'arm'
getFixedCovariates(object)
## S4 method for signature 'arms'
getFixedCovariates(object)
## S4 method for signature 'dataset'
getFixedCovariates(object)

Arguments
object any object

Value
all fixed covariates from object

getIOVs
Get all IOV objects.

Description
Get all IOV objects.

Usage
getIOVs(object)
## S4 method for signature 'arm'
getIOVs(object)
## S4 method for signature 'arms'
getIOVs(object)
## S4 method for signature 'dataset'
getIOVs(object)

Arguments
object any object

Value
all IOV's from object

getOccasions
Get all occasions.

Description
Get all occasions.

Usage
getOccasions(object)
## S4 method for signature 'arm'
getOccasions(object)
## S4 method for signature 'arms'
getOccasions(object)
## S4 method for signature 'dataset'
getOccasions(object)

Arguments
object any object

Value
all occasions from object

getSeedForDatasetExport
Get seed for dataset export.

Description
Get seed for dataset export.

Usage
getSeedForDatasetExport(seed, progress)

Arguments
seed original seed
progress simulation progress

Value
the seed value used to export the dataset

getSeedForIteration
Get seed for iteration.

Description
Get seed for iteration.

Usage
getSeedForIteration(seed, progress)

Arguments
seed original seed
progress simulation progress

Value
the seed value to be used for the given replicate number and iteration

getSeedForParametersSampling
Get seed for parameter uncertainty sampling.

Description
Get seed for parameter uncertainty sampling.

Usage
getSeedForParametersSampling(seed)

Arguments
seed original seed

Value
the seed value used to sample parameter uncertainty

getSplittingConfiguration
Get splitting configuration for parallel export.

Description
Get splitting configuration for parallel export.

Usage
getSplittingConfiguration(dataset, hardware)

Arguments
dataset Campsis dataset to export
hardware hardware configuration

Value
splitting configuration list (if 'parallel_dataset' is enabled) or NA (if 'parallel_dataset' is disabled or if the length of the dataset is less than the dataset export slice size)

getTimes
Get all distinct times for the specified object.

Description
Get all distinct times for the specified object.

Usage
getTimes(object)
## S4 method for signature 'observations_set'
getTimes(object)
## S4 method for signature 'arm'
getTimes(object)
## S4 method for signature 'arms'
getTimes(object)
## S4 method for signature 'events'
getTimes(object)
## S4 method for signature 'dataset'
getTimes(object)

Arguments
object any object

Value
numeric vector with all unique times, sorted

getTimeVaryingCovariates
Get all time-varying covariates.

Description
Get all time-varying covariates.

Usage
getTimeVaryingCovariates(object)
## S4 method for signature 'covariates'
getTimeVaryingCovariates(object)
## S4 method for signature 'arm'
getTimeVaryingCovariates(object)
## S4 method for signature 'arms'
getTimeVaryingCovariates(object)
## S4 method for signature 'dataset'
getTimeVaryingCovariates(object)

Arguments
object any object

Value
all time-varying covariates from object

Hardware
Create hardware settings.

Description
Create hardware settings.

Usage
Hardware(
  cpu = 1,
  replicate_parallel = FALSE,
  scenario_parallel = FALSE,
  slice_parallel = FALSE,
  slice_size = NULL,
  dataset_parallel = FALSE,
  dataset_slice_size = 500,
  auto_setup_plan = NULL
)

Arguments
cpu number of CPU cores to use, default is 1
replicate_parallel enable parallel computing for replicates, default is FALSE
scenario_parallel enable parallel computing for scenarios, default is FALSE
slice_parallel enable parallel computing for slices, default is FALSE
slice_size number of subjects per simulated slice, default is NULL (auto-configured by Campsis depending on the specified engine)
dataset_parallel enable parallelisation when exporting the dataset into a table, default is FALSE
dataset_slice_size dataset slice size when exporting subjects to a table, default is 500. Only applicable if 'dataset_parallel' is enabled.
auto_setup_plan auto-setup plan with the library future; if not set (i.e. NULL), the plan will be set up automatically if the number of CPUs > 1

Value
hardware settings

hardware_settings-class
Hardware settings class.

Description
Hardware settings class.
Slots
cpu number of CPU cores to use, default is 1
replicate_parallel enable parallel computing for replicates, default is FALSE
scenario_parallel enable parallel computing for scenarios, default is FALSE
slice_parallel enable parallel computing for slices, default is FALSE
slice_size number of subjects per simulated slice, default is NULL (auto-configured by Campsis depending on the specified engine)
dataset_parallel enable parallelisation when exporting the dataset into a table, default is FALSE
dataset_slice_size dataset slice size when exporting subjects to a table, default is 500. Only applicable if 'dataset_parallel' is enabled.
auto_setup_plan auto-setup plan with the library future, default is FALSE

Infusion
Create one or several infusion(s).

Description
Create one or several infusion(s).

Usage
Infusion(
  time,
  amount,
  compartment = NA,
  f = NULL,
  lag = NULL,
  duration = NULL,
  rate = NULL,
  ii = NULL,
  addl = NULL
)

Arguments
time treatment time(s), numeric value or vector. First treatment time if used together with ii and addl.
amount total amount to infuse, numeric
compartment compartment index, integer
f fraction of infusion amount, distribution
lag infusion lag time, distribution
duration infusion duration, distribution
rate infusion rate, distribution
ii interdose interval, requires argument 'time' to be a single numeric value
addl number of additional doses, requires argument 'time' to be a single integer value

Value
a single infusion or a list of infusions

infusion-class
Infusion class.

Description
Infusion class.

Slots
duration infusion duration, distribution
rate infusion rate, distribution

internal_settings-class
Internal settings class (transient object from the simulation settings).

Description
Internal settings class (transient object from the simulation settings).

Slots
dataset_summary dataset summary
progress simulation progress
iterations list of event iterations

IOV
Define inter-occasion variability (IOV) into the dataset.

Description
Define inter-occasion variability (IOV) into the dataset. A new variable of name 'colname' will be output into the dataset and will vary at each dose number according to the given distribution.

Usage
IOV(colname, distribution, doseNumbers = NULL)

Arguments
colname name of the column that will be output in the dataset
distribution distribution
doseNumbers dose numbers; if provided, IOV is generated at these doses only. By default, IOV is generated for all doses.

Value
an IOV object

leftJoinIIV
Left-join IIV matrix.

Description
Left-join IIV matrix.

Usage
leftJoinIIV(table, iiv)

Arguments
table dataset, tabular form
iiv IIV matrix

Value
updated table with IIV matrix

length,arm-method
Return the number of subjects contained in this arm.

Description
Return the number of subjects contained in this arm.

Usage
## S4 method for signature 'arm'
length(x)

Arguments
x arm

Value
a number

length,dataset-method
Return the number of subjects contained in this dataset.

Description
Return the number of subjects contained in this dataset.

Usage
## S4 method for signature 'dataset'
length(x)

Arguments
x dataset

Value
a number

LogNormalDistribution
Create a log normal distribution.

Description
Create a log normal distribution.

Usage
LogNormalDistribution(meanlog, sdlog)

Arguments
meanlog mean value of distribution in log domain
sdlog standard deviation of distribution in log domain

Value
a log normal distribution
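Infusions and IOV terms combine in the same way as boluses. A minimal sketch, again assuming the add() generic from 'campsismod'; the model is assumed to read the exported IOV_KA column (NormalDistribution is documented just below):

# Daily 500 mg infusions for a week, with a log-normally distributed
# duration and inter-occasion variability exported as column IOV_KA
ds <- Dataset(10) %>%
  add(Infusion(time = 0, amount = 500,
               duration = LogNormalDistribution(meanlog = log(1), sdlog = 0.2),
               ii = 24, addl = 6)) %>%
  add(IOV(colname = "IOV_KA", distribution = NormalDistribution(mean = 0, sd = 0.1))) %>%
  add(Observations(times = seq(0, 168, by = 4)))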
mrgsolve_engine-class
mrgsolve engine class.

Description
mrgsolve engine class.

NOCB
Create NOCB settings.

Description
Create NOCB settings.

Usage
NOCB(enable = NULL, variables = character(0))

Arguments
enable enable/disable next-observation carried backward mode (NOCB), default value is TRUE for mrgsolve, FALSE for RxODE
variables variable names subject to NOCB behavior (see vignette for more info)

Value
NOCB settings

nocb_settings-class
NOCB settings class.

Description
NOCB settings class.

Slots
enable enable/disable next-observation carried backward mode (NOCB), default value is TRUE for mrgsolve, FALSE for RxODE
variables variable names subject to NOCB behavior (see vignette for more info)

NormalDistribution
Create a normal distribution.

Description
Create a normal distribution.

Usage
NormalDistribution(mean, sd)

Arguments
mean mean value of distribution
sd standard deviation of distribution

Value
a normal distribution

Observations
Create an observations list.

Description
Create an observations list. Please note that the provided 'times' will automatically be sorted. Duplicated times will be removed.

Usage
Observations(times, compartment = NA)

Arguments
times observation times, numeric vector
compartment compartment index, integer

Value
an observations list

observations-class
Observations class.

Description
Observations class.

Slots
times observation times, numeric vector
compartment compartment index, integer
dv observed values, numeric vector (FOR EXTERNAL USE)

observations_set-class
Observations set class.

Description
Observations set class.

obsOnly
Filter CAMPSIS output on observation rows.

Description
Filter CAMPSIS output on observation rows.

Usage
obsOnly(x)

Arguments
x data frame, CAMPSIS output

Value
a data frame with the observation rows

Occasion
Define a new occasion.

Description
Define a new occasion. Occasions are defined by mapping occasion values to dose numbers. A new column will automatically be created in the exported dataset.

Usage
Occasion(colname, values, doseNumbers)

Arguments
colname name of the column that will be output in the dataset
values the occasion numbers, any integer vector
doseNumbers the related dose numbers, any integer vector of same length as 'values'

Value
occasion object

occasion-class
Occasion class.

Description
Occasion class.

Slots
colname single character value representing the column name related to this occasion
values occasion values, integer vector, same length as dose_numbers
dose_numbers associated dose numbers, integer vector, same length as values

occasions-class
Occasions class.

Description
Occasions class.

ParameterDistribution
Create a parameter distribution.

Description
Create a parameter distribution. The resulting distribution is a log-normal distribution, with meanlog=log(THETA) and sdlog=sqrt(OMEGA).

Usage
ParameterDistribution(model, theta, omega = NULL)

Arguments
model model
theta corresponding THETA name, character
omega corresponding OMEGA name, character, NULL if not defined

Value
a parameter distribution
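EtaDistribution and ParameterDistribution derive distributions from a model's THETA/OMEGA parameters. A minimal sketch, assuming a Campsis model 'model' that defines a THETA and an OMEGA both named CL (the names are illustrative):

# Log-normal covariate built from the model parameters:
# meanlog = log(THETA_CL), sdlog = sqrt(OMEGA_CL)
clDist <- ParameterDistribution(model = model, theta = "CL", omega = "CL")
ds <- Dataset(20) %>% add(Covariate("CL_COV", clDist))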
PI
Compute the prediction interval summary over time.

Description
Compute the prediction interval summary over time.

Usage
PI(x, output, scenarios = NULL, level = 0.9, gather = TRUE)

Arguments
x data frame
output variable to show, character value
scenarios scenarios, character vector, NULL is default
level PI level, default is 0.9 (90% PI)
gather FALSE: med, low & up columns, TRUE: metric column

Value
a summary table

protocol-class
Protocol class.

Description
Protocol class.

retrieveParameterValue
Retrieve the parameter value (standardized) for the specified parameter name.

Description
Retrieve the parameter value (standardized) for the specified parameter name.

Usage
retrieveParameterValue(model, paramName, default = NULL, mandatory = FALSE)

Arguments
model model
paramName parameter name
default default value if not found
mandatory must be in model or not

Value
the standardized parameter value or the given default value if not found

rxode_engine-class
RxODE/rxode2 engine class.

Description
RxODE/rxode2 engine class.

Slots
rxode2 logical field to indicate if CAMPSIS should use rxode2 (field set to TRUE) or RxODE (field set to FALSE). Default is TRUE.

sample
Sample generic object.

Description
Sample generic object.

Usage
sample(object, n, ...)
## S4 method for signature 'constant_distribution,integer'
sample(object, n)
## S4 method for signature 'fixed_distribution,integer'
sample(object, n)
## S4 method for signature 'function_distribution,integer'
sample(object, n)
## S4 method for signature 'bootstrap_distribution,integer'
sample(object, n)
## S4 method for signature 'bolus,integer'
sample(object, n, ...)
## S4 method for signature 'infusion,integer'
sample(object, n, ...)
## S4 method for signature 'observations,integer'
sample(object, n, ...)
## S4 method for signature 'covariate,integer'
sample(object, n)
## S4 method for signature 'bootstrap,integer'
sample(object, n)
## S4 method for signature 'campsis_model,integer'
sample(object, n)

Arguments
object generic object
n number of samples required
... extra arguments

Value
sampling result

scatterPlot
Scatter plot (or X vs Y plot).

Description
Scatter plot (or X vs Y plot).

Usage
scatterPlot(x, output, scenarios = NULL, time = NULL)

Arguments
x data frame
output the 2 variables to show, character vector
scenarios scenarios
time the time at which to look at those 2 variables; if NULL, the minimum time is used (usually 0)

Value
a ggplot object

Scenario
Create a scenario.

Description
Create a scenario.

Usage
Scenario(name = NULL, model = NULL, dataset = NULL)

Arguments
name scenario name, single character string
model either a CAMPSIS model, a function or lambda-style formula
dataset either a CAMPSIS dataset, a function or lambda-style formula

Value
a new scenario

scenario-class
Scenario class.

Description
Scenario class.

Slots
name scenario name, single character string
model either a CAMPSIS model, a function or lambda-style formula
dataset either a CAMPSIS dataset, a function or lambda-style formula

Scenarios
Create a list of scenarios.

Description
Create a list of scenarios.

Usage
Scenarios()

Value
a scenarios object

scenarios-class
Scenarios class.

Description
Scenarios class.
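Scenarios let a single simulate() call cover several model or dataset variants. A minimal sketch using the dataset-as-function form together with setSubjects(), which is documented just below; the add() generic is again assumed from 'campsismod':

scenarios <- Scenarios() %>%
  add(Scenario(name = "20 subjects", dataset = function(ds) ds %>% setSubjects(20L))) %>%
  add(Scenario(name = "100 subjects", dataset = function(ds) ds %>% setSubjects(100L)))
# pass to simulate(..., scenarios = scenarios); the plot functions can then
# group the results per scenario via their 'scenarios' argument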
setLabel
Set the label.

Description
Set the label.

Usage
setLabel(object, x)
## S4 method for signature 'arm,character'
setLabel(object, x)

Arguments
object any object that has a label
x the new label

Value
the updated object

setSubjects
Set the number of subjects.

Description
Set the number of subjects.

Usage
setSubjects(object, x)
## S4 method for signature 'arm,integer'
setSubjects(object, x)
## S4 method for signature 'dataset,integer'
setSubjects(object, x)

Arguments
object any object
x the new number of subjects

Value
the updated object

Settings
Create advanced simulation settings.

Description
Create advanced simulation settings.

Usage
Settings(...)

Arguments
... any user-required settings: see ?Hardware, ?Solver, ?NOCB or ?Declare

Value
advanced simulation settings

setupPlanDefault
Setup default plan for the given simulation or hardware settings.

Description
Setup default plan for the given simulation or hardware settings. This plan will prioritise the distribution of workers in the following order: 1) Replicates (if 'replicate_parallel' is enabled), 2) Scenarios (if 'scenario_parallel' is enabled), 3) Dataset export / slices (if 'dataset_export' or 'slice_parallel' is enabled).

Usage
setupPlanDefault(object)

Arguments
object simulation or hardware settings

Value
nothing

setupPlanSequential
Setup plan as sequential (i.e. no parallelisation).

Description
Setup plan as sequential (i.e. no parallelisation).

Usage
setupPlanSequential()

Value
nothing

shadedPlot
Shaded plot (or prediction interval plot).

Description
Shaded plot (or prediction interval plot).

Usage
shadedPlot(x, output, scenarios = NULL, level = 0.9, alpha = 0.25)

Arguments
x data frame
output variable to show
scenarios scenarios
level PI level, default is 0.9 (90% PI)
alpha alpha parameter (transparency) given to geom_ribbon

Value
a ggplot object
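The Settings constructor bundles the hardware, solver, NOCB and declare objects documented above. A minimal sketch (the numeric values are illustrative):

settings <- Settings(
  Hardware(cpu = 4, replicate_parallel = TRUE),  # 4 workers, parallel replicates
  Solver(atol = 1e-06, rtol = 1e-06),            # looser tolerances than default
  NOCB(enable = TRUE)                            # force NOCB interpolation
)
# pass to simulate(..., settings = settings)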
simulate
Simulate function.

Description
Simulate function.

Usage
simulate(
  model,
  dataset,
  dest = NULL,
  events = NULL,
  scenarios = NULL,
  tablefun = NULL,
  outvars = NULL,
  outfun = NULL,
  seed = NULL,
  replicates = 1,
  dosing = FALSE,
  settings = NULL
)

S4 methods with the same argument list are defined for the following signatures:
## S4 method for signature 'campsis_model,dataset,character,events,scenarios,function,character,function,integer,integer,logical,simulation_settings'
## S4 method for signature 'campsis_model,tbl_df,character,events,scenarios,function,character,function,integer,integer,logical,simulation_settings'
## S4 method for signature 'campsis_model,data.frame,character,events,scenarios,function,character,function,integer,integer,logical,simulation_settings'
## S4 method for signature 'campsis_model,tbl_df,rxode_engine,events,scenarios,function,character,function,integer,integer,logical,simulation_settings'
## S4 method for signature 'campsis_model,tbl_df,mrgsolve_engine,events,scenarios,function,character,function,integer,integer,logical,simulation_settings'

Arguments
model generic CAMPSIS model
dataset CAMPSIS dataset or 2-dimensional table
dest destination simulation engine, default is 'RxODE'
events interruption events
scenarios list of scenarios to be simulated
tablefun function or lambda formula to apply on the exported 2-dimensional dataset
outvars variables to output in the resulting dataframe
outfun function or lambda formula to apply on the resulting dataframe after each replicate
seed seed value
replicates number of replicates, default is 1
dosing output dosing information, default is FALSE
settings advanced simulation settings

Value
dataframe with all results

SimulationProgress
Create a simulation progress object.

Description
Create a simulation progress object.

Usage
SimulationProgress(
  replicates = 1,
  scenarios = 1,
  progressor = NULL,
  hardware = NULL
)

Arguments
replicates total number of replicates to simulate
scenarios total number of scenarios to simulate
progressor progressr progressor
hardware hardware settings

Value
a progress bar

simulation_engine-class
Simulation engine class.

Description
Simulation engine class.
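An end-to-end sketch of a simulation run, assuming the model_suite library shipped with 'campsismod' and the add() generic as before; 'CONC' is assumed to be the concentration variable of that model:

library(campsis)
model <- model_suite$pk$`1cpt_fo`   # assumed example model from campsismod
ds <- Dataset(25) %>%
  add(Bolus(time = 0, amount = 1000)) %>%
  add(Observations(times = seq(0, 24, by = 0.5)))

results <- simulate(model = model, dataset = ds, dest = "rxode2", seed = 1)
spaghettiPlot(results, "CONC")      # one line per simulated subject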
simulation_progress-class
Simulation progress class.

Description
Simulation progress class.

Slots
replicates total number of replicates to simulate
scenarios total number of scenarios to simulate
iterations total number of iterations to simulate
slices total number of slices to simulate
replicate current replicate number being simulated
scenario current scenario number being simulated
iteration current iteration number being simulated
slice current slice number being simulated
progressor progressr progressor
hardware hardware settings

simulation_settings-class
Simulation settings class.

Description
Simulation settings class.

Slots
hardware hardware settings object
solver solver settings object
nocb NOCB settings object
declare declare settings (mrgsolve only)
internal internal settings

Solver
Create solver settings.

Description
Create solver settings.

Usage
Solver(
  atol = 1e-08,
  rtol = 1e-08,
  hmax = NA,
  maxsteps = 70000L,
  method = "liblsoda"
)

Arguments
atol absolute solver tolerance, default is 1e-08
rtol relative solver tolerance, default is 1e-08
hmax limit how big a solver step can be, default is NA
maxsteps max steps between 2 integration times (e.g. when observation records are far apart), default is 70000
method solver method, for RxODE/rxode2 only: 'liblsoda' (default), 'lsoda', 'dop853', 'indLin'. Mrgsolve's method is always 'lsoda'.

Value
solver settings

solver_settings-class
Solver settings class. See ?mrgsolve::update. See ?rxode2::rxSolve.

Description
Solver settings class. See ?mrgsolve::update. See ?rxode2::rxSolve.

Slots
atol absolute solver tolerance, default is 1e-08
rtol relative solver tolerance, default is 1e-08
hmax limit how big a solver step can be, default is NA
maxsteps max steps between 2 integration times (e.g. when observation records are far apart), default is 70000
method solver method, for RxODE/rxode2 only: 'liblsoda' (default), 'lsoda', 'dop853', 'indLin'. Mrgsolve's method is always 'lsoda'.

spaghettiPlot
Spaghetti plot.

Description
Spaghetti plot.

Usage
spaghettiPlot(x, output, scenarios = NULL)

Arguments
x data frame
output variable to show
scenarios scenarios

Value
plot

TimeVaryingCovariate
Create a time-varying covariate.

Description
Create a time-varying covariate. This covariate will be implemented using EVID=2 rows in the exported dataset and will not use interruption events.

Usage
TimeVaryingCovariate(name, table)

Arguments
name covariate name, character
table data.frame, must contain the mandatory columns 'TIME' and 'VALUE'. An 'ID' column may also be specified. In that case, ID's between 1 and the max number of subjects in the dataset/arm can be used. All ID's must have a VALUE defined for TIME 0.

Value
a time-varying covariate

time_varying_covariate-class
Time-varying covariate class.

Description
Time-varying covariate class.

treatment-class
Treatment class.

Description
Treatment class.

treatment_iov-class
Treatment IOV class.

Description
Treatment IOV class.

Slots
colname name of the column that will be output in the dataset
distribution distribution
dose_numbers associated dose numbers, integer vector, same length as values

treatment_iovs-class
Treatment IOV's class.

Description
Treatment IOV's class.

undefined_distribution-class
Undefined distribution class.

Description
Undefined distribution class. This type of object is automatically created in method toExplicitDistribution() when the user does not provide a concrete distribution. This is because S4 objects do not accept NULL values.
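A TimeVaryingCovariate sketch following the column contract described above (the values are illustrative; add() is again assumed from 'campsismod'):

# Weight measured at 0 h and 12 h for each of 3 subjects
tvTable <- data.frame(
  ID = rep(1:3, each = 2),
  TIME = rep(c(0, 12), times = 3),   # every ID has a VALUE at TIME 0
  VALUE = c(70, 72, 80, 81, 65, 66)
)
ds <- Dataset(3) %>% add(TimeVaryingCovariate("WT", tvTable))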
UniformDistribution
Create a uniform distribution.

Description
Create a uniform distribution.

Usage
UniformDistribution(min, max)

Arguments
min min value
max max value

Value
a uniform distribution

VPC
Compute the VPC summary.

Description
Compute the VPC summary. The input data frame must contain the following columns:
- replicate: replicate number
- low: low percentile value in replicate (and in scenario if present)
- med: median value in replicate (and in scenario if present)
- up: up percentile value in replicate (and in scenario if present)
- any scenario column

Usage
VPC(x, scenarios = NULL, level = 0.9)

Arguments
x data frame
scenarios scenarios, character vector, NULL is default
level PI level, default is 0.9 (90% PI)

Value
VPC summary with columns TIME, <scenarios> and all combinations of low, med, up (i.e. low_low, low_med, low_up, etc.)

vpcPlot
VPC plot.

Description
VPC plot.

Usage
vpcPlot(x, scenarios = NULL, level = 0.9, alpha = 0.15)

Arguments
x data frame, output of CAMPSIS with replicates
scenarios scenarios, character vector, NULL is default
level PI level, default is 0.9 (90% PI)
alpha alpha parameter (transparency) given to geom_ribbon

Value
a ggplot object
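Prediction-interval summaries and plots close the loop. A minimal sketch reusing the 'model', 'ds' and 'results' objects from the simulate() sketch above ('CONC' remains an assumed output variable; exactly how vpcPlot picks its output column is not detailed here, so that call is a sketch under the signature shown above):

shadedPlot(results, "CONC")          # 90% prediction interval band over subjects
pi <- PI(results, output = "CONC")   # underlying prediction interval summary table

# With replicates, vpcPlot() summarises the variability across replicates:
results100 <- simulate(model = model, dataset = ds, replicates = 100, seed = 1)
vpcPlot(results100)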
gopkg.in/beevik/etree.v1
go
Go
README [¶](#section-readme)
---
[![Build Status](https://travis-ci.org/beevik/etree.svg?branch=master)](https://travis-ci.org/beevik/etree)
[![GoDoc](https://godoc.org/github.com/beevik/etree?status.svg)](https://godoc.org/github.com/beevik/etree)

### etree

The etree package is a lightweight, pure go package that expresses XML in the form of an element tree. Its design was inspired by the Python [ElementTree](http://docs.python.org/2/library/xml.etree.elementtree.html) module.

Some of the package's capabilities and features:

* Represents XML documents as trees of elements for easy traversal.
* Imports, serializes, modifies or creates XML documents from scratch.
* Writes and reads XML to/from files, byte slices, strings and io interfaces.
* Performs simple or complex searches with lightweight XPath-like query APIs.
* Auto-indents XML using spaces or tabs for better readability.
* Implemented in pure go; depends only on standard go libraries.
* Built on top of the go [encoding/xml](http://golang.org/pkg/encoding/xml) package.

##### Creating an XML document

The following example creates an XML document from scratch using the etree package and outputs its indented contents to stdout.

```
doc := etree.NewDocument()
doc.CreateProcInst("xml", `version="1.0" encoding="UTF-8"`)
doc.CreateProcInst("xml-stylesheet", `type="text/xsl" href="style.xsl"`)

people := doc.CreateElement("People")
people.CreateComment("These are all known people")

jon := people.CreateElement("Person")
jon.CreateAttr("name", "Jon")

sally := people.CreateElement("Person")
sally.CreateAttr("name", "Sally")

doc.Indent(2)
doc.WriteTo(os.Stdout)
```

Output:

```
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="style.xsl"?>
<People>
  <!--These are all known people-->
  <Person name="Jon"/>
  <Person name="Sally"/>
</People>
```

##### Reading an XML file

Suppose you have a file on disk called `bookstore.xml` containing the following data:

```
<bookstore xmlns:p="urn:schemas-books-com:prices">

  <book category="COOKING">
    <title lang="en">Everyday Italian</title>
    <author><NAME></author>
    <year>2005</year>
    <p:price>30.00</p:price>
  </book>

  <book category="CHILDREN">
    <title lang="en">Harry Potter</title>
    <author><NAME></author>
    <year>2005</year>
    <p:price>29.99</p:price>
  </book>

  <book category="WEB">
    <title lang="en">XQuery Kick Start</title>
    <author><NAME></author>
    <author><NAME></author>
    <author><NAME></author>
    <author><NAME></author>
    <author><NAME></author>
    <year>2003</year>
    <p:price>49.99</p:price>
  </book>

  <book category="WEB">
    <title lang="en">Learning XML</title>
    <author><NAME></author>
    <year>2003</year>
    <p:price>39.95</p:price>
  </book>

</bookstore>
```

This code reads the file's contents into an etree document.

```
doc := etree.NewDocument()
if err := doc.ReadFromFile("bookstore.xml"); err != nil {
    panic(err)
}
```

You can also read XML from a string, a byte slice, or an `io.Reader`.

##### Processing elements and attributes

This example illustrates several ways to access elements and attributes using etree selection queries.
``` root := doc.SelectElement("bookstore") fmt.Println("ROOT element:", root.Tag) for _, book := range root.SelectElements("book") { fmt.Println("CHILD element:", book.Tag) if title := book.SelectElement("title"); title != nil { lang := title.SelectAttrValue("lang", "unknown") fmt.Printf(" TITLE: %s (%s)\n", title.Text(), lang) } for _, attr := range book.Attr { fmt.Printf(" ATTR: %s=%s\n", attr.Key, attr.Value) } } ``` Output: ``` ROOT element: bookstore CHILD element: book TITLE: Everyday Italian (en) ATTR: category=COOKING CHILD element: book TITLE: <NAME> (en) ATTR: category=CHILDREN CHILD element: book TITLE: XQuery Kick Start (en) ATTR: category=WEB CHILD element: book TITLE: Learning XML (en) ATTR: category=WEB ``` ##### Path queries This example uses etree's path functions to select all book titles that fall into the category of 'WEB'. The double-slash prefix in the path causes the search for book elements to occur recursively; book elements may appear at any level of the XML hierarchy. ``` for _, t := range doc.FindElements("//book[@category='WEB']/title") { fmt.Println("Title:", t.Text()) } ``` Output: ``` Title: XQuery Kick Start Title: Learning XML ``` This example finds the first book element under the root bookstore element and outputs the tag and text of each of its child elements. ``` for _, e := range doc.FindElements("./bookstore/book[1]/*") { fmt.Printf("%s: %s\n", e.Tag, e.Text()) } ``` Output: ``` title: Everyday Italian author: <NAME> year: 2005 price: 30.00 ``` This example finds all books with a price of 49.99 and outputs their titles. ``` path := etree.MustCompilePath("./bookstore/book[p:price='49.99']/title") for _, e := range doc.FindElementsPath(path) { fmt.Println(e.Text()) } ``` Output: ``` XQuery Kick Start ``` Note that this example uses the FindElementsPath function, which takes as an argument a pre-compiled path object. Use precompiled paths when you plan to search with the same path more than once. ##### Other features These are just a few examples of the things the etree package can do. See the [documentation](http://godoc.org/github.com/beevik/etree) for a complete description of its capabilities. ##### Contributing This project accepts contributions. Just fork the repo and submit a pull request! Documentation [¶](#section-documentation) --- ### Overview [¶](#pkg-overview) Package etree provides XML services through an Element Tree abstraction. 
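The README's features list also mentions modifying existing documents. A minimal sketch of that workflow under the same `bookstore.xml` assumptions as above, using only functions documented below (the inserted book data is illustrative):

```go
package main

import (
	"os"

	"github.com/beevik/etree"
)

func main() {
	doc := etree.NewDocument()
	if err := doc.ReadFromFile("bookstore.xml"); err != nil {
		panic(err)
	}

	// Append a new book element to the existing tree.
	store := doc.SelectElement("bookstore")
	book := store.CreateElement("book")
	book.CreateAttr("category", "WEB")
	title := book.CreateElement("title")
	title.CreateAttr("lang", "en")
	title.SetText("Go Web Programming") // illustrative title

	doc.Indent(2)
	doc.WriteTo(os.Stdout)
}
```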
### Index [¶](#pkg-index)

* [Constants](#pkg-constants)
* [Variables](#pkg-variables)
* [type Attr](#Attr)
  + [func (a *Attr) Element() *Element](#Attr.Element)
  + [func (a *Attr) FullKey() string](#Attr.FullKey)
  + [func (a *Attr) NamespaceURI() string](#Attr.NamespaceURI)
* [type CharData](#CharData)
  + [func NewCData(data string) *CharData](#NewCData)
  + [func NewCharData(data string) *CharData](#NewCharData) deprecated
  + [func NewText(text string) *CharData](#NewText)
  + [func (c *CharData) Index() int](#CharData.Index)
  + [func (c *CharData) IsCData() bool](#CharData.IsCData)
  + [func (c *CharData) IsWhitespace() bool](#CharData.IsWhitespace)
  + [func (c *CharData) Parent() *Element](#CharData.Parent)
* [type Comment](#Comment)
  + [func NewComment(comment string) *Comment](#NewComment)
  + [func (c *Comment) Index() int](#Comment.Index)
  + [func (c *Comment) Parent() *Element](#Comment.Parent)
* [type Directive](#Directive)
  + [func NewDirective(data string) *Directive](#NewDirective)
  + [func (d *Directive) Index() int](#Directive.Index)
  + [func (d *Directive) Parent() *Element](#Directive.Parent)
* [type Document](#Document)
  + [func NewDocument() *Document](#NewDocument)
  + [func (d *Document) Copy() *Document](#Document.Copy)
  + [func (d *Document) Indent(spaces int)](#Document.Indent)
  + [func (d *Document) IndentTabs()](#Document.IndentTabs)
  + [func (d *Document) ReadFrom(r io.Reader) (n int64, err error)](#Document.ReadFrom)
  + [func (d *Document) ReadFromBytes(b []byte) error](#Document.ReadFromBytes)
  + [func (d *Document) ReadFromFile(filename string) error](#Document.ReadFromFile)
  + [func (d *Document) ReadFromString(s string) error](#Document.ReadFromString)
  + [func (d *Document) Root() *Element](#Document.Root)
  + [func (d *Document) SetRoot(e *Element)](#Document.SetRoot)
  + [func (d *Document) WriteTo(w io.Writer) (n int64, err error)](#Document.WriteTo)
  + [func (d *Document) WriteToBytes() (b []byte, err error)](#Document.WriteToBytes)
  + [func (d *Document) WriteToFile(filename string) error](#Document.WriteToFile)
  + [func (d *Document) WriteToString() (s string, err error)](#Document.WriteToString)
* [type Element](#Element)
  + [func NewElement(tag string) *Element](#NewElement)
  + [func (e *Element) AddChild(t Token)](#Element.AddChild)
  + [func (e *Element) ChildElements() []*Element](#Element.ChildElements)
  + [func (e *Element) Copy() *Element](#Element.Copy)
  + [func (e *Element) CreateAttr(key, value string) *Attr](#Element.CreateAttr)
  + [func (e *Element) CreateCData(data string) *CharData](#Element.CreateCData)
  + [func (e *Element) CreateCharData(data string) *CharData](#Element.CreateCharData) deprecated
  + [func (e *Element) CreateComment(comment string) *Comment](#Element.CreateComment)
  + [func (e *Element) CreateDirective(data string) *Directive](#Element.CreateDirective)
  + [func (e *Element) CreateElement(tag string) *Element](#Element.CreateElement)
  + [func (e *Element) CreateProcInst(target, inst string) *ProcInst](#Element.CreateProcInst)
  + [func (e *Element) CreateText(text string) *CharData](#Element.CreateText)
  + [func (e *Element) FindElement(path string) *Element](#Element.FindElement)
  + [func (e *Element) FindElementPath(path Path) *Element](#Element.FindElementPath)
  + [func (e *Element) FindElements(path string) []*Element](#Element.FindElements)
  + [func (e *Element) FindElementsPath(path Path) []*Element](#Element.FindElementsPath)
  + [func (e *Element) FullTag() string](#Element.FullTag)
  + [func (e *Element) GetPath() string](#Element.GetPath)
  + [func (e *Element) GetRelativePath(source *Element) string](#Element.GetRelativePath)
  + [func (e *Element) Index() int](#Element.Index)
  + [func (e *Element) InsertChild(ex Token, t Token)](#Element.InsertChild) deprecated
  + [func (e *Element) InsertChildAt(index int, t Token)](#Element.InsertChildAt)
  + [func (e *Element) NamespaceURI() string](#Element.NamespaceURI)
  + [func (e *Element) Parent() *Element](#Element.Parent)
  + [func (e *Element) RemoveAttr(key string) *Attr](#Element.RemoveAttr)
  + [func (e *Element) RemoveChild(t Token) Token](#Element.RemoveChild)
  + [func (e *Element) RemoveChildAt(index int) Token](#Element.RemoveChildAt)
  + [func (e *Element) SelectAttr(key string) *Attr](#Element.SelectAttr)
  + [func (e *Element) SelectAttrValue(key, dflt string) string](#Element.SelectAttrValue)
  + [func (e *Element) SelectElement(tag string) *Element](#Element.SelectElement)
  + [func (e *Element) SelectElements(tag string) []*Element](#Element.SelectElements)
  + [func (e *Element) SetCData(text string)](#Element.SetCData)
  + [func (e *Element) SetTail(text string)](#Element.SetTail)
  + [func (e *Element) SetText(text string)](#Element.SetText)
  + [func (e *Element) SortAttrs()](#Element.SortAttrs)
  + [func (e *Element) Tail() string](#Element.Tail)
  + [func (e *Element) Text() string](#Element.Text)
* [type ErrPath](#ErrPath)
  + [func (err ErrPath) Error() string](#ErrPath.Error)
* [type Path](#Path)
  + [func CompilePath(path string) (Path, error)](#CompilePath)
  + [func MustCompilePath(path string) Path](#MustCompilePath)
* [type ProcInst](#ProcInst)
  + [func NewProcInst(target, inst string) *ProcInst](#NewProcInst)
  + [func (p *ProcInst) Index() int](#ProcInst.Index)
  + [func (p *ProcInst) Parent() *Element](#ProcInst.Parent)
* [type ReadSettings](#ReadSettings)
* [type Token](#Token)
* [type WriteSettings](#WriteSettings)

#### Examples [¶](#pkg-examples)

* [Document (Creating)](#example-Document-Creating)
* [Document (Reading)](#example-Document-Reading)
* [Path](#example-Path)

### Constants [¶](#pkg-constants)

```
const (
	// NoIndent is used with Indent to disable all indenting.
	NoIndent = -1
)
```

### Variables [¶](#pkg-variables)

```
var ErrXML = [errors](/errors).[New](/errors#New)("etree: invalid XML format")
```

ErrXML is returned when XML parsing fails due to incorrect formatting.

### Functions [¶](#pkg-functions)

This section is empty.

### Types [¶](#pkg-types)

#### type [Attr](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L116) [¶](#Attr)

```
type Attr struct {
	Space, Key [string](/builtin#string) // The attribute's namespace prefix and key
	Value      [string](/builtin#string) // The attribute value string
	// contains filtered or unexported fields
}
```

An Attr represents a key-value attribute of an XML element.

#### func (*Attr) [Element](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L1114) [¶](#Attr.Element) added in v1.1.0

```
func (a *[Attr](#Attr)) Element() *[Element](#Element)
```

Element returns the element containing the attribute.

#### func (*Attr) [FullKey](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L1106) [¶](#Attr.FullKey) added in v1.1.0

```
func (a *[Attr](#Attr)) FullKey() [string](/builtin#string)
```

FullKey returns the attribute a's complete key, including namespace prefix if present.

#### func (*Attr) [NamespaceURI](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L1121) [¶](#Attr.NamespaceURI) added in v1.1.0

```
func (a *[Attr](#Attr)) NamespaceURI() [string](/builtin#string)
```

NamespaceURI returns the XML namespace URI associated with the attribute. If the element is part of the XML default namespace, NamespaceURI returns the empty string.

#### type [CharData](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L135) [¶](#CharData)

```
type CharData struct {
	Data [string](/builtin#string)
	// contains filtered or unexported fields
}
```

CharData can be used to represent character data or a CDATA section within an XML document.

#### func [NewCData](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L1145) [¶](#NewCData) added in v1.1.0

```
func NewCData(data [string](/builtin#string)) *[CharData](#CharData)
```

NewCData creates a parentless XML character CDATA section.

#### func [NewCharData](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L1153) deprecated

```
func NewCharData(data [string](/builtin#string)) *[CharData](#CharData)
```

NewCharData creates a parentless CharData token containing character data.

Deprecated: NewCharData is deprecated. Instead, use NewText, which does the same thing.

#### func [NewText](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L1140) [¶](#NewText) added in v1.1.0

```
func NewText(text [string](/builtin#string)) *[CharData](#CharData)
```

NewText creates a parentless CharData token containing character data.

#### func (*CharData) [Index](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L1224) [¶](#CharData.Index) added in v1.1.0

```
func (c *[CharData](#CharData)) Index() [int](/builtin#int)
```

Index returns the index of this CharData token within its parent element's list of child tokens. If this CharData token has no parent element, the index is -1.

#### func (*CharData) [IsCData](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L1205) [¶](#CharData.IsCData) added in v1.1.0

```
func (c *[CharData](#CharData)) IsCData() [bool](/builtin#bool)
```

IsCData returns true if the character data token is to be encoded as a CDATA section.

#### func (*CharData) [IsWhitespace](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L1211) [¶](#CharData.IsWhitespace) added in v1.1.0

```
func (c *[CharData](#CharData)) IsWhitespace() [bool](/builtin#bool)
```

IsWhitespace returns true if the character data token was created by one of the document Indent methods to contain only whitespace.

#### func (*CharData) [Parent](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L1217) [¶](#CharData.Parent)

```
func (c *[CharData](#CharData)) Parent() *[Element](#Element)
```

Parent returns the character data token's parent element, or nil if it has no parent.

#### type [Comment](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L143) [¶](#Comment)

```
type Comment struct {
	Data [string](/builtin#string)
	// contains filtered or unexported fields
}
```

A Comment represents an XML comment.

#### func [NewComment](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L1257) [¶](#NewComment)

```
func NewComment(comment [string](/builtin#string)) *[Comment](#Comment)
```

NewComment creates a parentless XML comment.

#### func (*Comment) [Index](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L1297) [¶](#Comment.Index) added in v1.1.0

```
func (c *[Comment](#Comment)) Index() [int](/builtin#int)
```

Index returns the index of this Comment token within its parent element's list of child tokens. If this Comment token has no parent element, the index is -1.

#### func (*Comment) [Parent](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L1290) [¶](#Comment.Parent)

```
func (c *[Comment](#Comment)) Parent() *[Element](#Element)
```

Parent returns the comment token's parent element, or nil if it has no parent.

#### type [Directive](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L150) [¶](#Directive)

```
type Directive struct {
	Data [string](/builtin#string)
	// contains filtered or unexported fields
}
```

A Directive represents an XML directive.

#### func [NewDirective](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L1320) [¶](#NewDirective)

```
func NewDirective(data [string](/builtin#string)) *[Directive](#Directive)
```

NewDirective creates a parentless XML directive.

#### func (*Directive) [Index](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L1362) [¶](#Directive.Index) added in v1.1.0

```
func (d *[Directive](#Directive)) Index() [int](/builtin#int)
```

Index returns the index of this Directive token within its parent element's list of child tokens. If this Directive token has no parent element, the index is -1.

#### func (*Directive) [Parent](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L1355) [¶](#Directive.Parent)

```
func (d *[Directive](#Directive)) Parent() *[Element](#Element)
```

Parent returns the directive token's parent element, or nil if it has no parent.

#### type [Document](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L100) [¶](#Document)

```
type Document struct {
	[Element](#Element)
	ReadSettings  [ReadSettings](#ReadSettings)
	WriteSettings [WriteSettings](#WriteSettings)
}
```

A Document is a container holding a complete XML hierarchy. Its embedded element contains zero or more children, one of which is usually the root element. The embedded element may include other children such as processing instructions or BOM CharData tokens.

Example (Creating) [¶](#example-Document-Creating)

Create an etree Document, add XML entities to it, and serialize it to stdout.

```
doc := NewDocument()
doc.CreateProcInst("xml", `version="1.0" encoding="UTF-8"`)
doc.CreateProcInst("xml-stylesheet", `type="text/xsl" href="style.xsl"`)

people := doc.CreateElement("People")
people.CreateComment("These are all known people")

jon := people.CreateElement("Person")
jon.CreateAttr("name", "<NAME>")

sally := people.CreateElement("Person")
sally.CreateAttr("name", "Sally")

doc.Indent(2)
doc.WriteTo(os.Stdout)
```

```
Output:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="style.xsl"?>
<People>
  <!--These are all known people-->
  <Person name="<NAME>&apos;Reilly"/>
  <Person name="Sally"/>
</People>
```

Example (Reading) [¶](#example-Document-Reading)

```
doc := NewDocument()
if err := doc.ReadFromFile("document.xml"); err != nil {
    panic(err)
}
```

```
Output:
```

#### func [NewDocument](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L165) [¶](#NewDocument)

```
func NewDocument() *[Document](#Document)
```

NewDocument creates an XML document without a root element.

#### func (*Document) [Copy](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L174) [¶](#Document.Copy)

```
func (d *[Document](#Document)) Copy() *[Document](#Document)
```

Copy returns a recursive, deep copy of the document.
#### func (*Document) [Indent](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L295) [¶](#Document.Indent) ``` func (d *[Document](#Document)) Indent(spaces [int](/builtin#int)) ``` Indent modifies the document's element tree by inserting character data tokens containing newlines and indentation. The amount of indentation per depth level is given as spaces. Pass etree.NoIndent for spaces if you want no indentation at all. #### func (*Document) [IndentTabs](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L311) [¶](#Document.IndentTabs) ``` func (d *[Document](#Document)) IndentTabs() ``` IndentTabs modifies the document's element tree by inserting CharData tokens containing newlines and tabs for indentation. One tab is used per indentation level. #### func (*Document) [ReadFrom](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L219) [¶](#Document.ReadFrom) ``` func (d *[Document](#Document)) ReadFrom(r [io](/io).[Reader](/io#Reader)) (n [int64](/builtin#int64), err [error](/builtin#error)) ``` ReadFrom reads XML from the reader r into the document d. It returns the number of bytes read and any error encountered. #### func (*Document) [ReadFromBytes](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L235) [¶](#Document.ReadFromBytes) ``` func (d *[Document](#Document)) ReadFromBytes(b [][byte](/builtin#byte)) [error](/builtin#error) ``` ReadFromBytes reads XML from the byte slice b into the document d. #### func (*Document) [ReadFromFile](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L224) [¶](#Document.ReadFromFile) ``` func (d *[Document](#Document)) ReadFromFile(filename [string](/builtin#string)) [error](/builtin#error) ``` ReadFromFile reads XML from the file named filename into the document d. #### func (*Document) [ReadFromString](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L241) [¶](#Document.ReadFromString) ``` func (d *[Document](#Document)) ReadFromString(s [string](/builtin#string)) [error](/builtin#error) ``` ReadFromString reads XML from the string s into the document d. #### func (*Document) [Root](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L180) [¶](#Document.Root) ``` func (d *[Document](#Document)) Root() *[Element](#Element) ``` Root returns the root element of the document, or nil if there is no root element. #### func (*Document) [SetRoot](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L194) [¶](#Document.SetRoot) ``` func (d *[Document](#Document)) SetRoot(e *[Element](#Element)) ``` SetRoot replaces the document's root element with e. If the document already has a root when this function is called, then the document's original root is unbound first. If the element e is bound to another document (or to another element within a document), then it is unbound first. #### func (*Document) [WriteTo](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L248) [¶](#Document.WriteTo) ``` func (d *[Document](#Document)) WriteTo(w [io](/io).[Writer](/io#Writer)) (n [int64](/builtin#int64), err [error](/builtin#error)) ``` WriteTo serializes an XML document into the writer w. It returns the number of bytes written and any error encountered. #### func (*Document) [WriteToBytes](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L272) [¶](#Document.WriteToBytes) ``` func (d *[Document](#Document)) WriteToBytes() (b [][byte](/builtin#byte), err [error](/builtin#error)) ``` WriteToBytes serializes the XML document into a slice of bytes.
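The ReadFrom* and WriteTo* families mirror each other, so a parse/serialize round trip is short. A minimal sketch (the XML content is made up for illustration):

```go
package main

import (
	"fmt"

	"github.com/beevik/etree"
)

func main() {
	doc := etree.NewDocument()
	// ReadFromString parses the XML into the document.
	if err := doc.ReadFromString(`<greeting lang="en">hello</greeting>`); err != nil {
		panic(err)
	}
	fmt.Println(doc.Root().Tag) // "greeting"

	// WriteToString serializes the document back to XML.
	s, err := doc.WriteToString()
	if err != nil {
		panic(err)
	}
	fmt.Print(s)
}
```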
#### func (*Document) [WriteToFile](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L260) [¶](#Document.WriteToFile) ``` func (d *[Document](#Document)) WriteToFile(filename [string](/builtin#string)) [error](/builtin#error) ``` WriteToFile serializes an XML document into the file named filename. #### func (*Document) [WriteToString](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L281) [¶](#Document.WriteToString) ``` func (d *[Document](#Document)) WriteToString() (s [string](/builtin#string), err [error](/builtin#error)) ``` WriteToString serializes the XML document into a string. #### type [Element](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L107) [¶](#Element) ``` type Element struct { Space, Tag [string](/builtin#string) // namespace prefix and tag Attr [][Attr](#Attr) // key-value attribute pairs Child [][Token](#Token) // child tokens (elements, comments, etc.) // contains filtered or unexported fields } ``` An Element represents an XML element, its attributes, and its child tokens. #### func [NewElement](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L324) [¶](#NewElement) ``` func NewElement(tag [string](/builtin#string)) *[Element](#Element) ``` NewElement creates an unparented element with the specified tag. The tag may be prefixed by a namespace prefix and a colon. #### func (*Element) [AddChild](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L562) [¶](#Element.AddChild) ``` func (e *[Element](#Element)) AddChild(t [Token](#Token)) ``` AddChild adds the token t as the last child of element e. If token t was already the child of another element, it is first removed from its current parent element. #### func (*Element) [ChildElements](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L731) [¶](#Element.ChildElements) ``` func (e *[Element](#Element)) ChildElements() []*[Element](#Element) ``` ChildElements returns all elements that are children of element e. #### func (*Element) [Copy](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L349) [¶](#Element.Copy) ``` func (e *[Element](#Element)) Copy() *[Element](#Element) ``` Copy creates a recursive, deep copy of the element and all its attributes and children. The returned element has no parent but can be parented to another element using AddChild, or to a document using SetRoot. #### func (*Element) [CreateAttr](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L1038) [¶](#Element.CreateAttr) ``` func (e *[Element](#Element)) CreateAttr(key, value [string](/builtin#string)) *[Attr](#Attr) ``` CreateAttr creates an attribute and adds it to element e. The key may be prefixed by a namespace prefix and a colon. If an attribute with the key already exists, its value is replaced. #### func (*Element) [CreateCData](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L1180) [¶](#Element.CreateCData) added in v1.1.0 ``` func (e *[Element](#Element)) CreateCData(data [string](/builtin#string)) *[CharData](#CharData) ``` CreateCData creates a CharData token containing a CDATA section and adds it as a child of element e. #### func (*Element) [CreateCharData](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L1189) deprecated ``` func (e *[Element](#Element)) CreateCharData(data [string](/builtin#string)) *[CharData](#CharData) ``` CreateCharData creates a CharData token containing character data and adds it as a child of element e. Deprecated: CreateCharData is deprecated. Instead, use CreateText, which does the same thing.
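The Create* helpers both construct a token and attach it to the element, which keeps tree-building code compact. A sketch contrasting CreateText with CreateCData (element and attribute names are invented):

```go
package main

import (
	"os"

	"github.com/beevik/etree"
)

func main() {
	doc := etree.NewDocument()
	item := doc.CreateElement("item")
	item.CreateAttr("id", "42")

	// CreateText adds escaped character data ("a < b" becomes "a &lt; b")...
	item.CreateText("a < b")
	// ...while CreateCData wraps the same data in a CDATA section verbatim.
	item.CreateCData("a < b")

	doc.WriteTo(os.Stdout)
}
```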
#### func (*Element) [CreateComment](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L1276) [¶](#Element.CreateComment) ``` func (e *[Element](#Element)) CreateComment(comment [string](/builtin#string)) *[Comment](#Comment) ``` CreateComment creates an XML comment and adds it as a child of element e. #### func (*Element) [CreateDirective](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L1340) [¶](#Element.CreateDirective) ``` func (e *[Element](#Element)) CreateDirective(data [string](/builtin#string)) *[Directive](#Directive) ``` CreateDirective creates an XML directive and adds it as the last child of element e. #### func (*Element) [CreateElement](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L554) [¶](#Element.CreateElement) ``` func (e *[Element](#Element)) CreateElement(tag [string](/builtin#string)) *[Element](#Element) ``` CreateElement creates an element with the specified tag and adds it as the last child element of the element e. The tag may be prefixed by a namespace prefix and a colon. #### func (*Element) [CreateProcInst](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L1406) [¶](#Element.CreateProcInst) ``` func (e *[Element](#Element)) CreateProcInst(target, inst [string](/builtin#string)) *[ProcInst](#ProcInst) ``` CreateProcInst creates a processing instruction and adds it as a child of element e. #### func (*Element) [CreateText](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L1174) [¶](#Element.CreateText) added in v1.1.0 ``` func (e *[Element](#Element)) CreateText(text [string](/builtin#string)) *[CharData](#CharData) ``` CreateText creates a CharData token containing character data and adds it as a child of element e. #### func (*Element) [FindElement](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L770) [¶](#Element.FindElement) ``` func (e *[Element](#Element)) FindElement(path [string](/builtin#string)) *[Element](#Element) ``` FindElement returns the first element matched by the XPath-like path string. Returns nil if no element is found using the path. Panics if an invalid path string is supplied. #### func (*Element) [FindElementPath](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L776) [¶](#Element.FindElementPath) ``` func (e *[Element](#Element)) FindElementPath(path [Path](#Path)) *[Element](#Element) ``` FindElementPath returns the first element matched by the given Path object. Returns nil if no element is found using the path. #### func (*Element) [FindElements](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L789) [¶](#Element.FindElements) ``` func (e *[Element](#Element)) FindElements(path [string](/builtin#string)) []*[Element](#Element) ``` FindElements returns a slice of elements matched by the XPath-like path string. Panics if an invalid path string is supplied. #### func (*Element) [FindElementsPath](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L794) [¶](#Element.FindElementsPath) ``` func (e *[Element](#Element)) FindElementsPath(path [Path](#Path)) []*[Element](#Element) ``` FindElementsPath returns a slice of elements matched by the Path object. #### func (*Element) [FullTag](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L355) [¶](#Element.FullTag) added in v1.1.0 ``` func (e *[Element](#Element)) FullTag() [string](/builtin#string) ``` FullTag returns the element e's complete tag, including namespace prefix if present.
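FindElement (documented above) stops at the first match, while FindElements collects every match. A small sketch (the XML is hypothetical):

```go
package main

import (
	"fmt"

	"github.com/beevik/etree"
)

func main() {
	doc := etree.NewDocument()
	if err := doc.ReadFromString(`<library><book id="1"/><book id="2"/></library>`); err != nil {
		panic(err)
	}

	// First matching element only.
	if b := doc.FindElement("//book"); b != nil {
		fmt.Println(b.SelectAttrValue("id", "?")) // "1"
	}

	// All matching elements.
	for _, b := range doc.FindElements("//book") {
		fmt.Println(b.SelectAttrValue("id", "?")) // "1", then "2"
	}
}
```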
#### func (*Element) [GetPath](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L800) [¶](#Element.GetPath) ``` func (e *[Element](#Element)) GetPath() [string](/builtin#string) ``` GetPath returns the absolute path of the element. #### func (*Element) [GetRelativePath](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L819) [¶](#Element.GetRelativePath) ``` func (e *[Element](#Element)) GetRelativePath(source *[Element](#Element)) [string](/builtin#string) ``` GetRelativePath returns the path of the element relative to the source element. If the two elements are not part of the same element tree, then GetRelativePath returns the empty string. #### func (*Element) [Index](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L988) [¶](#Element.Index) added in v1.1.0 ``` func (e *[Element](#Element)) Index() [int](/builtin#int) ``` Index returns the index of this element within its parent element's list of child tokens. If this element has no parent element, the index is -1. #### func (*Element) [InsertChild](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L577) deprecated ``` func (e *[Element](#Element)) InsertChild(ex [Token](#Token), t [Token](#Token)) ``` InsertChild inserts the token t before e's existing child token ex. If ex is nil or ex is not a child of e, then t is added to the end of e's child token list. If token t was already the child of another element, it is first removed from its current parent element. Deprecated: InsertChild is deprecated. Use InsertChildAt instead. #### func (*Element) [InsertChildAt](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L603) [¶](#Element.InsertChildAt) added in v1.1.0 ``` func (e *[Element](#Element)) InsertChildAt(index [int](/builtin#int), t [Token](#Token)) ``` InsertChildAt inserts the token t into the element e's list of child tokens just before the requested index. If the index is greater than or equal to the length of the list of child tokens, the token t is added to the end of the list. #### func (*Element) [NamespaceURI](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L365) [¶](#Element.NamespaceURI) added in v1.1.0 ``` func (e *[Element](#Element)) NamespaceURI() [string](/builtin#string) ``` NamespaceURI returns the XML namespace URI associated with the element. If the element is part of the XML default namespace, NamespaceURI returns the empty string. #### func (*Element) [Parent](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L981) [¶](#Element.Parent) ``` func (e *[Element](#Element)) Parent() *[Element](#Element) ``` Parent returns the element token's parent element, or nil if it has no parent. #### func (*Element) [RemoveAttr](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L1065) [¶](#Element.RemoveAttr) ``` func (e *[Element](#Element)) RemoveAttr(key [string](/builtin#string)) *[Attr](#Attr) ``` RemoveAttr removes and returns a copy of the first attribute of the element whose key matches the given key. The key may be prefixed by a namespace prefix and a colon. If a matching attribute does not exist, nil is returned. #### func (*Element) [RemoveChild](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L630) [¶](#Element.RemoveChild) ``` func (e *[Element](#Element)) RemoveChild(t [Token](#Token)) [Token](#Token) ``` RemoveChild attempts to remove the token t from element e's list of children. If the token t is a child of e, then it is returned. Otherwise, nil is returned. 
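Index-based insertion pairs naturally with the Index accessor. A sketch of inserting between two existing children and then removing the inserted child (element names are invented):

```go
package main

import (
	"fmt"

	"github.com/beevik/etree"
)

func main() {
	doc := etree.NewDocument()
	list := doc.CreateElement("list")
	list.CreateElement("a")
	list.CreateElement("c")

	// Insert a new element between the two existing children.
	b := etree.NewElement("b")
	list.InsertChildAt(1, b)
	fmt.Println(b.Index()) // 1

	// RemoveChild returns the removed token, or nil if t is not a child of e.
	list.RemoveChild(b)
}
```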
#### func (*Element) [RemoveChildAt](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L640) [¶](#Element.RemoveChildAt) added in v1.1.0 ``` func (e *[Element](#Element)) RemoveChildAt(index [int](/builtin#int)) [Token](#Token) ``` RemoveChildAt removes the index-th child token from the element e. The removed child token is returned. If the index is out of bounds, no child is removed and nil is returned. #### func (*Element) [SelectAttr](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L707) [¶](#Element.SelectAttr) ``` func (e *[Element](#Element)) SelectAttr(key [string](/builtin#string)) *[Attr](#Attr) ``` SelectAttr finds an element attribute matching the requested key and returns it if found. Returns nil if no matching attribute is found. The key may be prefixed by a namespace prefix and a colon. #### func (*Element) [SelectAttrValue](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L720) [¶](#Element.SelectAttrValue) ``` func (e *[Element](#Element)) SelectAttrValue(key, dflt [string](/builtin#string)) [string](/builtin#string) ``` SelectAttrValue finds an element attribute matching the requested key and returns its value if found. The key may be prefixed by a namespace prefix and a colon. If the key is not found, the dflt value is returned instead. #### func (*Element) [SelectElement](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L744) [¶](#Element.SelectElement) ``` func (e *[Element](#Element)) SelectElement(tag [string](/builtin#string)) *[Element](#Element) ``` SelectElement returns the first child element with the given tag. The tag may be prefixed by a namespace prefix and a colon. Returns nil if no element with a matching tag was found. #### func (*Element) [SelectElements](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L756) [¶](#Element.SelectElements) ``` func (e *[Element](#Element)) SelectElements(tag [string](/builtin#string)) []*[Element](#Element) ``` SelectElements returns a slice of all child elements with the given tag. The tag may be prefixed by a namespace prefix and a colon. #### func (*Element) [SetCData](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L453) [¶](#Element.SetCData) added in v1.1.0 ``` func (e *[Element](#Element)) SetCData(text [string](/builtin#string)) ``` SetCData replaces all character data immediately following an element's opening tag with a CDATA section. #### func (*Element) [SetTail](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L484) [¶](#Element.SetTail) added in v1.1.0 ``` func (e *[Element](#Element)) SetTail(text [string](/builtin#string)) ``` SetTail replaces all character data immediately following the element's end tag with the requested string. #### func (*Element) [SetText](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L447) [¶](#Element.SetText) ``` func (e *[Element](#Element)) SetText(text [string](/builtin#string)) ``` SetText replaces all character data immediately following an element's opening tag with the requested string. #### func (*Element) [SortAttrs](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L1082) [¶](#Element.SortAttrs) added in v1.1.0 ``` func (e *[Element](#Element)) SortAttrs() ``` SortAttrs sorts the element's attributes lexicographically by key. #### func (*Element) [Tail](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L459) [¶](#Element.Tail) added in v1.1.0 ``` func (e *[Element](#Element)) Tail() [string](/builtin#string) ``` Tail returns all character data immediately following the element's end tag. 
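Text and tail occupy different positions relative to the element's tags: text follows the opening tag, tail follows the end tag. A sketch (the expected output is shown as a comment):

```go
package main

import (
	"fmt"

	"github.com/beevik/etree"
)

func main() {
	doc := etree.NewDocument()
	p := doc.CreateElement("p")
	span := p.CreateElement("span")
	span.SetText("inside") // character data after <span>
	span.SetTail(" after") // character data after </span>

	s, _ := doc.WriteToString()
	fmt.Print(s) // <p><span>inside</span> after</p>
}
```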
#### func (*Element) [Text](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L425) [¶](#Element.Text) ``` func (e *[Element](#Element)) Text() [string](/builtin#string) ``` Text returns all character data immediately following the element's opening tag. #### type [ErrPath](https://github.com/beevik/etree/blob/v1.1.0/path.go#L88) [¶](#ErrPath) ``` type ErrPath [string](/builtin#string) ``` ErrPath is returned by path functions when an invalid etree path is provided. #### func (ErrPath) [Error](https://github.com/beevik/etree/blob/v1.1.0/path.go#L91) [¶](#ErrPath.Error) ``` func (err [ErrPath](#ErrPath)) Error() [string](/builtin#string) ``` Error returns the string describing a path error. #### type [Path](https://github.com/beevik/etree/blob/v1.1.0/path.go#L83) [¶](#Path) ``` type Path struct { // contains filtered or unexported fields } ``` A Path is a compiled representation of a search path through an etree, starting from the document root or an arbitrary element. Paths are used with the Element object's Find* methods to locate and return desired elements. A Path consists of a series of slash-separated "selectors", each of which may be modified by one or more bracket-enclosed "filters". Selectors are used to traverse the etree from element to element, while filters are used to narrow the list of candidate elements at each node. Although etree Path strings are similar to XPath strings (<https://www.w3.org/TR/1999/REC-xpath-19991116/>), they have a more limited set of selectors and filtering options. The following selectors are supported by etree Path strings: ``` . Select the current element. .. Select the parent of the current element. * Select all child elements of the current element. / Select the root element when used at the start of a path. // Select all descendants of the current element. tag Select all child elements with a name matching the tag. ``` The following basic filters are supported by etree Path strings: ``` [@attrib] Keep elements with an attribute named attrib. [@attrib='val'] Keep elements with an attribute named attrib and value matching val. [tag] Keep elements with a child element named tag. [tag='val'] Keep elements with a child element named tag and text matching val. [n] Keep the n-th element, where n is a numeric index starting from 1. ``` The following function filters are also supported: ``` [text()] Keep elements with non-empty text. [text()='val'] Keep elements whose text matches val. [local-name()='val'] Keep elements whose un-prefixed tag matches val. [name()='val'] Keep elements whose full tag exactly matches val. [namespace-prefix()='val'] Keep elements whose namespace prefix matches val. [namespace-uri()='val'] Keep elements whose namespace URI matches val.
``` Here are some examples of Path strings: - Select the bookstore child element of the root element: ``` /bookstore ``` - Beginning from the root element, select the title elements of all descendant book elements having a 'category' attribute of 'WEB': ``` //book[@category='WEB']/title ``` - Beginning from the current element, select the first descendant book element with a title child element containing the text 'Great Expectations': ``` .//book[title='Great Expectations'][1] ``` - Beginning from the current element, select all child elements of book elements with an attribute 'language' set to 'english': ``` ./book/*[@language='english'] ``` - Beginning from the current element, select all child elements of book elements containing the text 'special': ``` ./book/*[text()='special'] ``` - Beginning from the current element, select all descendant book elements whose title child element has a 'language' attribute of 'french': ``` .//book/title[@language='french']/.. ``` - Beginning from the current element, select all book elements belonging to the <http://www.w3.org/TR/html4/> namespace: ``` .//book[namespace-uri()='http://www.w3.org/TR/html4/'] ``` Example [¶](#example-Path) ``` xml := `<bookstore><book><title>Great Expectations</title> <author><NAME></author></book><book><title>Ulysses</title> <author><NAME></author></book></bookstore>` doc := NewDocument() doc.ReadFromString(xml) for _, e := range doc.FindElements(".//book[author='<NAME>']") { doc := NewDocument() doc.SetRoot(e.Copy()) doc.Indent(2) doc.WriteTo(os.Stdout) } ``` ``` Output: <book> <title>Great Expectations</title> <author><NAME></author> </book> ``` #### func [CompilePath](https://github.com/beevik/etree/blob/v1.1.0/path.go#L97) [¶](#CompilePath) ``` func CompilePath(path [string](/builtin#string)) ([Path](#Path), [error](/builtin#error)) ``` CompilePath creates an optimized version of an XPath-like string that can be used to query elements in an element tree. #### func [MustCompilePath](https://github.com/beevik/etree/blob/v1.1.0/path.go#L110) [¶](#MustCompilePath) ``` func MustCompilePath(path [string](/builtin#string)) [Path](#Path) ``` MustCompilePath creates an optimized version of an XPath-like string that can be used to query elements in an element tree. Panics if an error occurs. Use this function to create Paths when you know the path is valid (i.e., if it's hard-coded). #### type [ProcInst](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L157) [¶](#ProcInst) ``` type ProcInst struct { Target [string](/builtin#string) Inst [string](/builtin#string) // contains filtered or unexported fields } ``` A ProcInst represents an XML processing instruction. #### func [NewProcInst](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L1385) [¶](#NewProcInst) ``` func NewProcInst(target, inst [string](/builtin#string)) *[ProcInst](#ProcInst) ``` NewProcInst creates a parentless XML processing instruction. #### func (*ProcInst) [Index](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L1429) [¶](#ProcInst.Index) added in v1.1.0 ``` func (p *[ProcInst](#ProcInst)) Index() [int](/builtin#int) ``` Index returns the index of this ProcInst token within its parent element's list of child tokens. If this ProcInst token has no parent element, the index is -1. #### func (*ProcInst) [Parent](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L1422) [¶](#ProcInst.Parent) ``` func (p *[ProcInst](#ProcInst)) Parent() *[Element](#Element) ``` Parent returns the processing instruction token's parent element, or nil if it has no parent.
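Compiling a path once with CompilePath (documented above) reports syntax errors as a value instead of a panic and lets the optimized Path be reused across queries. A sketch (the XML literal is invented):

```go
package main

import (
	"fmt"

	"github.com/beevik/etree"
)

func main() {
	// Compile once; reuse the Path for any number of queries.
	path, err := etree.CompilePath("//book/title")
	if err != nil {
		panic(err) // invalid path syntax
	}

	doc := etree.NewDocument()
	if err := doc.ReadFromString(`<bookstore><book><title>Ulysses</title></book></bookstore>`); err != nil {
		panic(err)
	}
	for _, title := range doc.FindElementsPath(path) {
		fmt.Println(title.Text()) // "Ulysses"
	}
}
```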
#### type [ReadSettings](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L30) [¶](#ReadSettings) ``` type ReadSettings struct { // CharsetReader to be passed to standard xml.Decoder. Default: nil. CharsetReader func(charset [string](/builtin#string), input [io](/io).[Reader](/io#Reader)) ([io](/io).[Reader](/io#Reader), [error](/builtin#error)) // Permissive allows input containing common mistakes such as missing tags // or attribute values. Default: false. Permissive [bool](/builtin#bool) // Entity to be passed to standard xml.Decoder. Default: nil. Entity map[[string](/builtin#string)][string](/builtin#string) } ``` ReadSettings allow for changing the default behavior of the ReadFrom* methods. #### type [Token](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L87) [¶](#Token) ``` type Token interface { Parent() *[Element](#Element) Index() [int](/builtin#int) // contains filtered or unexported methods } ``` A Token is an empty interface that represents an Element, CharData, Comment, Directive, or ProcInst. #### type [WriteSettings](https://github.com/beevik/etree/blob/v1.1.0/etree.go#L54) [¶](#WriteSettings) ``` type WriteSettings struct { // CanonicalEndTags forces the production of XML end tags, even for // elements that have no child elements. Default: false. CanonicalEndTags [bool](/builtin#bool) // CanonicalText forces the production of XML character references for // text data characters &, <, and >. If false, XML character references // are also produced for " and '. Default: false. CanonicalText [bool](/builtin#bool) // CanonicalAttrVal forces the production of XML character references for // attribute value characters &, < and ". If false, XML character // references are also produced for > and '. Default: false. CanonicalAttrVal [bool](/builtin#bool) // When outputting indented XML, use a carriage return and linefeed // ("\r\n") as a new-line delimiter instead of just a linefeed ("\n"). // This is useful on Windows-based systems. UseCRLF [bool](/builtin#bool) } ``` WriteSettings allow for changing the serialization behavior of the WriteTo* methods.
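Both settings structs are plain fields on Document and can be adjusted before reading or writing. A sketch (the input XML is made up):

```go
package main

import (
	"fmt"

	"github.com/beevik/etree"
)

func main() {
	doc := etree.NewDocument()
	// Tolerate common well-formedness mistakes in the input.
	doc.ReadSettings.Permissive = true
	if err := doc.ReadFromString(`<a><b/></a>`); err != nil {
		panic(err)
	}

	// Emit explicit end tags, even for childless elements.
	doc.WriteSettings.CanonicalEndTags = true
	s, _ := doc.WriteToString()
	fmt.Print(s) // <a><b></b></a>
}
```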
TDCor
cran
R
Package ‘TDCor’ October 12, 2022 Type Package Title Gene Network Inference from Time-Series Transcriptomic Data Version 0.1-2 Date 2015-10-05 Author <NAME> Maintainer <NAME> <<EMAIL>> Imports parallel Depends R (>= 3.1.2), deSolve Description The Time-Delay Correlation algorithm (TDCor) reconstructs the topology of a gene regulatory network (GRN) from time-series transcriptomic data. The algorithm is described in detail in Lavenus et al., Plant Cell, 2015. It was initially developed to infer the topology of the GRN controlling lateral root formation in Arabidopsis thaliana. The time-series transcriptomic dataset which was used in this study is included in the package to illustrate how to use it. License GPL (>= 2) NeedsCompilation no Repository CRAN Date/Publication 2015-10-26 15:58:36 R topics documented: TDCor-package, CalculateDPI, CalculateTPI, clean.at, draw.profile, estimate.delay, LR_dataset, l_genes, l_names, l_prior, shortest.path, TDCOR, TF, times, UpdateDPI, UpdateTPI TDCor-package TDCor algorithm for gene regulatory network inference Description TDCor (Time-Delay Correlation) is an algorithm designed to infer the topology of a gene regulatory network (GRN) from time-series transcriptomic data. The algorithm is described in detail in Lavenus et al., Plant Cell, 2015. It was initially developed to infer the topology of the GRN controlling lateral root formation in Arabidopsis thaliana. The time-series transcriptomic dataset analysed in this study is included in the package. Details Package: TDCor Type: Package Version: 1.2 License: GNU General Public License Version 2 The reconstruction of a gene network using the TDCor package involves six steps. 1. Load the averaged non-log2 time series transcriptomic data into the R workspace. 2. Define the vector times containing the times (in hours) at which the samples were collected. 3. Define the vector containing the gene codes of the genes you want to reconstruct the network with (e.g. see l_genes), as well as the associated gene names (e.g. see l_names) and the associated prior (e.g. see l_prior). 4. Build or update the TPI database using the CalculateTPI or UpdateTPI functions. 5. Build or update the DPI database using the CalculateDPI or UpdateDPI functions. 6. Reconstruct the network using the TDCOR main function. See examples below. Besides the functions of the TDCor algorithm, the package also contains the lateral root transcriptomic dataset (LR_dataset), the times vector to use with this dataset (times), the vector of AGI gene codes used to reconstruct the network shown in the original paper (l_genes), the vector of the gene names (l_names) and the prior (l_prior). The associated TPI and DPI databases (TPI10 and DPI15) which were used to build the network shown in the original paper are not included. Hence to reconstruct the lateral root network, these first need to be generated. A database of about 1800 Arabidopsis transcription factors is also included (TF). Three side functions, estimate.delay, shortest.path and draw.profile, are also available to the user. These can be used to visualize the transcriptomic data, optimize some of the TDCOR parameters, and analyze the networks. Author(s) Author: <NAME> <<EMAIL>> Maintainer: <NAME> <<EMAIL>> References Lavenus et al. (2015), Inference of the Arabidopsis lateral root gene regulatory network suggests a bifurcation mechanism that defines primordia flanking and central zones. The Plant Cell, in press.
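Before the function-level pages below, a quick illustration of the prior coding used throughout the package (the gene codes here are hypothetical placeholders, not genes from the LR dataset): an activator is declared with 1, a target-only gene with 0, a repressor with -1, and a dual regulator with 2.
# Hypothetical three-gene setup illustrating the prior code
l_genes_demo=c("AT1G00001","AT1G00002","AT1G00003") # hypothetical AGI codes
l_names_demo=c("geneA","geneB","geneC")
l_prior_demo=c(1,0,-1) # 1 = activator, 0 = target only, -1 = repressor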
See Also See also CalculateDPI, CalculateTPI, UpdateDPI, UpdateTPI, TDCOR, estimate.delay. Examples ## Not run: # Load the LR transcriptomic dataset data(LR_dataset) # Load the vectors of gene codes, gene names and prior data(l_genes) data(l_names) data(l_prior) # Load the vector of time points for the LR_dataset data(times) # Generate the TPI database (this may take several hours) TPI10=CalculateTPI(dataset=LR_dataset,l_genes=l_genes,l_prior=l_prior, times=times,time_step=1,N=10000,ks_int=c(0.5,3),kd_int=c(0.5,3), delta_int=c(0.5,3),noise=0.1,delay=3) # Generate the DPI database (this may take several hours) DPI15=CalculateDPI(dataset=LR_dataset,l_genes=l_genes,l_prior=l_prior, times=times,time_step=1,N=10000,ks_int=c(0.5,3),kd_int=c(0.5,3), delta_int=c(0.5,3), noise=0.15, delay=3) # Check/update if necessary the databases TPI10=UpdateTPI(TPI10,LR_dataset,l_genes,l_prior) DPI15=UpdateDPI(DPI15,LR_dataset,l_genes,l_prior) # Choose your parameters ptime_step=1 ptol=0.13 pdelayspan=12 pthr_cor=c(0.65,0.8) pdelaymax=c(2.5,3.5) pdelaymin=0 pdelay=3 pthrpTPI=c(0.55,0.8) pthrpDPI=c(0.65,0.8) pthr_overlap=c(0.4,0.6) pthr_ind1=0.65 pthr_ind2=3.5 pn0=1000 pn1=10 pregmax=5 pthr_isr=c(4,6) pTPI=TPI10 pDPI=DPI15 pMinTarNumber=5 pMinProp=0.6 poutfile_name="TDCor_output.txt" # Reconstruct the network tdcor_out= TDCOR(dataset=LR_dataset, l_genes=l_genes,l_names=l_names,n0=pn0,n1=pn1, l_prior=l_prior, thr_ind1=pthr_ind1,thr_ind2=pthr_ind2,regmax=pregmax,thr_cor=pthr_cor, delayspan=pdelayspan,delaymax=pdelaymax,delaymin=pdelaymin,delay=pdelay,thrpTPI=pthrpTPI, thrpDPI=pthrpDPI,TPI=pTPI,DPI=pDPI,thr_isr=pthr_isr,time_step=ptime_step,thr_overlap=pthr_overlap, tol=ptol,MinProp=pMinProp,MinTarNumber=pMinTarNumber,outfile_name=poutfile_name) ## End(Not run) CalculateDPI Generate the DPI database to be used by the TDCOR main function Description CalculateDPI builds a DPI database for the TDCOR main function to prune diamond motifs. Usage CalculateDPI(dataset,l_genes, l_prior, times, time_step, N, ks_int, kd_int, delta_int, noise, delay) Arguments dataset Numerical matrix storing the transcriptomic data. The rows of this matrix must be named by gene codes (like AGI gene codes for Arabidopsis data). l_genes A character vector containing the gene codes of the genes included in the analysis (i.e. to be used to build the network). l_prior A numerical vector containing the prior information on the genes included in the network reconstruction. By defining the l_prior vector, the user defines which genes should be regarded as positive regulators, which others as negative regulators and which can only be targets. The prior code is defined as follows: -1 for negative regulator; 0 for non-regulator (target only); 1 for positive regulator; 2 for both positive and negative regulator. The i-th element of the vector is the prior to associate to the i-th gene in l_genes. times A numerical vector containing the successive times at which the samples were collected to generate the time-series transcriptomic dataset. time_step A positive number corresponding to the time step (in hours) i.e. the temporal resolution at which the gene profiles are analysed. N An integer corresponding to the number of iterations that are carried out in order to estimate the DPI distributions. N should be >5000. ks_int A numerical vector containing two positive elements in increasing order.
The first (second) element is the lower (upper) boundary of the interval into which the equation parameters corresponding to the regulation strength of the targets by their regulators are randomly sampled. kd_int A numerical vector containing two positive elements in increasing order. The first (second) element is the lower (upper) boundary of the interval into which the equation parameters corresponding to the transcripts degradation rates are randomly sampled. delta_int A numerical vector containing two positive elements in increasing order and expressed in hours. The first (second) element is the lower (upper) boundary of the sampling interval for the equation parameters corresponding to the time needed for the transcripts of the regulator to mature, to get exported out of the nucleus, to get translated and for the regulator protein to get imported into the nucleus and to bind its target promoter. noise A positive number between 0 and 1 corresponding to the noisiness of the system (0 = no noise, 1 = very strong noise). noise should not be too high (for instance, it should stay below 0.2). delay A positive number corresponding to the time shift (in hours) that is expected between the profile of a regulator and its direct target. This parameter is used to generate a reference target profile from the profile of the regulator and calculate the DPI index. Details CalculateDPI models two 4-gene networks showing slightly different topologies. Each network topology is modelled using a specific system of delay differential equations. For all genes listed in l_genes whose corresponding prior in l_prior is not null (i.e. the genes that are regarded as transcriptional regulators), the two systems of differential equations are solved N times with N different sets of random parameters. The Diamond Pruning Index (DPI) is calculated for all of these 2N networks. From these in silico data the conditional probability distribution of the DPI index given the regulator and the topology can be estimated. The probability distribution of the topology given DPI and the regulator is next calculated using Bayes’ theorem and returned by the function. These shall be used when reconstructing the network to prune the "diamond" motifs. CalculateDPI returns a list object which works as a database. It not only stores the conditional probability distributions but also all the necessary information for TDCOR to access the data, and the input parameters. The latter are read by the UpdateDPI function to update the database. Value CalculateDPI returns a list object. prob_DPI_ind A numerical vector whose elements are named by the vector l_genes; the element named gene i contains 0 if no probability distribution has been calculated for this gene (because its prior is 0) or a positive integer if this has been done. This positive integer then corresponds to the number of the element in the list prob_DPI that stores the spline functions of the calculated conditional probability distributions associated with this particular regulator. prob_DPI A list storing lists of 3 spline functions of probability distributions. Each of the spline functions corresponds to the probability distribution of one topology given a regulator and a DPI value. The information about which regulator was used to generate the distributions stored in the i-th element of prob_DPI is stored in the prob_DPI_ind vector. prob_DPI_domain A list storing vectors of two elements.
The first (second) element of element i is the lowest (greatest) DPI value obtained during the simulation with the regulator i. input A list that stores the input parameters used to generate the database. Note The computation of the TPI and DPI databases is time-consuming as it requires many systems of differential equations to be solved. It may take several hours to build a database for a hundred genes. Author(s) <NAME> <<EMAIL>> See Also See also UpdateDPI, TDCor-package. Examples ## Not run: # Load the LR transcriptomic dataset data(LR_dataset) # Load the vector of gene codes, gene names and prior data(l_genes) data(l_names) data(l_prior) # Load the vector of time points for the LR_dataset data(times) # Generate a small DPI database (3 genes) DPI_example=CalculateDPI(dataset=LR_dataset,l_genes=l_genes[4:6],l_prior=l_prior[4:6], times=times,time_step=1,N=5000,ks_int=c(0.5,3),kd_int=c(0.5,3),delta_int=c(0.5,3), noise=0.15,delay=3) ## End(Not run) CalculateTPI Generate the TPI database to be used by the TDCOR main function Description CalculateTPI builds a TPI database for the TDCOR main function to prune triangle motifs. Usage CalculateTPI(dataset,l_genes, l_prior, times, time_step, N, ks_int, kd_int, delta_int, noise, delay) Arguments dataset Numerical matrix storing the transcriptomic data. The rows of this matrix must be named by gene codes (AGI gene codes for Arabidopsis data). l_genes A character vector containing the gene codes of the genes included in the analysis (i.e. to be used to build the network). l_prior A numerical vector containing the prior information on the genes included in the network reconstruction. By defining the l_prior vector, the user defines which genes should be regarded as positive regulators, which others as negative regulators and which can only be targets. The prior code is defined as follows: -1 for negative regulator; 0 for non-regulator (target only); 1 for positive regulator; 2 for both positive and negative regulator. The i-th element of the vector is the prior to associate to the i-th gene in l_genes. times A numerical vector containing the successive times at which the samples were collected to generate the time-series transcriptomic dataset. time_step A positive number corresponding to the time step (in hours) i.e. the temporal resolution at which the gene profiles are analysed. N An integer corresponding to the number of iterations that are carried out in order to estimate the TPI distributions. N should be >5000. ks_int A numerical vector containing two positive elements in increasing order. The first (second) element is the lower (upper) boundary of the interval into which the equation parameters corresponding to the regulation strength of the targets by their regulators are randomly sampled. kd_int A numerical vector containing two positive elements in increasing order. The first (second) element is the lower (upper) boundary of the interval into which the equation parameters corresponding to the transcripts degradation rates are randomly sampled. delta_int A numerical vector containing two positive elements in increasing order and expressed in hours. The first (second) element is the lower (upper) boundary of the sampling interval for the equation parameters corresponding to the time needed for the transcripts of the regulator to mature, to get exported out of the nucleus, to get translated and for the regulator protein to get imported into the nucleus and to bind its target promoter.
noise A positive number between 0 and 1 corresponding to the noisiness of the system (0 = no noise, 1 = very strong noise). noise should not be too high (for instance, it should stay below 0.2). delay A positive number corresponding to the time shift (in hours) that is expected between the profile of a regulator and its direct target. This parameter is used to generate a reference target profile from the profile of the regulator and calculate the TPI index. Details CalculateTPI models three 3-gene networks showing slightly different topologies. Each network topology is modelled using a specific system of delay differential equations. For all genes listed in l_genes whose corresponding prior in l_prior is not null (i.e. the genes that are regarded as transcriptional regulators), the three systems of differential equations are solved N times with N different sets of random parameters. The Triangle Pruning Index (TPI) is calculated for all of these 3N networks. From these in silico data the conditional probability distribution of the TPI index given the regulator and the topology can be estimated. The probability distribution of the topology given TPI and the regulator is next calculated using Bayes’ theorem and returned by the function. These shall be used when reconstructing the network to prune the "triangle" motifs. CalculateTPI returns a list object which works as a database. It not only stores the calculated probability distributions but also information on how to access the data, and the input parameters. The latter are read by the UpdateTPI function to update the database. Value CalculateTPI returns a list object. prob_TPI_ind A numerical vector whose elements are named by the vector l_genes; the element named gene i contains 0 if no probability distribution has been calculated for this gene (because its prior is 0) or a positive integer if this has been done. This positive integer then corresponds to the number of the element in the list prob_TPI that stores the spline functions of the calculated conditional probability distributions associated with this particular regulator. prob_TPI A list storing lists of 3 spline functions of probability distributions. Each of the spline functions corresponds to the probability distribution of one topology given a regulator and a TPI value. The information about which regulator was used to generate the distributions stored in the i-th element of prob_TPI is stored in the prob_TPI_ind vector. prob_TPI_domain A list storing vectors of two elements. The first (second) element of element i is the lowest (greatest) TPI value obtained during the simulation with the regulator i. input A list that stores the input parameters used to generate the database. Note The computation of the TPI and DPI databases is time-consuming as it requires many systems of differential equations to be solved. It may take several hours to build a database for a hundred genes. Author(s) <NAME> <<EMAIL>> See Also See also UpdateTPI, TDCor-package.
Examples ## Not run: # Load the lateral root transcriptomic dataset data(LR_dataset) # Load the vectors of gene codes, gene names and prior data(l_genes) data(l_names) data(l_prior) # Load the vector of time points for the lateral root dataset data(times) # Generate a small TPI database (3 genes) TPI_example=CalculateTPI(dataset=LR_dataset,l_genes=l_genes[4:6], l_prior=l_prior[4:6],times=times,time_step=1,N=5000,ks_int=c(0.5,3), kd_int=c(0.5,3),delta_int=c(0.5,3),noise=0.1,delay=3) ## End(Not run) clean.at Eliminate from a vector of gene codes the genes for which no data is available. Description clean.at removes from a vector of gene codes l_genes all the elements for which no data is present in the matrix dataset. Usage clean.at(dataset,l_genes) Arguments dataset A matrix containing the time-series transcriptomic data whose rows must be named by gene codes (like AGI gene codes). l_genes A character vector which contains gene codes (AGI gene codes in the case of the lateral root dataset). Examples ## Load lateral root transcriptomic dataset and the l_genes vector data(LR_dataset) data(l_genes) # Clean the l_genes vector clean.at(LR_dataset,l_genes) draw.profile Plot the expression profile of a gene in dataset Description draw.profile plots the expression profile of gene in dataset with respect to times. Usage draw.profile(dataset, gene, ...) Arguments dataset The matrix storing the time-series transcriptomic data. gene The AGI code of the gene of interest. ... Additional arguments to be passed to the function: • col: String. Color of the curve. • type: String. Type of curve. "l", lines; "p", points; "b", both etc... For more information see the help file of the plot R function. • main: String. Title of the graph. Author(s) <NAME> (<<EMAIL>>) Examples # draw the profile of GATA23 in the LR dataset data(LR_dataset) data(times) draw.profile(LR_dataset,"AT5G26930",col="blue",main="GATA23") estimate.delay Estimate the time shift between two gene profiles and make a plot Description estimate.delay computes the delay/time shift between two gene expression profiles contained in dataset. It returns a list with one or two estimated time shifts and their associated correlation. By default the function also returns a plot composed of four panels which show in more detail how these estimates were obtained. This can help the user find the appropriate parameter values to be used with the TDCOR main function. For more details see below. Usage estimate.delay(dataset, tar, reg, times, time_step, thr_cor, tol, delaymax, delayspan, ...) Arguments dataset Numerical matrix storing the non-log2 transcriptomic data (average of replicates). The rows of this matrix must be named by gene codes (e.g. the AGI gene code for Arabidopsis datasets). The columns must be organized in chronological order from the left to the right. tar The gene code of the gene to be regarded as the target. reg The gene code of the gene to be regarded as the regulator. times A numerical vector containing the successive times (in hours) at which the samples were collected to generate the time-series transcriptomic dataset. time_step A positive number corresponding to the time step (in hours) i.e. the temporal resolution at which the gene profiles are analysed. thr_cor A number between 0 and 1 corresponding to the threshold on Pearson’s correlation. The delay will be computed only if the absolute correlation between the profiles is higher than this threshold.
Otherwise the genes are considered to have profiles that are too dissimilar, and computing the time shift would not make any sense. tol The tolerance threshold for the score. The score is a positive number used to rank the time shift estimates. The best score possible for a time shift estimate is 0. If the score is above the tolerance threshold, the time shift estimate will be ignored. delaymax The maximum time shift possible for a direct interaction (in hours). delayspan The maximum time shift (in hours) which will be analysed. It should be high enough for the time shift estimation process to be successful but relatively small in comparison to the overall duration of the time series. (e.g. for the LR dataset which has data spanning over 54 hours, delayspan was set to 12 hours). ... Additional optional arguments. • make.graph: A boolean. Set to FALSE to prevent the function from generating a graph. • tar.name: A string. "Everyday name" of the target. This name will be used in the plots instead of the default value (gene code). • reg.name: A string. "Everyday name" of the regulator. This name will be used in the plots instead of the default value (gene code). • main: A string. Main title of the plot. By default the title of the plot is automatically generated from tar.name and reg.name. Details Negative time shifts occur when the gene that the user set as the regulator could actually be the target. When two time shifts are returned, one is necessarily positive and the other is negative. When the only time shift estimate is zero, the function does not return any estimate. The function automatically guesses the sign of the potential interaction (stimulatory or inhibitory) and adapts the analysis based on it. The sign of the potential interaction is indicated in the main title of the graph by (+) or (-). When both types of interaction are possible, the function generates two graphs (one for each sign). The function returns by default a graph composed of four panels. The top panel shows the spline functions of the two normalised expression profiles with respect to time. The second panel consists of the plots of the F1 and F2 functions with respect to the time shift (mu). The third one is for the F3 and F4 functions. All of these four functions aim at estimating the time shift between the two expression profiles by minimizing a distance-like measurement. But they each do it in a slightly different manner. F1 and F3 use Pearson’s correlation as a measure of distance while F2 and F4 use the sum of squares. Moreover F1 and F2 measure the distances directly between the spline functions while F3 and F4 do it between the first derivatives of these functions. The vertical red and purple lines in the second and third panel indicate the position of the respective maximum or minimum of the functions. In the fourth and last panel, the final score function is plotted. This score is computed for each possible time shift analysed by combining the four above-mentioned functions. The green horizontal line indicates the position of the tolerance threshold (tol) above which time shift estimates are rejected. The vertical dark grey line(s) represent(s) the position of the final estimated time shift(s). All these lines necessarily fall into regions where the score function is below the threshold (painted in light green). Other vertical light grey line(s) may indicate other time shift estimate(s) that have a score above the tolerance threshold and were therefore rejected. Value The function returns a list.
The first element (delay) is a numerical vector containing the time-shift estimate(s). The second element (correlation) is another numerical vector containing the associated correlation. The function also returns a graph as explained above. Author(s) <NAME> <<EMAIL>> Examples # Load the data data(LR_dataset) data(l_genes) data(l_names) data(times) # Estimate the time shift between LBD16 and PUCHI (one time shift estimate returned) estimate.delay(dataset=LR_dataset, tar=l_genes[which(l_names=="PUCHI")], reg=l_genes[which(l_names=="LBD16")], times=times, time_step=1, thr_cor=0.7, tol=0.15, delaymax=3, delayspan=12, reg.name="LBD16",tar.name="PUCHI") # Estimate the time shift between ARF8 and PLT1 (two time shift estimates returned) estimate.delay(dataset=LR_dataset, tar=l_genes[which(l_names=="PLT1")], reg=l_genes[which(l_names=="ARF8")], times=times, time_step=1, thr_cor=0.7, tol=0.15, delaymax=3, delayspan=12, reg.name="ARF8",tar.name="PLT1") LR_dataset Lateral root transcriptomic dataset Description LR_dataset is a matrix of dimension 15240 lines x 18 columns. It stores a time-series transcriptomic dataset following the changes occurring in a young Arabidopsis root during the formation of a lateral root. To generate this dataset, lateral root formation was locally induced by a gravistimulus at t=0 and the stimulated part of the roots was collected every 3 hours from 6 hours to 54 hours. The transcriptomes were analyzed using the ATH1 Affymetrix chip. For time point 0, unstimulated young primary root was taken as a control. Importantly, the transcript accumulation levels stored in this dataset are non-log2 values. Usage data("LR_dataset") Details The experiment spanned 54 hours in order to cover all aspects of lateral root development. Using this method, lateral root initiation (the first pericycle divisions) occurs synchronously in all stimulated roots around 12 hours after stimulation and the fully formed lateral root emerges from the parental root around 45 hours. The dataset contains data for all significantly differentially expressed genes. Each column is the average of 4 independent replicates. The columns are organized in the following order: 0, 6, 9, 12, 15, 18, 21, 24, 27, 30, 33, 36, 39, 42, 45, 48, 51 and 54 hours. Each line of the matrix is labelled with an AGI gene code (Arabidopsis Genome Initiative gene code). Source Voss et al., Lateral root organ initiation re-phases the circadian clock in Arabidopsis thaliana. Nature Communications, in revision. Examples # Load the dataset data(LR_dataset) # Have a look at the first rows head(LR_dataset) l_genes l_genes Description Character vector containing the AGI gene codes of the genes used to reconstruct the network in Lavenus et al. 2015, Plant Cell. Usage data("l_genes") Examples # Load the vector data(l_genes) # Have a look at it l_genes l_names l_names Description Character vector containing the ’everyday names’ of the genes used to reconstruct the network in Lavenus et al. 2015, Plant Cell. Usage data("l_names") Examples # Load the vector data(l_names) # Have a look at it l_names l_prior l_prior Description Vector containing the prior associated with the genes included in the network reconstruction in Lavenus et al. 2015, Plant Cell. By defining the l_prior vector, the user defines which genes should be regarded as positive regulators, which others as negative regulators and which can only be targets.
The prior code is defined as follows: -1 for negative regulator; 0 for non-regulator (target only); 1 for positive regulator; 2 for both positive and negative regulator. The i-th element of the vector is the prior to associate to the i-th gene in l_genes. Usage data("l_prior") Examples # Load the vector data(l_prior) # Have a look at it l_prior shortest.path Calculate the shortest path linking every pair of nodes in the network Description shortest.path computes the shortest influence path (in number of edges) linking every possible regulator/target pair in the network. Usage shortest.path(bootstrap, BS_thr) Arguments bootstrap A square numerical matrix representing a network. The element [i,j] of this matrix is the signed bootstrap value for the edge ’gene j to gene i’. The sign of this element indicates the sign of the predicted interaction (i.e. whether it is inhibitory or stimulatory) and the absolute value of the element is the bootstrap value. BS_thr Minimum bootstrap threshold for an edge to be taken into consideration in the analysis. The edges with bootstrap values below this threshold are ignored. Details The paths are signed in order to keep track of the type of influence that the genes have on each other. If a path leads to the inhibition of a gene by another, shortest.path will return a negative number for this "pair" (note that the (i,j) pair is not regarded as being the same as the (j,i) pair). Because the network is directed, edges can only be followed in one direction: from the regulator to the target. Hence if the network contains an edge from gene i to gene j, the length of the shortest path from i to j is 1 edge and therefore the function returns either 1 or -1 (depending on the sign of the interaction) for the length of the i to j path. In the absence of feedback loops between i and j, the network does not contain any path from gene j to gene i. In this case shortest.path shall return 0 for the length of the j to i path. Otherwise it will return the minimum number of edges to follow to go from j to i. Value shortest.path returns a list containing two matrices. SP A square numerical matrix. The element [i,j] stores the signed shortest path from gene j to gene i. The sign indicates the type of regulatory influence (stimulatory or inhibitory) that gene j has on gene i through the shortest path. BS A square numerical matrix. The element [i,j] stores the geometric mean of the bootstrap values of the edges located on the shortest path from gene j to gene i. Author(s) <NAME> <<EMAIL>> Examples ## Example with a 3-gene network where gene A upregulates B which upregulates A; and C represses B. ## The three edges have different bootstrap values (100, 60 and 55) network=data.frame(matrix(c(0,100,0,0,60,0,0,-55,0),3,3)) names(network)=c("gene A","gene B","gene C") rownames(network)=c("gene A","gene B","gene C") shortest.path(as.matrix(network),1) TDCOR The TDCOR main function Description This is the main function to run the TDCOR algorithm and reconstruct the gene network topology. Usage TDCOR(dataset,l_genes, TPI, DPI, ...) Arguments dataset Numerical matrix storing the non-log2 transcriptomic data (average of replicates). The rows of this matrix must be named by gene codes (e.g. the AGI gene code for Arabidopsis datasets). The columns must be organized in chronological order from the left to the right. l_genes A character vector containing the (AGI) gene codes of the genes one wishes to build the network with (gene codes -e.g.
"AT5G26930"- by opposed to gene names -e.g."GATA23"- which are provided by the optional l_names argument). TPI A TPI database generated by CalculateTPI which contains some necessary statistical information for triangle motifs pruning. In particular it must have an entry for all the regulators included in the network analysis. The TPI database may also contain data for genes that are not included in l_genes. DPI A DPI database generated by CalculateDPI which contains some necessary statistical information for diamond motifs pruning. In particular it must have an entry for all the regulators included in the network analysis. The DPI database may also contain data for genes that are not included in l_genes. ... Additional arguments to be passed to the TDCOR function (Some are necessary if dataset is not the LR dataset): • l_names: A character vector containing the ’everyday names’ of the genes included in the analysis (e.g. "LBD16") . These are the names by which genes will be refered as to in the final network table. Gene names must be unique; repeats of the same name are not allowed. If no l_names parameter is given, it is by default equal to l_genes. The i-th element of the vector l_names contains the ’everyday name’ of the i-th gene in l_genes. • l_prior: A numerical vector containing the prior information on the genes included in the analysis. By defining the l_prior, the user defines which genes are positive regulators, which are negative regulators and which can only be targets. The prior code is defined as follow: -1 for negative regu- lator; 0 for non-regulator (target only); 1 for positive regulator; 2 for genes that can act as both positive and negative regulators. If no l_prior param- eter is provided, it is by default equal to a vector of 2s, meaning that all genes are regarded as being both potential activators and repressors. The i-th element of the vector l_prior contains the prior to associate to the i-th gene in l_genes. • times: A numerical vector containing the successive times (in hours) at which the samples were collected to generate the time-series transcriptomic dataset.If no times parameter is given, it is by default equal to the times parameter used for the lateral root transcriptomic dataset. • n0: An integer corresponding to the number of iterations to be performed in the external bootstrap loop. At the beginning of every iteration of this external loop, new random parameter values are sampled in the user-defined bootstrapping interval. If no n0 parameter is given, it is by default equal to 1000. • n1 : An integer corresponding to the number of iterations to be performed in the internal bootstrap loop. In this loop parameter values are kept the same but the order of node analysis is randomized at each iteration. If no n1 parameter is provided, it is by default equal to 10. • time_step: A positive number corresponding to the time step (in hours) i.e. the temporal resolution at which the gene profiles are analysed. If no time_step parameter is provided, it is by default equal to 1 hour. • delayspan: A positive number. It is the maximum time shift (in hours) which is analysed. It should be high enough for the time shift estimation process to be successful but argueably relatively small in comparison to the overall duration of the time series. (e.g. for the LR dataset which has data spanning over 54 hours, delayspan was set to 12 hours). • tol: A strictly positive number corresponding to the tolerance threshold on the final score of time shift estimates. 
The score is a positive number that measures the "level of disagreement" between the four time shift estimators. A time shift estimate is regarded as meaningful if it scores lower than the tol threshold. If all four estimators agree on a certain value of time shift, the estimate obtains the best possible score, which is 0. Increasing tol makes the time shift estimation process LESS stringent. For more information see estimate.delay. • delaymin: A numerical vector containing one or two positive elements corresponding to the boundaries of the bootstrapping interval for the minimum time shift above which putative interactions are regarded as possible. If no delaymin parameter is provided, it is by default equal to 0 hours. Gene pairs with a time shift lower than or equal to delaymin are regarded as co-regulation and are therefore not included in the network. • delaymax: A numerical vector containing one or two positive elements corresponding to the boundaries of the bootstrapping interval for the maximum time shift above which putative interactions are regarded as indirect. If no delaymax parameter is provided, it is by default equal to 3 hours. Putative indirect interactions are included in the network only when no direct regulator is predicted for the putative target. • thr_cor: A numerical vector containing one or two positive elements between 0 and 1 corresponding to the boundaries of the bootstrapping interval for the threshold of Pearson's correlation. A gene pair is included in the preliminary network only if the correlation between the profiles (with the time shift correction) is higher than or equal to the thr_cor threshold. If no thr_cor parameter is provided, it is by default equal to [0.7,0.9]. Note that increasing thr_cor makes the correlation filter MORE stringent. • delay: A positive number corresponding to the most likely time shift (in hours) one could expect between the profile of a regulator and the profile of its direct targets. This parameter enables one to generate the reference profiles of the ideal regulator when calculating the index of directness (ID). If no delay parameter is provided, it is by default equal to 3 hours. Note that similar parameters serving the same purpose are also used to calculate the triangle pruning index and the diamond pruning index, but TDCOR reads the value to use for calculating those indices directly from the TPI and DPI databases (for consistency reasons). • thr_ind1: A numerical vector containing one or two positive elements corresponding to the boundaries of the bootstrapping interval for the index of directness (ID) lower threshold. Gene pairs showing an ID below this threshold will be regarded as co-regulation and therefore eliminated from the network. If no thr_ind1 parameter is provided, it is by default equal to 0.5. Reminder: for direct interactions one expects ID values around 1. For indirect interactions one expects values greater than 1. For co-regulated genes, ID should be smaller than 1. Note that increasing thr_ind1 makes the ID-based "anti-coregulation filter" MORE stringent. • thr_ind2: A numerical vector containing one or two positive elements corresponding to the boundaries of the bootstrapping interval for the index of directness (ID) upper threshold. Putative interactions showing an ID above this threshold are regarded as indirect. If no thr_ind2 parameter is provided, it is by default equal to 4.5. Reminder: for direct interactions one expects ID values around 1.
For indirect interactions one expects values greater than 1. For co-regulated genes, ID should be smaller than 1. Note that increasing thr_ind2 makes the ID-based filter against indirect interactions LESS stringent. • thr_overlap: A numerical vector containing one or two positive elements smaller than 1. These correspond to the boundaries of the bootstrapping interval for the index of overlap. If no thr_overlap parameter is provided, it is by default equal to [0.5,0.6]. Note that increasing thr_overlap makes the overlap filter MORE stringent. This filter aims at removing unlikely negative interactions where the putative regulator switches on too late to downregulate the putative target. Keep in mind that the filter is sensitive to the noise level in the data. It should only be used if the data has a very low level of noise. To inactivate the filter, set the thr_overlap parameter to 0. • thrpTPI: A numerical vector containing one or two positive numbers smaller than or equal to 1, in increasing order. These correspond to the boundaries of the bootstrapping interval for the probability threshold used in the triangle filter. If no thrpTPI parameter is provided, it is by default equal to [0.5,0.75]. Note that increasing thrpTPI makes the triangle filter LESS stringent. • thrpDPI: A numerical vector containing one or two positive numbers smaller than or equal to 1, in increasing order. These correspond to the boundaries of the bootstrapping interval for the probability threshold used in the diamond filter. If no thrpDPI parameter is provided, it is by default equal to [0.8,0.9]. Note that increasing thrpDPI makes the diamond filter LESS stringent. • thr_isr: A numerical vector containing one or two positive elements corresponding to the boundaries of the bootstrapping interval for the threshold of the index of directness above which the gene is predicted to negatively self-regulate. Genes will be predicted to positively self-regulate if the index of directness is smaller than 1/thr_isr. If no thr_isr parameter is provided, it is by default equal to [3,6]. Note that increasing thr_isr makes the search for self-regulating genes MORE stringent. • search.EP: A boolean controlling whether Master-Regulator-Signal-Transducers (MRST), or signal Entry Points (EP), should be looked for (if yes, set to TRUE, which is the default value). • thr_bool_EP: A number between 0 and 1 used as a threshold to convert normalized expression profiles (values between 0 and 1) into boolean expression profiles (values equal to 0 or 1). If no thr_bool_EP parameter is provided, it is by default equal to 0.8. The conversion of the continuous profiles into boolean profiles is part of the process of MRST analysis. • MinTarNumber: An integer. Minimum number of targets a regulator should have in order to be regarded as a potential MRST. If no MinTarNumber parameter is provided, it is by default equal to 5. Note that increasing MinTarNumber makes the search for MRST genes MORE stringent. • MinProp: A number between 0 and 1. Minimum proportion of targets which are not at steady state at t=0 that a regulator should have in order to be regarded as a potential MRST. If no MinProp parameter is provided, it is by default equal to 0.75. Note that increasing MinProp makes the search for MRST genes MORE stringent. • MaxEPNumber: An integer. Maximum number of MRSTs that can be predicted at each iteration. If no MaxEPNumber parameter is provided, it is by default equal to 1. • regmax: An integer.
Maximum number of regulators that a target may have. If no regmax parameter is provided, it is by default equal to 6. • outfile_name: A string. Name of the file to print the network table in. By default it is "TDCor_output.txt". Details The default values are certainly not the best values to work with. The TDCOR parameters have to be optimized by the user based on their own knowledge of the network, the quality of the data, etc. Because TDCOR works by pruning interactions, it is probably easier (as a first go) to optimize the parameter values following the order of the filters. Before starting, inactivate all the filters by using the least stringent parameter values possible or, for the MRST filter, by setting search.EP to FALSE. You should also set the bootstrap parameters to relatively low values (e.g. n0=100 and n1=1). Hence the runs will be quick and you will be able to rapidly assess whether the changes you made to the parameter values were beneficial. Start by optimizing the parameters involved in time shift estimation, that is to say, essentially delayspan, time_step, tol and delaymax. The latter (together with delaymin) is a biological parameter and the range of possible values is arguably limited, though they ought to be adapted to the organism (e.g. in prokaryotes, the delays are extremely short since polysomes couple transcription and translation). Note that the estimate.delay function can be very helpful to optimize these various parameters thanks to its visual output. Use it with pairs of genes that have been shown to interact directly or indirectly in your system and for which the relationship in the dataset is clearly linear. For network reconstruction with TDCor, good time shift estimation is absolutely crucial. Once this is done, proceed with optimizing the threshold for correlation thr_cor and the thresholds on the index of directness (thr_ind1, thr_ind2). Then optimize the parameters of the triangle and diamond pruning filters (thrpTPI and thrpDPI). You may have to try a couple of different TPI and DPI databases (i.e. databases built with different input parameters). In particular, increasing the noise level when generating these databases enables one to decrease the stringency of the triangle and diamond filters when increasing the thrpTPI and thrpDPI values is not sufficient. Subsequently fine-tune the parameters of the MRST filter (thr_bool_EP, MinTarNumber, MinProp, MaxEPNumber) if you want it on. Remember to set search.EP back to TRUE first. Next optimize thr_isr (self-regulation). Finally, restrict the maximum number of regulators if necessary (regmax).
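To make the first step of this workflow concrete, a rough sketch of such a permissive exploratory run could look as follows, using the objects defined in the Examples section below. The parameter values here are illustrative assumptions chosen to switch the filters off or make them maximally permissive, not recommendations:

## Not run: 
# Quick first pass: cheap bootstrap settings, all pruning filters inactivated
first_pass=TDCOR(dataset=LR_dataset,l_genes=l_genes,TPI=TPI10,DPI=DPI15,times=times,
n0=100,n1=1,             # fast runs while tuning
thr_cor=0,               # correlation filter off
thr_ind1=0,thr_ind2=1e6, # ID filters effectively off
thr_overlap=0,           # overlap filter off
thrpTPI=1,thrpDPI=1,     # triangle/diamond filters least stringent
search.EP=FALSE)         # MRST search off
## End(Not run)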
Value The TDCOR main function returns a list containing 7 elements: input A list containing the input parameters (as a reminder). intermediate A list containing three intermediate matrices. mat_cor is the matrix that stores the correlations, mat_isr stores the indices of self-regulation and mat_overlap contains the indices of overlap. network A matrix containing the network. The element [i,j] of this matrix contains the bootstrap value for the edge "gene j to gene i". The sign indicates the sign of the predicted interaction. ID A matrix containing the computed indices of directness (ID). The element [i,j] contains the ID for the edge "gene j to gene i". delay A matrix containing the computed time shifts. The element [i,j] of this matrix contains the estimated time shift between the profile of gene j and the profile of gene i. EP A vector containing the bootstrap values for the MRST predictions. predictions The edge predictions in the form of a table. The columns are organized in the following order: Regulator name, Type of interaction (+ or -), Target name, Bootstrap, Index of Directness, Estimated time shift between the target and regulator profiles. The table of predictions (without header) and the input parameters are printed at the end of the run in two separate text files located in the current R working directory (if you are not sure which directory this is, use the command getwd()). Note For a parameter to be involved in the bootstrapping process, one must feed the function a vector containing two values as input. These two values are respectively the lower and upper boundaries of the bootstrapping interval. If one chooses not to use a parameter for bootstrapping, one can either feed the function an input vector containing the same value twice, or only one value. Author(s) <NAME> <<EMAIL>> References Lavenus et al., 2015, The Plant Cell See Also See also CalculateDPI, CalculateTPI, UpdateDPI, UpdateTPI, TDCor-package. Examples ## Not run: # Load the lateral root transcriptomic dataset data(LR_dataset) # Load the vectors of gene codes, gene names and prior data(l_genes) data(l_names) data(l_prior) # Load the vector of time points for the LR_dataset data(times) # Generate the DPI databases DPI15=CalculateDPI(dataset=LR_dataset,l_genes=l_genes,l_prior=l_prior, times=times,time_step=1,N=10000,ks_int=c(0.5,3),kd_int=c(0.5,3),delta_int=c(0.5,3), noise=0.15,delay=3) # Generate the TPI databases TPI10=CalculateTPI(dataset=LR_dataset,l_genes=l_genes, l_prior=l_prior,times=times,time_step=1,N=10000,ks_int=c(0.5,3), kd_int=c(0.5,3),delta_int=c(0.5,3),noise=0.1,delay=3) # Check/update if necessary the databases (Not necessary here though. # This is just to illustrate how it would work.) TPI10=UpdateTPI(TPI10,LR_dataset,l_genes,l_prior) DPI15=UpdateDPI(DPI15,LR_dataset,l_genes,l_prior) ### Choose your TDCOR parameters ### # Parameters for time shift estimation # and filter on time shift value ptime_step=1 ptol=0.13 pdelayspan=12 pdelaymax=c(2.5,3.5) pdelaymin=0 # Parameter of the correlation filter pthr_cor=c(0.65,0.8) # Parameters of the ID filter pdelay=3 pthr_ind1=0.65 pthr_ind2=3.5 # Parameter of the overlap filter pthr_overlap=c(0.4,0.6) # Parameters of the triangle and diamond filters pthrpTPI=c(0.55,0.8) pthrpDPI=c(0.65,0.8) pTPI=TPI10 pDPI=DPI15 # Parameter for identification of self-regulations pthr_isr=c(4,6) # Parameters for MRST identification pMinTarNumber=5 pMinProp=0.6 # Max number of regulators pregmax=5 # Bootstrap parameters pn0=1000 pn1=10 # Name of the file to print network in poutfile_name="TDCor_output.txt" ### Reconstruct the network ### tdcor_out= TDCOR(dataset=LR_dataset,l_genes=l_genes,l_names=l_names,n0=pn0,n1=pn1, l_prior=l_prior,thr_ind1=pthr_ind1,thr_ind2=pthr_ind2,regmax=pregmax,thr_cor=pthr_cor, delayspan=pdelayspan,delaymax=pdelaymax,delaymin=pdelaymin,delay=pdelay,thrpTPI=pthrpTPI, thrpDPI=pthrpDPI,TPI=pTPI,DPI=pDPI,thr_isr=pthr_isr,time_step=ptime_step, thr_overlap=pthr_overlap,tol=ptol,MinProp=pMinProp,MinTarNumber=pMinTarNumber, outfile_name=poutfile_name) ## End(Not run) TF Table of 1834 Arabidopsis Transcription factors Description TF is a dataframe with two columns. The first column contains the AGI gene codes of 1834 genes encoding Arabidopsis transcription factors. The second column contains the associated gene names. Usage data("TF") Source Data published on the Agris database website (http://arabidopsis.med.ohio-state.edu/AtTFDB/).
References Davuluri et al. (2003), AGRIS: Arabidopsis Gene Regulatory Information Server, an information resource of Arabidopsis cis-regulatory elements and transcription factors, BMC Bioinformatics, 4:25 Examples # Load the database data(TF) # Obtain the transcription factors for which data is available in the LR dataset # i.e. present on ATH1 chip and differentially expressed. data(LR_dataset) clean.at(LR_dataset,TF[,1]) times The times vector to use with the lateral root dataset Description Contains the times (in hours) at which the samples were collected to generate the Lateral Root transcriptomic dataset (data(LR_dataset)). Usage data("times") Examples # Load the vector associated with the LR dataset data(times) # Have a look at it times UpdateDPI Update or check the DPI database Description UpdateDPI analyzes the DPI database and adds new entries into it if it does not contain all the necessary data for reconstructing the network with the genes listed in the vector l_genes. Usage UpdateDPI(DPI,dataset,l_genes, l_prior) Arguments DPI The DPI database to update or check before reconstructing the network. dataset Numerical matrix storing the transcriptomic data. The rows of this matrix must be named by gene codes (like AGI gene codes for Arabidopsis data). l_genes A character vector containing the gene codes of the genes we want to reconstruct the network with. l_prior A numerical vector containing the prior information on the genes included in the network reconstruction. By defining the l_prior vector, the user defines which genes should be regarded as positive regulators, which others as negative regulators and which can only be targets. The prior code is defined as follows: -1 for negative regulator; 0 for non-regulator (target only); 1 for positive regulator; 2 for both positive and negative regulator. The i-th element of the vector is the prior to associate to the i-th gene in l_genes. Value UpdateDPI returns an updated DPI database containing data for at least all the genes in l_genes whose associated prior is non-zero. Author(s) <NAME> <<EMAIL>> See Also See also CalculateDPI. Examples ## Not run: # Load the Lateral root transcriptomic dataset data(LR_dataset) # Load the vector of gene codes, gene names and prior data(l_genes) data(l_names) data(l_prior) # Load the vector of time points for the LR_dataset data(times) # Build a very small DPI database (3 genes) DPI_example=CalculateDPI(dataset=LR_dataset,l_genes=l_genes[4:6],l_prior=l_prior[4:6], times=times,time_step=1,N=5000,ks_int=c(0.5,3),kd_int=c(0.5,3),delta_int=c(0.5,3), noise=0.15,delay=3) # Add one gene in the database DPI_example=UpdateDPI(DPI_example,dataset=LR_dataset,l_genes[4:7],l_prior[4:7]) ## End(Not run) UpdateTPI Update or check the TPI database Description UpdateTPI analyzes the TPI database and adds new entries into it if it does not contain all the necessary data for reconstructing the network with the genes listed in the vector l_genes. Usage UpdateTPI(TPI, dataset, l_genes, l_prior) Arguments TPI The TPI database to update or check before reconstructing the network. dataset Numerical matrix storing the transcriptomic data. The rows of this matrix must be named by gene codes (like AGI gene codes for Arabidopsis data). l_genes A character vector containing the gene codes of the genes we want to reconstruct the network with. l_prior A numerical vector containing the prior information on the genes included in the network reconstruction.
By defining the l_prior vector, the user defines which genes should be regarded as positive regulators, which others as negative regulators and which can only be targets. The prior code is defined as follows: -1 for negative regulator; 0 for non-regulator (target only); 1 for positive regulator; 2 for both positive and negative regulator. The i-th element of the vector is the prior to associate to the i-th gene in l_genes. Value UpdateTPI returns an updated TPI database containing data for at least all the genes in l_genes whose associated prior is non-zero. Author(s) <NAME> <<EMAIL>> See Also See also CalculateTPI. Examples ## Not run: # Load the Lateral root transcriptomic dataset data(LR_dataset) # Load the vector of gene codes, gene names and prior data(l_genes) data(l_names) data(l_prior) # Load the vector of time points for the LR_dataset data(times) # Build a very small TPI database (3 genes) TPI_example=CalculateTPI(dataset=LR_dataset,l_genes=l_genes[4:6], l_prior=l_prior[4:6],times=times,time_step=1,N=5000,ks_int=c(0.5,3), kd_int=c(0.5,3),delta_int=c(0.5,3),noise=0.1,delay=3) # Add one gene in the database TPI_example=UpdateTPI(TPI_example,dataset=LR_dataset,l_genes[4:7],l_prior[4:7]) ## End(Not run)
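Since only genes flagged as regulators by the prior need database entries, one can quickly list the genes that the TPI and DPI databases must cover. A small sketch using the documented l_genes/l_prior pairing:

# Regulators are the genes whose prior is non-zero; these are the entries
# that UpdateTPI()/UpdateDPI() must ensure exist before running TDCOR
data(l_genes)
data(l_prior)
regulators=l_genes[l_prior!=0]
regulators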
cairo-blur
rust
Rust
Crate cairo_blur === cairo-blur --- Apply a Gaussian blur to your Cairo ImageSurface. ``` use cairo::Format; let radius = 15; let mut surf = cairo::ImageSurface::create(Format::ARgb32, 200, 100).expect("Couldn’t create surface"); cairo_blur::blur_image_surface(&mut surf, radius); ``` The code in this crate is a translation of the code here: https://www.cairographics.org/cookbook/blur.c/ Functions --- * blur_image_surface: Blur a cairo image surface Function cairo_blur::blur_image_surface === ``` pub fn blur_image_surface(surface: &mut ImageSurface, radius: i32) ``` Blur a cairo image surface
abnf_parsec
hex
Elixir
AbnfParsec === Generates a parser from an ABNF definition - text (`:abnf`) or file path (`:abnf_file`). An entry rule can be defined by `:parse`. If defined, a `parse/1` function and a `parse!/1` function will be generated with the entry rule. By default, every chunk defined by a rule is wrapped (in a list) and tagged by the rulename. Use the options to `:untag`, `:unwrap` or both (`:unbox`). Parsed chunks (rules) can be discarded with `:ignore`. Transformations (`:map`, `:reduce`, `:replace`) can be applied by passing in a `:transform` map with keys being rulenames and values being 2-tuples of the transformation type (`:map`, `:reduce`, `:replace`) and an mfa tuple (for `:map` and `:reduce`) or a literal value (for `:replace`). Example usage: ``` defmodule JsonParser do use AbnfParsec, abnf_file: "test/fixture/json.abnf", parse: :json_text, transform: %{ "string" => {:reduce, {List, :to_string, []}}, "int" => [{:reduce, {List, :to_string, []}}, {:map, {String, :to_integer, []}}], "frac" => {:reduce, {List, :to_string, []}}, "null" => {:replace, nil}, "true" => {:replace, true}, "false" => {:replace, false} }, untag: ["member"], unwrap: ["int", "frac"], unbox: [ "JSON-text", "null", "true", "false", "digit1-9", "decimal-point", "escape", "unescaped", "char" ], ignore: [ "name-separator", "value-separator", "quotation-mark", "begin-object", "end-object", "begin-array", "end-array" ] end json = ~s| {"a": {"b": 1.2, "c": [true]}, "d": null, "e": "e\te"} | JsonParser.json_text(json) # or JsonParser.parse(json) # => {:ok, ...} JsonParser.parse!(json) # => [ object: [ [ string: ["a"], value: [ object: [ [string: ["b"], value: [number: [int: 1, frac: ".2"]]], [string: ["c"], value: [array: [value: [true]]]] ] ] ], [string: ["d"], value: [nil]], [string: ["e"], value: [string: ["e\te"]]] ] ] ``` Summary === [Types](#types) --- [rulename()](#t:rulename/0) [rulenames()](#t:rulenames/0) [transformation()](#t:transformation/0) [transformations()](#t:transformations/0) [Functions](#functions) --- [__using__(opts)](#__using__/1) All rules by default are wrapped and tagged. See NimbleParsec for more details. Types === Functions === AbnfParsec.LeftoverTokenError exception === AbnfParsec.Parser === Abnf Parser. Summary === [Functions](#functions) --- [alternation(binary, opts \\ [])](#alternation/2) Parses the given `binary` as alternation. [char_val(binary, opts \\ [])](#char_val/2) Parses the given `binary` as char_val. [comment(binary, opts \\ [])](#comment/2) Parses the given `binary` as comment. [concatenation(binary, opts \\ [])](#concatenation/2) Parses the given `binary` as concatenation. [core_rule(binary, opts \\ [])](#core_rule/2) Parses the given `binary` as core_rule. [element(binary, opts \\ [])](#element/2) Parses the given `binary` as element. [exception(binary, opts \\ [])](#exception/2) Extension: Used in RFC3501 [group(binary, opts \\ [])](#group/2) Parses the given `binary` as group. [normalize(text)](#normalize/1) [num_val(binary, opts \\ [])](#num_val/2) Parses the given `binary` as num_val. [option(binary, opts \\ [])](#option/2) Parses the given `binary` as option. [parse!(text)](#parse!/1) [parse(text)](#parse/1) [prose_val(binary, opts \\ [])](#prose_val/2) Parses the given `binary` as prose_val. [repetition(binary, opts \\ [])](#repetition/2) Parses the given `binary` as repetition. [rule(binary, opts \\ [])](#rule/2) Parses the given `binary` as rule.
[rulelist(binary, opts \\ [])](#rulelist/2) Parses the given `binary` as rulelist. [rulename(binary, opts \\ [])](#rulename/2) Parses the given `binary` as rulename. Functions === AbnfParsec.UnexpectedTokenError exception ===
whitestorm
npm
JavaScript
*Framework for developing 3D web apps with physics.* --- FEATURES --- * **Simple shape crafting** — We use a JSON-like structure for creating objects from inputted data and adding them to your 3D world. * **Physics with WebWorkers** — We use the [Physi.js](https://github.com/chandlerprall/Physijs/blob/master/physi.js) library for calculating physics of 3D shapes with **WebWorkers technology** that allows for rendering and calculating physics in multiple threads. * **Plugin system** — Our framework supports *plugins & components* made by other users. You need to include them after whitestorm.js and follow the provided instructions. * **Automatization of rendering** — Our framework does rendering automatically and doesn't need to be called manually. Functionality like the `resize` function can be called automatically by setting additional parameters such as `autoresize: true`. * **ES6 Features** - Our framework is written using the latest ECMAScript 6 and ECMAScript 7 (beta) features and compiled with [Babel](https://babeljs.io/). * **Softbodies** - WhitestormJS is the only engine (except native ammo.js) that supports softbodies. PLAYGROUND 🚀 --- GAME EXAMPLE 🎮 --- INSTALLATION ⤬ USAGE --- #### NODE ``` npm install whitestormjs ``` #### BROWSER Include a script tag linking the [WhitestormJS](https://cdn.jsdelivr.net/whitestormjs/latest/whitestorm.min.js) library in your `head` or after your `body`: ``` <script src="{path_to_lib}/whitestorm.js"></script> ``` After adding these libraries, you can configure your app: ```
const world = new WHS.World({
  stats: "fps", // fps, ms, mb or false if not needed.
  autoresize: true,

  gravity: { // Physics gravity.
    x: 0,
    y: -100,
    z: 0
  },

  camera: {
    z: 50 // Move camera.
  }
});

const sphere = new WHS.Sphere({ // Create sphere object.
  geometry: {
    radius: 3
  },

  mass: 10, // Mass of physics object.

  material: {
    color: 0xffffff,
    kind: 'basic'
  },

  pos: {
    x: 0,
    y: 100,
    z: 0
  }
});

sphere.addTo(world);
sphere.getNative(); // Returns THREE.Mesh of this object.

world.start(); // Start animations and physics simulation.
``` [Examples](http://192.241.128.187/current/examples/): --- #### 👾 BASIC: * [Basic / Hello world](http://192.241.128.187/current/examples/basic/helloworld/) (Basic "Hello world!" example.) * [Basic / Model](http://192.241.128.187/current/examples/basic/model/) (Basic model example.) * [Basic / Debugging](http://192.241.128.187/current/examples/basic/debugging/) (Object's debug example.) * [Basic / Extending API](http://192.241.128.187/current/examples/basic/extending/) (Extending api example.) * [Basic / Softbody](http://192.241.128.187/current/examples/basic/softbody/) (Basic softbody implementation.) * [Basic / Three.js](http://192.241.128.187/current/examples/basic/threejs/) (Importing three.js scene to whitestormjs core.) #### 💎 DESIGN: * [Design / Saturn](http://192.241.128.187/current/examples/design/saturn/) (Saturn planet example from: <http://codepen.io/Yakudoo/pen/qbygaJ>) * [Design / Easter](http://192.241.128.187/current/examples/design/easter/) (Easter rabbit with easter eggs.) * [Design / Points](http://192.241.128.187/current/examples/design/points/) (Using WHS.Points to make a point cloud shaped in cube.) #### 🏂 FIRST-PERSON: * [FPS / Shooter](http://192.241.128.187/current/examples/fps/shooter/) (First person example with Wagner effects, terrain,
and fog) * [FPS / Fog](http://192.241.128.187/current/examples/fps/fog/) (First person game with animated objects) #### 🎳 PHYSICS: * [Physics / Dominos](http://192.241.128.187/current/examples/physics/domino/) (Physics example with dominos.) #### 🚀 PERFORMANCE: * [Performance / Sticks](http://192.241.128.187/current/examples/performance/sticks/) (Collision performance of 320 basic box objects.) --- #### 📈 [Changelog](https://github.com/WhitestormJS/whitestorm.js/blob/master/CHANGELOG.md) | 📖 [Documentation](http://whitestormjs.xyz/) | 🎮 [Playground](http://whitestormjs.xyz/playground/) --- [Contributors](https://github.com/WhitestormJS/whitestorm.js/graphs/contributors): --- Readme --- ### Keywords * three.js * cannon.js * webgl * wagner * api * 3d * web * javascript
trillium-sessions
rust
Rust
Crate trillium_sessions === Trillium sessions --- Trillium sessions is built on top of `async-session`. Sessions allow trillium to securely attach data to a browser session, allowing for retrieval and modification of this data within trillium on subsequent visits. Session data is generally only retained for the duration of a browser session. Trillium’s session implementation provides guest sessions by default, meaning that all web requests to a session-enabled trillium host will have a cookie attached, whether or not there is anything stored in that client’s session yet. ### Stores Although this crate provides two bundled session stores, it is highly recommended that trillium applications use an external-datastore-backed session storage. For a list of currently available session stores, see the documentation for async-session. ### Security Although each session store may have different security implications, the general approach of trillium’s session system is as follows: On each request, trillium checks the cookie configurable as `cookie_name` on the handler. #### If no cookie is found: A cryptographically random cookie value is generated. A cookie is set on the outbound response and signed with an HKDF key derived from the `secret` provided on creation of the SessionHandler. The configurable session store uses a SHA256 digest of the cookie value and stores the session along with a potential expiry. #### If a cookie is found: The HKDF-derived signing key is used to verify the cookie value’s signature. If it verifies, it is then passed to the session store to retrieve a Session. For most session stores, this will involve taking a SHA256 digest of the cookie value and retrieving a serialized Session from an external datastore based on that digest. #### Expiry In addition to setting an expiry on the session cookie, trillium sessions include the same expiry in their serialization format. If an adversary were able to tamper with the expiry of a cookie, trillium sessions would still check the expiry on the contained session before using it. #### If anything goes wrong with the above process If there are any failures in the above session retrieval process, a new empty session is generated for the request, which proceeds through the application as normal. ### Stale/expired session cleanup Any session store other than the cookie store will accumulate stale sessions.
Although the trillium session handler ensures that they will not be used as valid sessions, for most session stores it is the trillium application’s responsibility to call cleanup on the session store if it requires it. ``` use trillium::Conn; use trillium_cookies::{CookiesHandler, cookie::Cookie}; use trillium_sessions::{MemoryStore, SessionConnExt, SessionHandler}; let session_secret = std::env::var("TRILLIUM_SESSION_SECRET").unwrap(); let handler = ( CookiesHandler::new(), SessionHandler::new(MemoryStore::new(), session_secret.as_bytes()), |conn: Conn| async move { let count: usize = conn.session().get("count").unwrap_or_default(); conn.with_session("count", count + 1) .ok(format!("count: {}", count)) }, ); use trillium_testing::prelude::*; let mut conn = get("/").on(&handler); assert_ok!(&mut conn, "count: 0"); let set_cookie_header = conn.headers_mut().get_str("set-cookie").unwrap(); let cookie = Cookie::parse_encoded(set_cookie_header).unwrap(); let make_request = || get("/") .with_request_header("cookie", format!("{}={}", cookie.name(), cookie.value())) .on(&handler); assert_ok!(make_request(), "count: 1"); assert_ok!(make_request(), "count: 2"); assert_ok!(make_request(), "count: 3"); assert_ok!(make_request(), "count: 4"); ``` Structs --- * CookieStore: A session store that serializes the entire session into a Cookie. * MemoryStore: in-memory session store * Session: The main session type. * SessionHandler: Handler to enable sessions. Traits --- * SessionConnExt: extension trait to add session support to `Conn` Functions --- * sessions: Alias for `SessionHandler::new` Struct trillium_sessions::CookieStore === ``` pub struct CookieStore; ``` A session store that serializes the entire session into a Cookie. ***This is not recommended for most production deployments.*** --- This implementation uses `bincode` to serialize the Session to decrease the size of the cookie. Note: There is a maximum of 4093 cookie bytes allowed *per domain*, so the cookie store is limited in capacity. **Note:** Currently, the data in the cookie is only signed, but *not encrypted*. If the contained session data is sensitive and should not be read by a user, the cookie store is not an appropriate choice. Expiry: `SessionStore::destroy_session` and `SessionStore::clear_store` are not meaningful for the CookieStore, and are no-ops. Destroying a session must be done at the cookie setting level, which is outside of the scope of this crate. Implementations --- ### impl CookieStore #### pub fn new() -> CookieStore constructs a new CookieStore Trait Implementations --- ### impl Clone for CookieStore #### fn clone(&self) -> CookieStore Returns a copy of the value. #### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. ### impl Debug for CookieStore #### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), Error> Formats the value using the given formatter.
### impl SessionStore for CookieStore #### async fn load_session(&self, cookie_value: String) -> Result<Option<Session>, Error> Get a session from the storage backend. #### async fn store_session(&self, session: Session) -> Result<Option<String>, Error> Store a session on the storage backend. #### async fn destroy_session(&self, _session: Session) -> Result<(), Error> Remove a session from the session store. #### async fn clear_store(&self) -> Result<(), Error> Empties the entire store, destroying all sessions. ### impl Copy for CookieStore Struct trillium_sessions::MemoryStore === ``` pub struct MemoryStore { /* private fields */ } ``` in-memory session store --- Because there is no external persistence, this session store is ephemeral and will be cleared on server restart. ***DO NOT USE THIS IN A PRODUCTION DEPLOYMENT.*** --- Implementations --- ### impl MemoryStore #### pub fn new() -> MemoryStore Create a new instance of MemoryStore #### pub async fn cleanup(&self) -> Result<(), Error> Performs session cleanup. This should be run on an intermittent basis if this store is run for long enough that memory accumulation is a concern. #### pub async fn count(&self) -> usize returns the number of elements in the memory store ##### Example ``` let mut store = MemoryStore::new(); assert_eq!(store.count().await, 0); store.store_session(Session::new()).await?; assert_eq!(store.count().await, 1); ``` Trait Implementations --- ### impl Clone for MemoryStore #### fn clone(&self) -> MemoryStore Returns a copy of the value. #### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. ### impl Debug for MemoryStore #### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), Error> Formats the value using the given formatter. ### impl SessionStore for MemoryStore #### async fn load_session(&self, cookie_value: String) -> Result<Option<Session>, Error> Get a session from the storage backend. #### async fn store_session(&self, session: Session) -> Result<Option<String>, Error> Store a session on the storage backend. #### async fn destroy_session(&self, session: Session) -> Result<(), Error> Remove a session from the session store. #### async fn clear_store(&self) -> Result<(), Error> Empties the entire store, destroying all sessions. Struct trillium_sessions::Session === ``` pub struct Session { /* private fields */ } ``` The main session type. --- ### Cloning and Serialization The `cookie_value` field is not cloned or serialized, and it can only be read through `into_cookie_value`.
The intent of this field is that it is set either by initialization or by a session store, and read exactly once in order to set the cookie value. ### Change tracking Session tracks whether any of its inner data was changed since it was last serialized. Any session store that does not undergo a serialization-deserialization cycle must call `Session::reset_data_changed` in order to reset the change tracker on an individual record. #### Change tracking example ``` let mut session = Session::new(); assert!(!session.data_changed()); session.insert("key", 1)?; assert!(session.data_changed()); session.reset_data_changed(); assert_eq!(session.get::<usize>("key").unwrap(), 1); assert!(!session.data_changed()); session.insert("key", 2)?; assert!(session.data_changed()); assert_eq!(session.get::<usize>("key").unwrap(), 2); session.insert("key", 1)?; assert!(session.data_changed(), "reverting the data still counts as a change"); session.reset_data_changed(); assert!(!session.data_changed()); session.remove("nonexistent key"); assert!(!session.data_changed()); session.remove("key"); assert!(session.data_changed()); ``` Implementations --- ### impl Session #### pub fn new() -> Session Create a new session. Generates a random id and matching cookie value. Does not set an expiry by default. ##### Example ``` let session = Session::new(); assert_eq!(None, session.expiry()); assert!(session.into_cookie_value().is_some()); ``` #### pub fn id_from_cookie_value(string: &str) -> Result<String, DecodeError> applies a cryptographic hash function on a cookie value returned by `Session::into_cookie_value` to obtain the session id for that cookie. Returns an error if the cookie format is not recognized. ##### Example ``` let session = Session::new(); let id = session.id().to_string(); let cookie_value = session.into_cookie_value().unwrap(); assert_eq!(id, Session::id_from_cookie_value(&cookie_value)?); ``` #### pub fn destroy(&mut self) mark this session for destruction. the actual session record is not destroyed until the end of this response cycle. ##### Example ``` let mut session = Session::new(); assert!(!session.is_destroyed()); session.destroy(); assert!(session.is_destroyed()); ``` #### pub fn is_destroyed(&self) -> bool returns true if this session is marked for destruction ##### Example ``` let mut session = Session::new(); assert!(!session.is_destroyed()); session.destroy(); assert!(session.is_destroyed()); ``` #### pub fn id(&self) -> &str Gets the session id ##### Example ``` let session = Session::new(); let id = session.id().to_owned(); let cookie_value = session.into_cookie_value().unwrap(); assert_eq!(id, Session::id_from_cookie_value(&cookie_value)?); ``` #### pub fn insert(&mut self, key: &str, value: impl Serialize) -> Result<(), Error> inserts a serializable value into the session hashmap. returns an error if the serialization was unsuccessful.
##### Example ``` #[derive(Serialize, Deserialize)] struct User { name: String, legs: u8 } let mut session = Session::new(); session.insert("user", User { name: "chashu".into(), legs: 4 }).expect("serializable"); assert_eq!(r#"{"name":"chashu","legs":4}"#, session.get_raw("user").unwrap()); ``` #### pub fn insert_raw(&mut self, key: &str, value: String) inserts a string into the session hashmap ##### Example ``` let mut session = Session::new(); session.insert_raw("ten", "10".to_string()); let ten: usize = session.get("ten").unwrap(); assert_eq!(ten, 10); ``` #### pub fn get<T>(&self, key: &str) -> Option<T> where T: DeserializeOwned deserializes a type T out of the session hashmap ##### Example ``` let mut session = Session::new(); session.insert("key", vec![1, 2, 3]); let numbers: Vec<usize> = session.get("key").unwrap(); assert_eq!(vec![1, 2, 3], numbers); ``` #### pub fn get_raw(&self, key: &str) -> Option<String> returns the String value contained in the session hashmap ##### Example ``` let mut session = Session::new(); session.insert("key", vec![1, 2, 3]); assert_eq!("[1,2,3]", session.get_raw("key").unwrap()); ``` #### pub fn remove(&mut self, key: &str) removes an entry from the session hashmap ##### Example ``` let mut session = Session::new(); session.insert("key", "value"); session.remove("key"); assert!(session.get_raw("key").is_none()); assert_eq!(session.len(), 0); ``` #### pub fn len(&self) -> usize returns the number of elements in the session hashmap ##### Example ``` let mut session = Session::new(); assert_eq!(session.len(), 0); session.insert("key", 0); assert_eq!(session.len(), 1); ``` #### pub fn regenerate(&mut self) Generates a new id and cookie for this session ##### Example ``` let mut session = Session::new(); let old_id = session.id().to_string(); session.regenerate(); assert!(session.id() != &old_id); let new_id = session.id().to_string(); let cookie_value = session.into_cookie_value().unwrap(); assert_eq!(new_id, Session::id_from_cookie_value(&cookie_value)?); ``` #### pub fn set_cookie_value(&mut self, cookie_value: String) sets the cookie value that this session will use to serialize itself. this should only be called by cookie stores. any other uses of this method will result in the cookie not getting correctly deserialized on subsequent requests. ##### Example ``` let mut session = Session::new(); session.set_cookie_value("hello".to_owned()); let cookie_value = session.into_cookie_value().unwrap(); assert_eq!(cookie_value, "hello".to_owned()); ``` #### pub fn expiry(&self) -> Option<&DateTime<Utc>> returns the expiry timestamp of this session, if there is one ##### Example ``` let mut session = Session::new(); assert_eq!(None, session.expiry()); session.expire_in(std::time::Duration::from_secs(1)); assert!(session.expiry().is_some()); ``` #### pub fn set_expiry(&mut self, expiry: DateTime<Utc>) assigns an expiry timestamp to this session ##### Example ``` let mut session = Session::new(); assert_eq!(None, session.expiry()); session.set_expiry(chrono::Utc::now()); assert!(session.expiry().is_some()); ``` #### pub fn expire_in(&mut self, ttl: Duration) assigns the expiry timestamp to a duration from the current time. ##### Example ``` let mut session = Session::new(); assert_eq!(None, session.expiry()); session.expire_in(std::time::Duration::from_secs(1)); assert!(session.expiry().is_some()); ``` #### pub fn is_expired(&self) -> bool predicate function to determine if this session is expired. returns false if there is no expiry set, or if it is in the future.
##### Example ``` let mut session = Session::new(); assert_eq!(None, session.expiry()); assert!(!session.is_expired()); session.expire_in(Duration::from_secs(1)); assert!(!session.is_expired()); task::sleep(Duration::from_secs(2)).await; assert!(session.is_expired()); ``` #### pub fn validate(self) -> Option<Session> Ensures that this session is not expired. Returns None if it is expired. ##### Example ``` let session = Session::new(); let mut session = session.validate().unwrap(); session.expire_in(Duration::from_secs(1)); let session = session.validate().unwrap(); task::sleep(Duration::from_secs(2)).await; assert_eq!(None, session.validate()); ``` #### pub fn data_changed(&self) -> bool Checks if the data has been modified. This is based on the implementation of `PartialEq` for the inner data type. ##### Example ``` let mut session = Session::new(); assert!(!session.data_changed(), "new session is not changed"); session.insert("key", 1); assert!(session.data_changed()); session.reset_data_changed(); assert!(!session.data_changed()); session.remove("key"); assert!(session.data_changed()); ``` #### pub fn reset_data_changed(&self) Resets `data_changed` dirty tracking. This is unnecessary for any session store that serializes the data to a string on storage. ##### Example ``` let mut session = Session::new(); assert!(!session.data_changed(), "new session is not changed"); session.insert("key", 1); assert!(session.data_changed()); session.reset_data_changed(); assert!(!session.data_changed()); session.remove("key"); assert!(session.data_changed()); ``` #### pub fn expires_in(&self) -> Option<Duration> Duration from now to the expiry time of this session. ##### Example ``` let mut session = Session::new(); session.expire_in(Duration::from_secs(123)); let expires_in = session.expires_in().unwrap(); assert!(123 - expires_in.as_secs() < 2); ``` #### pub fn into_cookie_value(self) -> Option<String> takes the cookie value and consumes this session. this is generally only performed by the session store ##### Example ``` let mut session = Session::new(); session.set_cookie_value("hello".to_owned()); let cookie_value = session.into_cookie_value().unwrap(); assert_eq!(cookie_value, "hello".to_owned()); ``` Trait Implementations --- ### impl Clone for Session #### fn clone(&self) -> Session Returns a copy of the value. #### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. ### impl Debug for Session #### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), Error> Formats the value using the given formatter. ### impl Default for Session #### fn default() -> Session Returns the “default value” for a type. ### impl<'de> Deserialize<'de> for Session #### fn deserialize<__D>( __deserializer: __D ) -> Result<Session, <__D as Deserializer<'de>>::Error> where __D: Deserializer<'de> Deserialize this value from the given Serde deserializer. ### impl PartialEq for Session #### fn eq(&self, other: &Session) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. #### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason. ### impl Serialize for Session #### fn serialize<__S>( &self, __serializer: __S ) -> Result<<__S as Serializer>::Ok, <__S as Serializer>::Error> where __S: Serializer Serialize this value into the given Serde serializer.
Struct trillium_sessions::SessionHandler === ``` pub struct SessionHandler<Store> { /* private fields */ } ``` Handler to enable sessions. --- See crate-level docs for an overview of this crate’s approach to sessions and security. Implementations --- ### impl<Store: SessionStore> SessionHandler<Store> #### pub fn new(store: Store, secret: impl AsRef<[u8]>) -> Self Constructs a SessionHandler from the given `async_session::SessionStore` and secret. The `secret` MUST be at least 32 bytes long, and MUST be cryptographically random to be secure. It is recommended to retrieve this at runtime from the environment instead of compiling it into your application. ##### Panics SessionHandler::new will panic if the secret is fewer than 32 bytes. ##### Defaults The defaults for SessionHandler are: * cookie path: “/” * cookie name: “trillium.sid” * session ttl: one day * same site: strict * save unchanged: enabled * older secrets: none ##### Customization Although the above defaults are appropriate for most applications, they can be overridden. Please be careful changing these settings, as they can weaken your application’s security: ``` // this logic will be unique to your deployment let secrets_var = std::env::var("TRILLIUM_SESSION_SECRETS").unwrap(); let session_secrets = secrets_var.split(' ').collect::<Vec<_>>(); let handler = ( CookiesHandler::new(), SessionHandler::new(MemoryStore::new(), session_secrets[0]) .with_cookie_name("custom.cookie.name") .with_cookie_path("/some/path") .with_cookie_domain("trillium.rs") .with_same_site_policy(SameSite::Strict) .with_session_ttl(Some(Duration::from_secs(1))) .with_older_secrets(&session_secrets[1..]) .without_save_unchanged() ); ``` #### pub fn with_cookie_path(self, cookie_path: impl AsRef<str>) -> Self Sets a cookie path for this session handler. The default for this value is “/” #### pub fn with_session_ttl(self, session_ttl: Option<Duration>) -> Self Sets a session ttl.
This will be used both for the cookie expiry and also for the session-internal expiry. The default for this value is one day. Set this to None to not set a cookie or session expiry. This is not recommended.
#### pub fn with_cookie_name(self, cookie_name: impl AsRef<str>) -> Self
Sets the name of the cookie that the session is stored with or in. If you are running multiple trillium applications on the same domain, you will need different values for each application. The default value is “trillium.sid”.
#### pub fn without_save_unchanged(self) -> Self
Disables the `save_unchanged` setting. When `save_unchanged` is enabled, a session cookie will always be set. With `save_unchanged` disabled, the session data must be modified from the `Default` value in order for it to save. If a session already exists and its data is unmodified in the course of a request, the session will only be persisted if `save_unchanged` is enabled.
#### pub fn with_same_site_policy(self, policy: SameSite) -> Self
Sets the same site policy for the session cookie. Defaults to SameSite::Strict. See incrementally better cookies for more information about this setting.
#### pub fn with_cookie_domain(self, cookie_domain: impl AsRef<str>) -> Self
Sets the domain of the cookie.
#### pub fn with_older_secrets(self, secrets: &[impl AsRef<[u8]>]) -> Self
Sets optional older signing keys that will not be used to sign cookies, but can be used to validate previously signed cookies.
Trait Implementations
---
### impl<Store: SessionStore> Debug for SessionHandler<Store>
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl<Store: SessionStore> Handler for SessionHandler<Store>
#### fn run<'life0, 'async_trait>( &'life0 self, conn: Conn ) -> Pin<Box<dyn Future<Output = Conn> + Send + 'async_trait>>where Self: 'async_trait, 'life0: 'async_trait,
Executes this handler, performing any modifications to the Conn that are desired.
#### fn before_send<'life0, 'async_trait>( &'life0 self, conn: Conn ) -> Pin<Box<dyn Future<Output = Conn> + Send + 'async_trait>>where Self: 'async_trait, 'life0: 'async_trait,
Performs any final modifications to this conn after all handlers have been run. Although this is a slight deviation from the simple conn->conn->conn chain represented by most Handlers, it provides an easy way for libraries to effectively inject a second handler into a response chain. This is useful for loggers that need to record information both before and after other handlers have run, as well as database transaction handlers and similar library code.
#### fn init<'life0, 'life1, 'async_trait>( &'life0 mut self, _info: &'life1 mut Info ) -> Pin<Box<dyn Future<Output = ()> + Send + 'async_trait, Global>>where 'life0: 'async_trait, 'life1: 'async_trait, Self: 'async_trait,
Performs one-time async set up on a mutable borrow of the Handler before the server starts accepting requests. This allows a Handler to be defined in synchronous code but perform async setup such as establishing a database connection or fetching some state from an external source. This is optional, and chances are high that you do not need this.
#### fn has_upgrade(&self, _upgrade: &Upgrade<BoxedTransport>) -> bool
A predicate function answering the question of whether this Handler would like to take ownership of the negotiated Upgrade. If this returns true, you must implement `Handler::upgrade`.
The first handler that responds true to this will receive ownership of the `trillium::Upgrade` in a subsequent call to `Handler::upgrade`.
#### fn upgrade<'life0, 'async_trait>( &'life0 self, _upgrade: Upgrade<BoxedTransport> ) -> Pin<Box<dyn Future<Output = ()> + Send + 'async_trait, Global>>where 'life0: 'async_trait, Self: 'async_trait,
This will only be called if the handler responds true to `Handler::has_upgrade` and will only be called once for this upgrade. There is no return value, and this function takes exclusive ownership of the underlying transport once this is called. You can downcast the transport to whatever the source transport type is and perform any non-http protocol communication that has been negotiated. You probably don’t want this unless you’re implementing something like websockets. Please note that for many transports such as TcpStreams, dropping the transport (and therefore the Upgrade) will hang up / disconnect.
#### fn name(&self) -> Cow<'static, str>
Customize the name of your handler. This is used in Debug implementations. The default is the type name of this handler.
Auto Trait Implementations
---
### impl<Store> RefUnwindSafe for SessionHandler<Store>where Store: RefUnwindSafe,
### impl<Store> Send for SessionHandler<Store>where Store: Send,
### impl<Store> Sync for SessionHandler<Store>where Store: Sync,
### impl<Store> Unpin for SessionHandler<Store>where Store: Unpin,
### impl<Store> UnwindSafe for SessionHandler<Store>where Store: UnwindSafe,

Trait trillium_sessions::SessionConnExt
===
```
pub trait SessionConnExt {
    // Required methods
    fn with_session(self, key: &str, value: impl Serialize) -> Self;
    fn session(&self) -> &Session;
    fn session_mut(&mut self) -> &mut Session;
}
```
extension trait to add session support to `Conn`
`SessionHandler` **MUST** be called on the conn prior to using any of these functions.
Required Methods
---
#### fn with_session(self, key: &str, value: impl Serialize) -> Self
append a key-value pair to the current session, where the key is a &str and the value is anything serde-serializable.
#### fn session(&self) -> &Session
retrieve a reference to the current session
#### fn session_mut(&mut self) -> &mut Session
retrieve a mutable reference to the current session
Implementations on Foreign Types
---
### impl SessionConnExt for Conn
#### fn session(&self) -> &Session
#### fn with_session(self, key: &str, value: impl Serialize) -> Self
#### fn session_mut(&mut self) -> &mut Session
Implementors
---

Function trillium_sessions::sessions
===
```
pub fn sessions<Store>(
    store: Store,
    secret: impl AsRef<[u8]>
) -> SessionHandler<Store>where
    Store: SessionStore,
```
Alias for `SessionHandler::new`
ambethia-recaptcha
ruby
Ruby
reCAPTCHA
===
Author: <NAME> ([ambethia.com](http://ambethia.com))
Copyright: Copyright © 2007 <NAME>
License: [MIT](http://creativecommons.org/licenses/MIT/)
Info: [ambethia.com/recaptcha](http://ambethia.com/recaptcha)
Git: [github.com/ambethia/recaptcha/tree/master](https://github.com/ambethia/recaptcha/tree/master)
Bugs: [github.com/ambethia/recaptcha/issues](https://github.com/ambethia/recaptcha/issues)

This plugin adds helpers for the [reCAPTCHA API](http://recaptcha.net). In your views you can use the `recaptcha_tags` method to embed the needed javascript, and you can validate in your controllers with `verify_recaptcha`.

You’ll want to add your public and private API keys in the environment variables `RECAPTCHA_PUBLIC_KEY` and `RECAPTCHA_PRIVATE_KEY`, respectively. You could also specify them in `config/environment.rb` if you are so inclined (see below). Exceptions will be raised if you call these methods and the keys can’t be found.

Rails Installation
---
reCAPTCHA for Rails can be installed as a gem:
```
config.gem "ambethia-recaptcha", :lib => "recaptcha/rails", :source => "http://gems.github.com"
```
Or, as a standard rails plugin:
```
script/plugin install git://github.com/ambethia/recaptcha.git
```
Setting up your API Keys
---
There are two ways to set up your reCAPTCHA API keys once you [obtain](http://recaptcha.net/whyrecaptcha.html) a pair. You can pass in your keys as options at runtime, for example:
```
recaptcha_tags :public_key => '<KEY>'
```
and later,
```
verify_recaptcha :private_key => '<KEY>'
```
Or, preferably, you can keep your keys out of your code base by exporting the environment variables mentioned earlier. You might do this in the .profile/rc, or equivalent for the user running your application:
```
export RECAPTCHA_PUBLIC_KEY='<KEY>'
export RECAPTCHA_PRIVATE_KEY='<KEY>'
```
If that’s not your thing, and dropping things into `config/environment.rb` is, you can just do:
```
ENV['RECAPTCHA_PUBLIC_KEY'] = '<KEY>'
ENV['RECAPTCHA_PRIVATE_KEY'] = '<KEY>'
```
`recaptcha_tags`
---
Some of the options available:

`:ssl` Uses a secure (https) connection for the captcha widget (default `false`)
`:noscript` Include <noscript> content (default `true`)
`:display` Takes a hash containing the `theme` and `tabindex` options per the API. (default `nil`)
`:ajax` Render the dynamic AJAX captcha per the API. (default `false`)
`:public_key` Your public API key, takes precedence over the ENV variable (default `nil`)
`:error` Override the error code returned from the reCAPTCHA API (default `nil`)

You can also override the html attributes for the sizes of the generated `textarea` and `iframe` elements, if CSS isn’t your thing. Inspect the source of `recaptcha_tags` to see these options.

`verify_recaptcha`
---
This method returns `true` or `false` after processing the parameters from the reCAPTCHA widget. Why isn’t this a model validation? Because that violates MVC. You can use it like this, or however you like. Passing in the ActiveRecord object is optional; if you do, and the captcha fails to verify, an error will be added to the object for you to use.

Some of the options available:

`:model` Model to set errors on
`:message` Custom error message
`:private_key` Your private API key, takes precedence over the ENV variable (default `nil`)
`:timeout` The number of seconds to wait for reCAPTCHA servers before giving up. (default `3`)
```
respond_to do |format|
  if verify_recaptcha(:model => @post, :message => "Oh! There was an error with reCAPTCHA!") && @post.save
    # ...
  else
    # ...
  end
end
```
TODO
---
* Remove Rails/ActionController dependencies
* Framework agnostic
* Add some helpers to use in before_filter and what not
* Better documentation
packagetrackr
cran
R
Package ‘packagetrackr’
October 14, 2022
Type Package
Title Track R Package Downloads from RStudio's CRAN Mirror
Version 0.1.1
Date 2015-09-22
Author <NAME>
Maintainer <NAME> <<EMAIL>>
Description Allows you to get and cache R package download log files from RStudio's CRAN
mirror for analyzing package usage.
URL http://gitlab.points-of-interest.cc/points-of-interest/packagetrackr
http://cran-logs.rstudio.com/
BugReports http://gitlab.points-of-interest.cc/points-of-interest/packagetrackr/issues
Depends magrittr
Imports rappdirs, utils, httr (>= 1.0.0), dplyr (>= 0.4.3)
License GPL (>= 3)
LazyData TRUE
NeedsCompilation no
Repository CRAN
Date/Publication 2015-09-23 02:48:55

R topics documented:
download_lo... 2
packagetracke... 2
package_download... 2
package_download_cache_di... 3

download_log Download and unzip a package download log .csv
Description
Format is assumed to be as provided by cran-logs.rstudio.com
Usage
download_log(day, package_name = NULL, cache_dir = NULL)
Arguments
day day of the log to download
package_name package name; if not NULL, downloads are directly filtered for this package
cache_dir if not NULL, results are cached in this directory

packagetrackr packagetrackr
Description
packagetrackr

package_downloads Track package downloads from RStudio’s CRAN mirror
Description
Results are cached in a local folder.
Usage
package_downloads(package_name, start = as.Date("2012-10-01"),
end = Sys.Date() - 1, cache_dir = package_download_cache_dir(package_name),
force = FALSE)
Arguments
package_name Name of the package to get download statistics for
start first day of requested download stats
end last day of requested download stats
cache_dir cache folder to use, defaults to one given by package_download_cache_dir (via rappdirs)
force if TRUE, user is not prompted to confirm writing to hard disk (intended for non-interactive use)
Examples
## Not run: package_downloads("packagetrackr", start = as.Date("2015-09-01"))

package_download_cache_dir Canonical download directory for package download logs
Description
Canonical download directory for package download logs
Usage
package_download_cache_dir(package_name)
remove_package_download_cache_dir(package_name)
Arguments
package_name name of package
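Taken together, a minimal workflow sketch (added for orientation, not part of the original manual; it assumes network access to cran-logs.rstudio.com and the usual log column layout, and the package name and date range are purely illustrative):
## Not run:
library(packagetrackr)
# download and cache three weeks of logs for one package
dl <- package_downloads("packagetrackr", start = as.Date("2015-09-01"),
                        end = as.Date("2015-09-21"), force = TRUE)
# count downloads per day with the imported dplyr/magrittr tools,
# assuming the cran-logs layout includes a 'date' column
dl %>% dplyr::count(date)
# inspect, and later remove, the rappdirs-based cache directory
package_download_cache_dir("packagetrackr")
remove_package_download_cache_dir("packagetrackr")
## End(Not run)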
lexiconPT
cran
R
Package ‘lexiconPT’
October 13, 2022
Type Package
Title Lexicons for Portuguese Text Analysis
Version 0.1.0
Description Provides easy access to sentiment lexicons for those who want to do text analysis in
Portuguese texts. As of now, two Portuguese lexicons are available: 'SentiLex-PT02' and
'OpLexicon' (v2.1 and v3.0).
License GPL-2 | file LICENSE
Encoding UTF-8
LazyData true
RoxygenNote 6.0.1
Depends R(>= 2.10.0)
NeedsCompilation no
Author <NAME> [aut, cre]
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2017-10-24 10:24:58 UTC

R topics documented:
get_word_sentimen... 2
oplexicon_v2.... 2
oplexicon_v3.... 3
sentiLex_lem_PT0... 3

get_word_sentiment get_word_sentiment
Description
Look up a word or term in the datasets available in lexiconPT
Usage
get_word_sentiment(word)
Arguments
word character.
Value
A list of all datasets available in lexiconPT filtered by the input word.
Examples
get_word_sentiment("cantar")

oplexicon_v2.1 OpLexicon V2.1
Description
OpLexicon is a sentiment lexicon for the Portuguese language. Please see SOUZA and VIEIRA
(2012) and SOUZA et al. (2012) for its complete reference and documentation.
Usage
oplexicon_v2.1
Format
A data frame with 30677 rows and 3 variables:
term character. The word or term.
type character. Grammar classification of the word or term.
polarity integer. Numeric classification of the polarity or sentiment. It can only assume the values -1, 0, and 1.
Source
http://ontolp.inf.pucrs.br/Recursos/downloads-OpLexicon.php

oplexicon_v3.0 OpLexicon V3.0
Description
OpLexicon is a sentiment lexicon for the Portuguese language. Please see SOUZA and VIEIRA
(2012) and SOUZA et al. (2012) for its complete reference and documentation.
Usage
oplexicon_v3.0
Format
A data frame with 32191 rows and 4 variables:
term character. The word or term. It also includes emoticons.
type character. The type of the term.
polarity integer. Numeric classification of the polarity or sentiment.
polarity_revision character. Was the polarity obtained manually (A) or automatically (C)?
Source
http://ontolp.inf.pucrs.br/Recursos/downloads-OpLexicon.php

sentiLex_lem_PT02 SentiLex-PT02
Description
A sentiment lexicon designed for the extraction of sentiment and opinion about human entities in
Portuguese texts. Please see SILVA, CARVALHO, COSTA and SARMENTO (2010) for its complete
reference and documentation.
Usage
sentiLex_lem_PT02
Format
A data frame with 7014 rows and 5 variables:
term character. The word or term.
grammar_category character. The grammar classification of the term.
polarity double. Numeric classification of the polarity or sentiment.
polarity_target character. Polarity target. It can be N0 (subject), N1 (complement) or N2 (no documentation was found for what it means).
polarity_classification character. Was the polarity obtained manually (MAN) or automatically (JALC)?
Source
<NAME>, <NAME>, <NAME>, <NAME>, Automatic Expansion of a Social Judgment Lexicon
for Sentiment Analysis. Technical Report TR 10-08, University of Lisbon, Faculty of Sciences,
LASIGE, December 2010. doi: 10455/6694
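As a quick orientation, a minimal sketch of how these lexicons are typically joined against tokenized text (added for illustration; the toy sentence, its tokenization, and the naive scoring rule are not part of the package):
library(lexiconPT)
data("oplexicon_v3.0")
# a toy tokenized Portuguese sentence
tokens <- c("eu", "gosto", "muito", "de", "cantar")
# keep the tokens present in the lexicon and sum their polarities
matched <- oplexicon_v3.0[oplexicon_v3.0$term %in% tokens, c("term", "polarity")]
matched
sum(matched$polarity) # a crude sentence-level polarity score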
mmodely
cran
R
Package ‘mmodely’
May 17, 2023
Version 0.2.5
Date 2023-05-05
Title Modeling Multivariate Origins Determinants - Evolutionary Lineages in Ecology
Author <NAME>
Maintainer <NAME> <<EMAIL>>
Depends R (>= 2.0.0), caper
Imports stats, caroline, ape
Description Perform multivariate modeling of evolved traits, with special attention to understanding
the interplay of the multi-factorial determinants of their origins in complex ecological settings
(Stephens, 2007 <doi:10.1016/j.tree.2006.12.003>). This software primarily concentrates on
phylogenetic regression analysis, enabling implementation of tree transformation averaging and
visualization functionality. Functions additionally support information theoretic approaches
(Grueber, 2011 <doi:10.1111/j.1420-9101.2010.02210.x>; Garamszegi, 2011
<doi:10.1007/s00265-010-1028-7>) such as model averaging and selection of phylogenetic models.
Accessory functions are also implemented for coef standardization (Cade 2015), selection
uncertainty, and variable importance (Burnham & Anderson 2000). There are other numerous
functions for visualizing confounded variables, plotting phylogenetic trees, as well as reporting
and exporting modeling results. Lastly, as challenges to ecology are inherently multifarious, and
therefore often multi-dataset, this package features several functions to support the identification,
interpolation, merging, and updating of missing data and outdated nomenclature.
License Apache License
LazyLoad yes
NeedsCompilation no
Repository CRAN
Date/Publication 2023-05-17 07:10:02 UTC

R topics documented:
average.fit.model... 3
calc.q2n.rati... 4
cep... 5
comp.dat... 5
compare.data.gs.vs.tree.tip... 6
correct.AI... 7
count.mod.var... 8
ct.possible.model... 8
drop.na.dat... 9
fit.1ln.rpr... 9
get.mod.clmn... 10
get.mod.outcom... 11
get.mod.var... 12
get.model.combo... 12
get.pgls.coef... 13
get.phylo.stat... 14
gs.chec... 15
gs.names.mismatch.chec... 15
gs.renam... 16
interpolat... 17
missing.dat... 18
missing.fill.i... 18
pgls.ite... 19
pgls.iter.stat... 20
pgls.prin... 21
pgls.repor... 22
pgls.wra... 23
plot.confound.gri... 24
plot.pgls.iter... 25
plot.pgls.R2AI... 26
plot.transformed.phyl... 27
plot.xy.ab.... 28
select.best.model... 29
sparge.modse... 30
trim.phyl... 31
weight.I... 32

average.fit.models Calculate a weighted average of PGLS models
Description
This function takes the output of pgls.iter and uses its list of model fits and optimizations
(e.g. AICc), performing a weighted average on the coefficients estimated in the former by
weighting by the latter. The parameters can also optionally be converted to binary by specifying
binary=TRUE, or by just running the alias wrapper function variable.importance, for assessing
evidence of variable importance (Burnham & Anderson 2000).
Usage
average.fit.models(vars, fits, optims, weight='AICw', by=c('n','q','nXq','rwGsm')[1],
round.digits=5, binary=FALSE, standardize=FALSE)
variable.importance(vars, fits, optims, weight='AICw', by=c('n','q','nXq','rwGsm')[1],
round.digits=5)
Arguments
vars variable names of the model
fits a list of PGLS model fits
optims a list of PGLS optimization parameters (should include "AICw")
weight a column name in the optims that specifies the weights to be used in the average
by unique identifier used to group sub-datasets for reporting (defaults to n)
round.digits the number of decimal places of the resultant mean to output
binary converts all parameters to binary for presence or absence to calculate ’importance’
standardize standardize the coefficient estimates by partial standard deviations, according to Cade (2015)
Value
A vector of AICc difference weighted [AICw] averages of PGLS coefficients. Also returns model
’selection’ errors or the square root of ’uncertainties’ (Burnham & Anderson 2000)
Examples
data.path <- system.file("extdata","primate-example.data.csv", package="mmodely")
data <- read.csv(data.path, row.names=1)
pvs <- names(data[3:5])
data$gn_sp <- rownames(data)
tree.path <- system.file("extdata","primate-springer.2012.tre", package="mmodely")
phyl <- ape::read.tree(tree.path)[[5]]
comp <- comp.data(phylo=phyl, df=data)
mods <- get.model.combos(predictor.vars=pvs, outcome.var='OC', min.q=2)
PGLSi <- pgls.iter(models=mods, phylo=phyl, df=data, k=1,l=1,d=1)
average.fit.models(vars=c('mass.Kg','group.size'), fits=PGLSi$fits, optims=PGLSi$optim)
variable.importance(vars=c('mass.Kg','group.size'), fits=PGLSi$fits, optims=PGLSi$optim)

calc.q2n.ratio Calculate the ratio of fit predictor variables to sample size
Description
The one-in-ten rule of thumb for model fitting suggests at least 10-fold as many data points as
parameters fit. This function allows for easily calculating that ratio on model-selected PGLS fits.
Usage
calc.q2n.ratio(coefs)
Arguments
coefs a list of coefficients extracted from fit PGLS models
Value
the ratio of q to n (on average for all extracted fit models)
Examples
data.path <- system.file("extdata","primate-example.data.csv", package="mmodely")
data <- read.csv(data.path, row.names=1)
pvs <- names(data[3:5])
data$gn_sp <- rownames(data)
tree.path <- system.file("extdata","primate-springer.2012.tre", package="mmodely")
phyl <- ape::read.tree(tree.path)[[5]]
comp <- comp.data(phylo=phyl, df=data)
mods <- get.model.combos(predictor.vars=pvs, outcome.var='OC', min.q=2)
PGLSi <- pgls.iter(models=mods, phylo=phyl, df=data, k=1,l=1,d=1)
coefs.objs <- get.pgls.coefs(PGLSi$fits, est='Estimate')
calc.q2n.ratio(coefs.objs)

cept Include all variables except ...
Description
This function subsets a dataframe, list, or named vector, dropping the specified variable (column)
names.
Usage
cept(x,except='gn_sp')
Arguments
x a dataframe, list, or named vector
except a vector of the names of the items in x to exclude
Value
the subset of x without those ’except’ items specified
Examples
data.path <- system.file("extdata","primate-example.data.csv", package="mmodely")
data <- read.csv(data.path, row.names=1)
df.except.gnsp <- cept(x=data,except='gn_sp')

comp.data Comparative Data
Description
This is a shortcut function that wraps around "comparative.data" for use in the PGLS function.
Usage
comp.data(phylo,df,gn_sp='gn_sp')
Arguments
phylo a tree object of class ’phylo’
df a data.frame with row names matching the number and tip labels of ’tree’
gn_sp the column name (e.g. "gn_sp") that indicates how to match df with tree
Value
a "comparative data" table for use in PGLS modeling
Examples
data.path <- system.file("extdata","primate-example.data.csv", package="mmodely")
data <- read.csv(data.path, row.names=1)
pvs <- names(data[3:5])
data$gn_sp <- rownames(data)
tree.path <- system.file("extdata","primate-springer.2012.tre", package="mmodely")
phyl <- ape::read.tree(tree.path)[[5]]
comp <- comp.data(phylo=phyl, df=data)

compare.data.gs.vs.tree.tips Find data being dropped by mismatches to the tree
Description
This function simply lists the rows of the data that are not getting matched to tips of the tree.
Usage
compare.data.gs.vs.tree.tips(data, phylo, match.on=c('gn_sp','rownames')[1])
Arguments
data a data frame with genus species information as row names and a column named "gn_sp"
phylo a phylogenetic tree with labeled tips
match.on a character string specifying where the ’Genus_species’ vector lies
Value
prints rows that are not matched to the tree tips
Examples
data.path <- system.file("extdata","primate-example.data.csv", package="mmodely")
data <- read.csv(data.path, row.names=1)
data$gn_sp <- rownames(data)
tree.path <- system.file("extdata","primate-springer.2012.tre", package="mmodely")
phyl <- ape::read.tree(tree.path)[[5]]
compare.data.gs.vs.tree.tips(data, phyl, match.on='rownames')

correct.AIC Correct AIC
Description
Calculate a corrected Akaike Information Criterion, AICc = AIC + 2K(K+1)/(n - K - 1).
Usage
correct.AIC(AIC, K,n)
Arguments
AIC a vector of AIC values
K number of parameters
n number of data points
Value
corrected AIC values
Examples
correct.AIC(AIC=100,K=10,n=100)

count.mod.vars Count the predictor variables in a model
Description
This function takes a model string and counts the number of predictor variables.
Usage
count.mod.vars(model)
Arguments
model model specified as a string in the form "y ~ x1 + x2 ..."
Value
an integer specifying the count of predictor variables
Examples
count <- count.mod.vars(model=formula('y ~ x1 + x2'))
if(count == 2) { print('sane'); }else{ print('insane')}

ct.possible.models Count all possible model combinations
Description
Count all combinations of predictor variables in a multivariate regression model.
Usage
ct.possible.models(q)
Arguments
q number of predictor variables
Value
a count of the number of possible models
Examples
ct.possible.models(9)

drop.na.data Drop any rows with NA values
Description
This function takes a dataframe as input and removes any rows that have NA values.
Usage
drop.na.data(df, vars=names(df))
Arguments
df a dataframe
vars subset of variable (column) names to use in searching for missing values
Value
A subset of ’df’ that only has non-missing values in the columns specified by ’vars’
Examples
path <- system.file("extdata","primate-example.data.csv", package="mmodely")
data <- read.csv(path, row.names=1)
df.nona <- drop.na.data(data, vars=names(data))

fit.1ln.rprt Report a model fit in a single line of text output
Description
This function takes a fit multivariate regression model as input and converts the normal tabular
output into a single line using repeated "+" or "-" symbols for significance.
Usage
fit.1ln.rprt(fit, method=c('std.dev','p-value')[1], decimal.places=3, name.char.len=6,
print.inline=TRUE, rtrn.line=FALSE, R2AIC=TRUE,mn='')
Arguments
fit a fit model
method how to calculate the number of pluses or minuses before each coefficient name (default is standard deviations)
decimal.places the number of decimal places to use in reporting p-values
name.char.len the maximum length to use when truncating variable names
R2AIC boolean for also returning/printing AIC and R^2 values
print.inline should the output string be printed to the terminal?
rtrn.line should the output string be returned as a character string?
mn model number prefixed to printout if ’print.inline’ is TRUE
Value
A character string of the form "++var1 +var5 var3 | -var2 --var4" indicating significance and
direction of regression results
Examples
path <- system.file("extdata","primate-example.data.csv", package="mmodely")
data <- read.csv(path, row.names=1)
model.fit <- lm('OC ~ mass.Kg + group.size + arboreal + leap.pct', data=data)
fit.1ln.rprt(fit=model.fit, decimal.places=3, name.char.len=6, print.inline=TRUE, rtrn.line=FALSE)

get.mod.clmns Get model columns
Description
Get the variable names from a model string by splitting on "+" and ’~’ using both ’get.mod.outcome’
and ’get.mod.vars’. The results are passed to the comp.data function for eventual use in PGLS
modeling. ’gn_sp’ is included as it is typically required to link tree tips to rows of the comparative
data.
Usage
get.mod.clmns(model, gs.clmn='gn_sp')
Arguments
model a model string of the form "y ~ x1 + x2 ..."
gs.clmn the column header for the vector of "Genus_species" names, to link tree tips to rows
Value
a vector of characters enumerating the columns to retain in PGLS modeling (input to the df
parameter in the ’comp.data’ function)
Examples
model.columns <- get.mod.clmns(model=formula('y ~ x1 + x2'))

get.mod.outcome Get the outcome variable from a model string
Description
Get the outcome variable from the front of a model formula string. Used as part of ’get.mod.clmns’
to be passed to ’comp.data’
Usage
get.mod.outcome(model)
Arguments
model a character string of a formula of the form ’y ~ x1 + x2 ...’
Value
a character string specifying the outcome variable
Examples
model.outcome <- get.mod.outcome(model=formula('y ~ x1 + x2'))

get.mod.vars Get model variable names
Description
Split the predictor string of a model formula into its constituent character strings.
Usage
get.mod.vars(model)
Arguments
model a character string of a formula of the form ’y ~ x1 + x2’
Value
a vector of character strings of variable names (e.g. corresponding to column names for comp.data
input)
Examples
model.variables <- get.mod.vars(model='y ~ x1 + x2')

get.model.combos All combinations of predictor variables
Description
Enumerate all combinations of predictor variables in a multivariate regression model.
Usage
get.model.combos(outcome.var, predictor.vars, min.q=1)
Arguments
predictor.vars predictor variable names (a vector of character strings)
outcome.var outcome variable name (character string)
min.q minimum number of predictor variables to include in the model (default is 1)
Value
a vector of models as character strings of the form "y ~ x1 + x2 ..."
Examples
path <- system.file("extdata","primate-example.data.csv", package="mmodely")
data <- read.csv(path, row.names=1)
get.model.combos(outcome.var='OC', predictor.vars=names(data), min.q=2)

get.pgls.coefs Get coefficients from a list of PGLS model fits (from each selected subset)
Description
Post PGLS model selection, the list of all possible PGLS model fits can be subset and passed to
this function, which harvests out the coefficients or t-values for each model into bins for the
coefficients
Usage
get.pgls.coefs(pgls.fits, est=c("t value","Estimate","Pr(>|t|)")[1])
Arguments
pgls.fits a list of PGLS models output from ’pgls’ or ’pgls.report’
est a character string indicating if "Estimate", "t value", or "Pr(>|t|)" should be used as data points (default is "t value")
Value
A list of PGLS coefficients (lists of estimates and t-values) organized by coefficient-named bins
Examples
data.path <- system.file("extdata","primate-example.data.csv", package="mmodely")
data <- read.csv(data.path, row.names=1)
pvs <- names(data[3:5])
data$gn_sp <- rownames(data)
tree.path <- system.file("extdata","primate-springer.2012.tre", package="mmodely")
phyl <- ape::read.tree(tree.path)[[5]]
comp <- comp.data(phylo=phyl, df=data)
mods <- get.model.combos(predictor.vars=pvs, outcome.var='OC', min.q=2)
PGLSi <- pgls.iter(models=mods, phylo=phyl, df=data, k=1,l=1,d=1)
coefs.objs <- get.pgls.coefs(PGLSi$fits, est='Estimate')

get.phylo.stats Get tree statistics for a trait
Description
This function uses Pagel’s lambda, Blomberg’s K, and Ancestral Character Estimation [ACE] to
calculate statistics on a tree given a specified trait.
Usage
get.phylo.stats(phylo, data, trait.clmn, gs.clmn='gn_sp', ace.method='REML',ace.scaled=TRUE,
ace.kappa=1)
Arguments
phylo a phylogenetic tree
data a data frame of traits with ’Genus_species’ information
trait.clmn the name of the trait column on which the statistics are calculated
gs.clmn the name of the column containing the ’Genus_species’ vector
ace.method estimation method passed on to ape’s ’ace’ (default ’REML’)
ace.scaled ’scaled’ setting passed on to ape’s ’ace’
ace.kappa ’kappa’ setting passed on to ape’s ’ace’
Value
statistics on a particular trait within a tree (Pagel’s lambda, Blomberg’s K, and the most ancestral
ACE estimate)
Examples
data.path <- system.file("extdata","primate-example.data.csv", package="mmodely")
data <- read.csv(data.path, row.names=1)
data$gn_sp <- rownames(data)
tree.path <- system.file("extdata","primate-springer.2012.tre", package="mmodely")
phyl <- ape::read.tree(tree.path)[[5]]
get.phylo.stats(phylo=phyl, data=data, trait.clmn='OC', gs.clmn='gn_sp',
ace.method='REML',ace.scaled=TRUE, ace.kappa=1)

gs.check Check "Genus species" name formatting
Description
This convenience function checks to make sure that all of the elements of the provided character
vector adhere to the "Genus species" naming convention format. Default delimiters between genus
and species names in the string are " ", "_", or "."
Usage
gs.check(genus.species, sep='[ _\\.]')
Arguments
genus.species a vector of character strings specifying the combination of Genus [and] species
sep a regular expression between genus and species
Value
None
Examples
path <- system.file("extdata","primate-example.data.csv", package="mmodely")
gs.tab <- read.csv(path, row.names=1)
gs.tab$gn_sp <- rownames(gs.tab)
gs.check(genus.species=gs.tab$gn_sp, sep='[ _\\.]')

gs.names.mismatch.check Check "Genus species" names against an alias lookup table
Description
This convenience function checks the ’Genus_species’ names in the provided data frame against
an alias lookup table and reports any names that fail to match.
Usage
gs.names.mismatch.check(df, alias.table.path, gs.clmn='gn_sp')
Arguments
df a data frame with genus species information as row names and optionally in a column named "gn_sp"
alias.table.path a file system path (e.g. ’inst/extdata/primate.taxa.aliases.tab’) to a lookup table with ’old.name’ and ’new.name’ as columns
gs.clmn the name of the column containing the ’Genus_species’ vector
Value
None
Examples
path <- system.file("extdata","primate-example.data.csv", package="mmodely")
gs.tab <- read.csv(path, row.names=1)
gs.tab$gn_sp <- rownames(gs.tab)
path.look <- system.file("extdata","primate.taxa.aliases.tab", package="mmodely")
gs.names.mismatch.check(gs.tab, alias.table.path=path.look, gs.clmn='gn_sp')

gs.rename Rename the Genus species information in a data frame
Description
This function takes a data frame (with a genus species column) and proceeds to use an external
look-up table to update the names if they’ve been changed
Usage
gs.rename(df, alias.table.path, retro=FALSE, update.gn_sp=FALSE)
Arguments
df a data frame with genus species information as row names and optionally in a column named "gn_sp"
alias.table.path a file system path (e.g. ’inst/extdata/primate.taxa.aliases.tab’) to a lookup table with ’old.name’ and ’new.name’ as columns
retro a boolean (T/F) parameter specifying if the renaming should go from new to old instead of the default of old to new
update.gn_sp a boolean parameter specifying if the ’gn_sp’ column should also be updated with ’new.name’s
Value
the original data frame with (potentially) updated row names and updated gn_sp column values
Examples
path.data <- system.file("extdata","primate-example.data.csv", package="mmodely")
data <- read.csv(path.data, row.names=1)
path.look <- system.file("extdata","primate.taxa.aliases.tab", package="mmodely")
data.renamed <- gs.rename(df=data, alias.table.path=path.look, retro=FALSE, update.gn_sp=FALSE)

interpolate Interpolate missing data in a data frame
Description
This function finds NA values and interpolates them by averaging the values of nearby genera and
species
Usage
interpolate(df, taxa=c('genus','family'), clmns=1:length(df))
Arguments
df a data frame
taxa a vector of taxonomic ranks (corresponding to columns) to assist in guiding the interpolation
clmns the names of the columns to interpolate over
Value
a modified data frame without missing values in the columns specified
Examples
path <- system.file("extdata","primate-example.data.csv", package="mmodely")
gs.tab <- read.csv(path, row.names=1)
clmns <- match(c('mass.Kg','DPL.km'),names(gs.tab))
df.2 <- interpolate(df=gs.tab, taxa='genus', clmns=clmns)

missing.data Report missing values in a dataframe
Description
This function reports column- and row-wise missing data. It can also list the row names for missing
columns or the column names for missing rows.
Usage
missing.data(x, cols=NULL, rows=NULL)
Arguments
x a dataframe
cols print the specific rows corresponding to missing values in this column
rows print the specific cols corresponding to missing values in this rowname
Value
a report on column- versus row-wise missing data
Examples
path <- system.file("extdata","primate-example.data.csv", package="mmodely")
data <- read.csv(path, row.names=1)
missing.data(data)

missing.fill.in Fill in missing values in a dataframe with a secondary source
Description
This function uses the (non-missing) values from one column to fill in the missing values of another
Usage
missing.fill.in(x, var.from, var.to)
Arguments
x a dataframe or matrix
var.from secondary variable (of the same type and units) providing values to ’var.to’
var.to primary variable with missing values to fill in by ’var.from’
Value
a modified dataframe with fewer missing values in the ’var.to’ column
Examples
df <- data.frame(a=c(1,2,NA),b=c(1,NA,3),c=c(1,2,6))
missing.fill.in(df, 'c','a')

pgls.iter Iterate through PGLS estimations
Description
This function takes a phylogenetic tree and a list of (all possible) combinations of variables as a
vector of model strings and estimates PGLS fits based on the bounds or tree parameters provided
separately.
Usage
pgls.iter(models, phylo, df, gs.clmn='gn_sp',
b=list(lambda=c(.2,1),kappa=c(.2,2.8),delta=c(.2,2.8)),l='ML', k='ML',d='ML')
Arguments
models a vector of all possible model formulas (as character strings)
phylo a phylogenetic tree
df a data frame of traits with a ’Genus_species’ column
gs.clmn the name of the column containing the ’Genus_species’ vector
b a list of vectors of upper and lower bounds for kappa, lambda, and delta
k the fixed or ’ML’ value for kappa
l the fixed or ’ML’ value for lambda
d the fixed or ’ML’ value for delta
Value
a list of fit PGLS regression models plus ’optim’ and ’param’ support tables
Examples
data.path <- system.file("extdata","primate-example.data.csv", package="mmodely")
data <- read.csv(data.path, row.names=1)
pvs <- names(data[3:5])
data$gn_sp <- rownames(data)
tree.path <- system.file("extdata","primate-springer.2012.tre", package="mmodely")
phyl <- ape::read.tree(tree.path)[[5]]
comp <- comp.data(phylo=phyl, df=data)
mods <- get.model.combos(predictor.vars=pvs, outcome.var='OC', min.q=2)
PGLSi <- pgls.iter(models=mods, phylo=phyl, df=data, k=1,l=1,d=1)

pgls.iter.stats Statistics from PGLS runs
Description
Print (and plot) statistics from a list of PGLS fitted models and tables of associated parameters.
Usage
pgls.iter.stats(PGLSi, verbose=TRUE, plots=FALSE)
Arguments
PGLSi a list of PGLS iter objects, each of which is a list including: fitted PGLS models, an optim table, and a tree-transformation parameter table
verbose should the summary statistics be printed to the terminal?
plots should the statistics also be plotted?
Value
Summary statistics on each of the objects in the PGLS list of lists
Examples
data.path <- system.file("extdata","primate-example.data.csv", package="mmodely")
data <- read.csv(data.path, row.names=1)
pvs <- names(data[3:5])
data$gn_sp <- rownames(data)
tree.path <- system.file("extdata","primate-springer.2012.tre", package="mmodely")
phyl <- ape::read.tree(tree.path)[[5]]
comp <- comp.data(phylo=phyl, df=data)
mods <- get.model.combos(predictor.vars=pvs, outcome.var='OC', min.q=2)
PGLSi <- pgls.iter(models=mods, phylo=phyl, df=data, k=1,l=1,d=1)
pgls.iter.stats(PGLSi, verbose=TRUE, plots=FALSE)

pgls.print Print the results of a PGLS model fit
Description
Print the results of a PGLS model fit
Usage
pgls.print(pgls, all.vars=names(pgls$data$data)[-1], model.no=NA, mtx.out=NA,
write=TRUE, print=FALSE)
Arguments
pgls a fit PGLS model
all.vars the names of all the variables to be reported
model.no the model number (can be the order that models were run)
mtx.out should a matrix of the tabular summary results be returned?
write should the matrix of summary results be written to disk?
print should the matrix of summary results be printed to screen?
Value
A matrix of summary results of a fit PGLS model
Examples
data.path <- system.file("extdata","primate-example.data.csv", package="mmodely")
data <- read.csv(data.path, row.names=1)
data$gn_sp <- rownames(data)
tree.path <- system.file("extdata","primate-springer.2012.tre", package="mmodely")
#5. RAxML phylogram based on the 61199 bp concatenation of 69 nuclear and ten mitochondrial genes.
phyl <- ape::read.tree(tree.path)[[5]]
phyl <- trim.phylo(phylo=phyl, gs.vect=data$gn_sp)
comp <- comp.data(phylo=phyl, df=data)
a.PGLS <- caper::pgls(formula('OC~mass.Kg + DPL.km'), data=comp)
pgls.print(a.PGLS, all.vars=names(a.PGLS$data$data)[-1], model.no=NA, mtx.out='',
write=FALSE, print=FALSE)

pgls.report Report PGLS results as a table
Description
Output a spreadsheet-ready tabular summary of a fit PGLS model
Usage
pgls.report(cd, f=formula('y~x'), l=1,k=1,d=1,
bounds=list(lambda=c(.2,1),kappa=c(.2,2.7),delta=c(.2,2.7)),
anova=FALSE, mod.no='NA', out='pgls.output-temp',QC.plot=FALSE)
Arguments
cd a comparative data object, here created by ’comp.data’
f the model formula (as a character string)
k the fixed or ’ML’ value for kappa
l the fixed or ’ML’ value for lambda
d the fixed or ’ML’ value for delta
bounds a list of vectors of upper and lower bounds for kappa, lambda, and delta
anova should an anova be run on the fit model and output to the terminal?
mod.no the model number (can be the order that models were run)
out the base filename to be printed out
QC.plot should a quality control plot be output to screen?
Value
A summary of a fit PGLS model, with ANOVA and tabular spreadsheet-ready csv file system output.
Examples
data.path <- system.file("extdata","primate-example.data.csv", package="mmodely")
data <- read.csv(data.path, row.names=1)
data$gn_sp <- rownames(data)
tree.path <- system.file("extdata","primate-springer.2012.tre", package="mmodely")
#5. RAxML phylogram based on the 61199 bp concatenation of 69 nuclear and ten mitochondrial genes.
phyl <- ape::read.tree(tree.path)[[5]]
phyl <- trim.phylo(phylo=phyl, gs.vect=data$gn_sp)
comp <- comp.data(phylo=phyl, df=data)
pgls.report(comp, f=formula('OC~mass.Kg + DPL.km'), l=1,k=1,d=1,
anova=FALSE, mod.no='555', out='', QC.plot=TRUE)

pgls.wrap A Wrapper for the PGLS model
Description
Fit a PGLS model and print the results (a wrapper around an individual PGLS run).
Usage
pgls.wrap(cd,f,b,l,k,d,all.vars=names(cd$data)[-1], model.no=NA, mtx.out=NA,
write=TRUE,print=FALSE)
Arguments
cd a ’comparative data’ object, here created by ’comp.data(phylo, df, gs.clmn)’
f the model formula (as a character string)
b a list of vectors of upper and lower bounds for kappa, lambda, and delta
l the fixed or ’ML’ value for lambda
k the fixed or ’ML’ value for kappa
d the fixed or ’ML’ value for delta
all.vars the names of all the variables to be reported
model.no the model number (can be the order that models were run)
mtx.out should a matrix of the tabular summary results be returned?
write should the matrix of summary results be written to disk?
print should the matrix of summary results be printed to screen?
Value
A matrix of summary results of a fit PGLS model
Examples
data.path <- system.file("extdata","primate-example.data.csv", package="mmodely")
data <- read.csv(data.path, row.names=1)
data$gn_sp <- rownames(data)
tree.path <- system.file("extdata","primate-springer.2012.tre", package="mmodely")
#5. RAxML phylogram based on the 61199 bp concatenation of 69 nuclear and ten mitochondrial genes.
phyl <- ape::read.tree(tree.path)[[5]]
phyl <- trim.phylo(phylo=phyl, gs.vect=data$gn_sp)
comp <- comp.data(phylo=phyl, df=data)
model <- 'OC ~ mass.Kg + group.size'
pgls.wrap(cd=comp,f=model,b=list(kappa=c(.3,3),lambda=c(.3,3),delta=c(.3,3)),
l=1,k=1,d=1,all.vars=names(comp$data)[-1])

plot.confound.grid Plot a grid of x y plots split by a confounder z
Description
Plot a grid of x y plots showing how a third confounding variable ’z’ changes the slope
Usage
## S3 method for class 'confound.grid'
plot(x, Y='y', X='x', confounder='z', breaks=3,...)
Arguments
x a data frame
Y the name of the column with the dependent/outcome variable
X the name of the column with the predictor variable
confounder the name of the column with the confounding variable
breaks number or vector of breaks to split the plots horizontally (across x)
... other arguments passed to ’plot’
Value
a confound grid plot
Examples
path <- system.file("extdata","primate-example.data.csv", package="mmodely")
data <- read.csv(path, row.names=1)
data$col <- c('yellow','red')[data$nocturnal+1]
plot.confound.grid(x=data, Y='OC', X='leap.pct', confounder='mass.Kg')

plot.pgls.iters Plot the PGLS iterations
Description
A plot of AIC (and AICc) vs R^2 (and adjusted R^2) for all of the PGLS iterations
Usage
## S3 method for class 'pgls.iters'
plot(x, bests=bestBy(x$optim, by=c('n','q','qXn','rwGsm')[1],
best=c('AICc','R2.adj')[1], inverse=FALSE), ...)
Arguments
x a PGLSi[teration] object (a list of pgls model fits as well as optimization and tree parameter tables)
bests a table of the ’best’ models to highlight in the plot based on some optimization criterion (e.g. R2)
... other parameters passed to ’plot’
Value
a plot of all of the PGLS iterations
Examples
data.path <- system.file("extdata","primate-example.data.csv", package="mmodely")
data <- read.csv(data.path, row.names=1)
pvs <- names(data[3:5])
data$gn_sp <- rownames(data)
tree.path <- system.file("extdata","primate-springer.2012.tre", package="mmodely")
phyl <- ape::read.tree(tree.path)[[5]]
mods <- get.model.combos(predictor.vars=pvs, outcome.var='OC', min.q=2)
PGLSi <- pgls.iter(models=mods, phylo=phyl, df=data, k=1,l=1,d=1)
# sprinkle in some missing data so as to make model selection more interesting
for(pv in pvs){ data[sample(x=1:nrow(data),size=2),pv] <- NA}
PGLSi <- pgls.iter(models=mods, phylo=phyl, df=data, k=1,l=1,d=1)
# find the lowest AIC within each q by n sized sub-dataset
plot.pgls.iters(x=PGLSi)

plot.pgls.R2AIC Plot (R2 vs AIC) results of a collection of fit PGLS models
Description
Plots a single panel of R^2 versus AIC, using versions of your choosing.
Usage
## S3 method for class 'pgls.R2AIC'
plot(x, bests=bestBy(x, by=c('n','q','qXn','rwGsm')[4],
best=c('AICc','R2.adj')[1], inverse=c(FALSE,TRUE)[1]),bcl=rgb(1,1,1,maxColorValue=3,alpha=1),
nx=2, model.as.title='', ...)
Arguments
x a PGLSi[teration]$optim [optimization] table
bests a list of the best PGLS models grouped by variable count and sorted by some metric (e.g. adjusted R2)
bcl background color of the plot points
nx point size expansion factor to multiply against the sample size ratio (this model to the max of all models)
model.as.title uses model.1ln.report to create a short character string of the "best" model results as a title
... other parameters passed to ’plot’
Value
a plot of R2 versus AIC of many PGLS models
Examples
data.path <- system.file("extdata","primate-example.data.csv", package="mmodely")
data <- read.csv(data.path, row.names=1)
pvs <- names(data[3:6])
data$gn_sp <- rownames(data)
tree.path <- system.file("extdata","primate-springer.2012.tre", package="mmodely")
phyl <- ape::read.tree(tree.path)[[5]]
mods <- get.model.combos(predictor.vars=pvs, outcome.var='OC', min.q=2)
# sprinkle in some missing data so as to make model selection more interesting
for(pv in pvs){ data[sample(x=1:nrow(data),size=2),pv] <- NA}
PGLSi <- pgls.iter(models=mods, phylo=phyl, df=data, k=1,l=1,d=1)
plot.pgls.R2AIC(PGLSi$optim) # find the lowest AIC within each q by n sized sub-dataset

plot.transformed.phylo Plot a transformed phylogenetic tree
Description
PGLS regression will use maximum likelihood to estimate tree parameters while also estimating
regression parameters. Here we provide a utility function to visualize what this new tree would
look like in two dimensions.
Usage
## S3 method for class 'transformed.phylo'
plot(x, delta=1,kappa=1,...)
Arguments
x a phylogenetic tree
delta a number between 0 and 3
kappa a number between 0 and 3
... other parameters passed to ’plot’
Value
a plot of a transformed phylogenetic tree
Examples
tree.path <- system.file("extdata","primate-springer.2012.tre", package="mmodely")
phyl <- ape::read.tree(tree.path)[[5]]
plot.transformed.phylo(x=phyl, delta=2.3,kappa=2.1)

plot.xy.ab.p An x/y scatterplot with a linear regression line and p-value
Description
This function performs a simple scatter plot but also superimposes a linear regression trend (abline)
and optionally also the p-value of this line
Usage
## S3 method for class 'xy.ab.p'
plot(x, x.var, y.var, fit.line=TRUE, p.value=TRUE, slope=TRUE,
p.col='red', plot.labels=TRUE, verbose=TRUE, ...)
Arguments
x a data frame
x.var the name of the x variable in df
y.var the name of the y variable in df
fit.line should a fit (ab) line be drawn?
p.value should the p-value be printed on the plot?
slope should the slope be printed on the plot?
p.col the color used for the printed p-value (default ’red’)
plot.labels should the plot be labeled?
verbose should all of the model fit information be printed out too?
... other parameters passed to ’plot’
Value
An x/y scatterplot with regression line
Examples
path <- system.file("extdata","primate-example.data.csv", package="mmodely")
data <- read.csv(path, row.names=1)
plot.xy.ab.p(x=data, x.var='OC', y.var='group.size', fit.line=TRUE, p.value=TRUE,
slope=TRUE, p.col='red', plot.labels=TRUE, verbose=TRUE)

select.best.models Get the best model from a list of PGLS model fits
Description
Select the "best" models from the PGLSi "optim" table within each sub-dataset grouping, according
to a chosen performance metric.
Usage
select.best.models(PGLSi, using=c('AICc','R2.adj','AIC','R2')[1],
by=c('n','q','nXq','rwGsm')[1])
Arguments
PGLSi a list of PGLS iter objects, each of which is a list including: fitted PGLS models, an optim table, and a tree-transformation parameter table
using performance metric to use in searching for the best model
by unique identifier used to group sub-datasets for reporting (defaults to n)
Value
a line corresponding to the "best" models from the PGLSi "optim" table
Examples
data.path <- system.file("extdata","primate-example.data.csv", package="mmodely")
data <- read.csv(data.path, row.names=1)
pvs <- names(data[3:5])
data$gn_sp <- rownames(data)
tree.path <- system.file("extdata","primate-springer.2012.tre", package="mmodely")
phyl <- ape::read.tree(tree.path)[[5]]
comp <- comp.data(phylo=phyl, df=data)
mods <- get.model.combos(predictor.vars=pvs, outcome.var='OC', min.q=2)
PGLSi <- pgls.iter(models=mods, phylo=phyl, df=data, k=1,l=1,d=1)
a.PGLS <- select.best.models(PGLSi, using=c('R2.adj','AICc')[1])

sparge.modsel Coefficient distribution [sparge] plot of models selected from each subset
Description
Plot the raw distribution of points corresponding to the coefficients harvested from the best model
of each subset of the dataset.
Usage
sparge.modsel(PC, jit.f=1, R2x=3, nx=2, n.max=max(unlist(PC$n)), zeroline=TRUE,
add=FALSE, pd=0, pvs=names(PC$coefs), pvlabs=NULL, xlim=range(unlist(PC$coefs)),
MA = NULL, ap=8, ac = 1, ax = nx, ...)
Arguments
PC a list of vectors of pooled coefficients (or scores) harvested from the ’best’ selected modeling runs (output from ’get.pgls.coefs’)
jit.f factor for random jittering (see ’jitter()’)
R2x the line width expansion factor according to R^2 value
nx the point size expansion factor according to sample size of model
n.max the maximum sample size used in all models
zeroline should we add an abline at x=0?
add should we add to the existing plot?
pd ’position dodge’ moves all y axis plotting positions up or down by this provided value (useful for adding multiple distributions for the same param)
pvs the predictor variable vector for ordering the y-axis labels
pvlabs the predictor variable labels for labeling the plot (defaults to pvs)
xlim x axis plot limits
MA matrix of model averages (defaults to NULL)
ap coded numeric point character symbol used for the model averaged parameter position
ac color used for the model averaged parameters plot character
ax expansion factor to expand the model average parameter plot character (defaults to nx)
... other parameters passed on to plot
Value
a ’sparge’ [sprinkle/smear] plot of coefficient distributions
See Also
See also ’boxplot’ and ’stripchart’ in package ’graphics’ as well as ’violin’, ’bean’, ’ridgelines’,
and ’raincloud’ plots.
Examples
data.path <- system.file("extdata","primate-example.data.csv", package="mmodely")
data <- read.csv(data.path, row.names=1)
pvs <- names(data[3:5])
data$gn_sp <- rownames(data)
tree.path <- system.file("extdata","primate-springer.2012.tre", package="mmodely")
phyl <- ape::read.tree(tree.path)[[5]]
mods <- get.model.combos(predictor.vars=pvs, outcome.var='OC', min.q=2)
PGLSi <- pgls.iter(models=mods, phylo=phyl, df=data, k=1,l=1,d=1)
coefs.objs <- get.pgls.coefs(PGLSi$fits, est='Estimate')
sparge.modsel(coefs.objs)

trim.phylo Trim a phylogenetic tree using Genus species names
Description
Read in a vector of genus species names and a tree, and drop the tips of the tree that do not match
the vector of names.
Usage
trim.phylo(phylo, gs.vect)
Arguments
phylo a phylogenetic tree
gs.vect a vector of character strings in the ’Genus_species’ format
Value
a trimmed phylogenetic tree
Examples
data.path <- system.file("extdata","primate-example.data.csv", package="mmodely")
data <- read.csv(data.path, row.names=1)
data$gn_sp <- rownames(data)
tree.path <- system.file("extdata","primate-springer.2012.tre", package="mmodely")
phylo <- ape::read.tree(tree.path)[[5]]
trim.phylo(phylo, gs.vect=data$gn_sp)

weight.IC Get IC weights
Description
An implementation of IC weighting. First, IC differences are calculated by subtracting the lowest
IC value from all of the values. Each difference is then converted to a relative likelihood by
exponentiating it as exp(-diff/2), and these relative likelihoods are normalized by their sum to
give the weights: w.i = exp(-diff.i/2) / sum(exp(-diff.j/2)).
Usage
weight.IC(IC)
Arguments
IC a vector of IC values
Value
a vector of IC based weights
Examples
data.path <- system.file("extdata","primate-example.data.csv", package="mmodely")
data <- read.csv(data.path, row.names=1)
pvs <- names(data[3:5])
data$gn_sp <- rownames(data)
tree.path <- system.file("extdata","primate-springer.2012.tre", package="mmodely")
phyl <- ape::read.tree(tree.path)[[5]]
comp <- comp.data(phylo=phyl, df=data)
mods <- get.model.combos(predictor.vars=pvs, outcome.var='OC', min.q=2)
PGLSi <- pgls.iter(models=mods, phylo=phyl, df=data, k=1,l=1,d=1)
AICc.w <- weight.IC(IC=PGLSi$optim$AICc)
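To make the weighting arithmetic described under weight.IC concrete, here is a minimal standalone sketch (added for illustration; it follows the standard Akaike-weight formula rather than reproducing mmodely's internal code):
aic <- c(100, 102, 107)
delta <- aic - min(aic)      # IC differences: 0, 2, 7
rel.lik <- exp(-delta / 2)   # relative likelihoods: 1, 0.368, 0.030
w <- rel.lik / sum(rel.lik)  # normalized weights; sum(w) == 1
round(w, 3)                  # 0.715 0.263 0.022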
github.com/aws/aws-sdk-go-v2/service/macie
go
Go
None Documentation [¶](#section-documentation) --- ### Overview [¶](#pkg-overview) Package macie provides the API client, operations, and parameter types for Amazon Macie. Amazon Macie Classic Amazon Macie Classic has been discontinued and is no longer available. A new Amazon Macie is now available with significant design improvements and additional features, at a lower price and in most Amazon Web Services Regions. We encourage you to take advantage of the new and improved features, and benefit from the reduced cost. To learn about features and pricing for the new Macie, see Amazon Macie (<http://aws.amazon.com/macie/>) . To learn how to use the new Macie, see the Amazon Macie User Guide (<https://docs.aws.amazon.com/macie/latest/user/what-is-macie.html>) . ### Index [¶](#pkg-index) * [Constants](#pkg-constants) * [func NewDefaultEndpointResolver() *internalendpoints.Resolver](#NewDefaultEndpointResolver) * [func WithAPIOptions(optFns ...func(*middleware.Stack) error) func(*Options)](#WithAPIOptions) * [func WithEndpointResolver(v EndpointResolver) func(*Options)](#WithEndpointResolver)deprecated * [func WithEndpointResolverV2(v EndpointResolverV2) func(*Options)](#WithEndpointResolverV2) * [type AssociateMemberAccountInput](#AssociateMemberAccountInput) * [type AssociateMemberAccountOutput](#AssociateMemberAccountOutput) * [type AssociateS3ResourcesInput](#AssociateS3ResourcesInput) * [type AssociateS3ResourcesOutput](#AssociateS3ResourcesOutput) * [type Client](#Client) * + [func New(options Options, optFns ...func(*Options)) *Client](#New) + [func NewFromConfig(cfg aws.Config, optFns ...func(*Options)) *Client](#NewFromConfig) * + [func (c *Client) AssociateMemberAccount(ctx context.Context, params *AssociateMemberAccountInput, ...) (*AssociateMemberAccountOutput, error)](#Client.AssociateMemberAccount) + [func (c *Client) AssociateS3Resources(ctx context.Context, params *AssociateS3ResourcesInput, ...) (*AssociateS3ResourcesOutput, error)](#Client.AssociateS3Resources) + [func (c *Client) DisassociateMemberAccount(ctx context.Context, params *DisassociateMemberAccountInput, ...) (*DisassociateMemberAccountOutput, error)](#Client.DisassociateMemberAccount) + [func (c *Client) DisassociateS3Resources(ctx context.Context, params *DisassociateS3ResourcesInput, ...) 
(*DisassociateS3ResourcesOutput, error)](#Client.DisassociateS3Resources) + [func (c *Client) ListMemberAccounts(ctx context.Context, params *ListMemberAccountsInput, optFns ...func(*Options)) (*ListMemberAccountsOutput, error)](#Client.ListMemberAccounts) + [func (c *Client) ListS3Resources(ctx context.Context, params *ListS3ResourcesInput, optFns ...func(*Options)) (*ListS3ResourcesOutput, error)](#Client.ListS3Resources) + [func (c *Client) UpdateS3Resources(ctx context.Context, params *UpdateS3ResourcesInput, optFns ...func(*Options)) (*UpdateS3ResourcesOutput, error)](#Client.UpdateS3Resources) * [type DisassociateMemberAccountInput](#DisassociateMemberAccountInput) * [type DisassociateMemberAccountOutput](#DisassociateMemberAccountOutput) * [type DisassociateS3ResourcesInput](#DisassociateS3ResourcesInput) * [type DisassociateS3ResourcesOutput](#DisassociateS3ResourcesOutput) * [type EndpointParameters](#EndpointParameters) * + [func (p EndpointParameters) ValidateRequired() error](#EndpointParameters.ValidateRequired) + [func (p EndpointParameters) WithDefaults() EndpointParameters](#EndpointParameters.WithDefaults) * [type EndpointResolver](#EndpointResolver) * + [func EndpointResolverFromURL(url string, optFns ...func(*aws.Endpoint)) EndpointResolver](#EndpointResolverFromURL) * [type EndpointResolverFunc](#EndpointResolverFunc) * + [func (fn EndpointResolverFunc) ResolveEndpoint(region string, options EndpointResolverOptions) (endpoint aws.Endpoint, err error)](#EndpointResolverFunc.ResolveEndpoint) * [type EndpointResolverOptions](#EndpointResolverOptions) * [type EndpointResolverV2](#EndpointResolverV2) * + [func NewDefaultEndpointResolverV2() EndpointResolverV2](#NewDefaultEndpointResolverV2) * [type HTTPClient](#HTTPClient) * [type HTTPSignerV4](#HTTPSignerV4) * [type ListMemberAccountsAPIClient](#ListMemberAccountsAPIClient) * [type ListMemberAccountsInput](#ListMemberAccountsInput) * [type ListMemberAccountsOutput](#ListMemberAccountsOutput) * [type ListMemberAccountsPaginator](#ListMemberAccountsPaginator) * + [func NewListMemberAccountsPaginator(client ListMemberAccountsAPIClient, params *ListMemberAccountsInput, ...) *ListMemberAccountsPaginator](#NewListMemberAccountsPaginator) * + [func (p *ListMemberAccountsPaginator) HasMorePages() bool](#ListMemberAccountsPaginator.HasMorePages) + [func (p *ListMemberAccountsPaginator) NextPage(ctx context.Context, optFns ...func(*Options)) (*ListMemberAccountsOutput, error)](#ListMemberAccountsPaginator.NextPage) * [type ListMemberAccountsPaginatorOptions](#ListMemberAccountsPaginatorOptions) * [type ListS3ResourcesAPIClient](#ListS3ResourcesAPIClient) * [type ListS3ResourcesInput](#ListS3ResourcesInput) * [type ListS3ResourcesOutput](#ListS3ResourcesOutput) * [type ListS3ResourcesPaginator](#ListS3ResourcesPaginator) * + [func NewListS3ResourcesPaginator(client ListS3ResourcesAPIClient, params *ListS3ResourcesInput, ...) *ListS3ResourcesPaginator](#NewListS3ResourcesPaginator) * + [func (p *ListS3ResourcesPaginator) HasMorePages() bool](#ListS3ResourcesPaginator.HasMorePages) + [func (p *ListS3ResourcesPaginator) NextPage(ctx context.Context, optFns ...func(*Options)) (*ListS3ResourcesOutput, error)](#ListS3ResourcesPaginator.NextPage) * [type ListS3ResourcesPaginatorOptions](#ListS3ResourcesPaginatorOptions) * [type Options](#Options) * + [func (o Options) Copy() Options](#Options.Copy) * [type ResolveEndpoint](#ResolveEndpoint) * + [func (m *ResolveEndpoint) HandleSerialize(ctx context.Context, in middleware.SerializeInput, ...) 
(out middleware.SerializeOutput, metadata middleware.Metadata, err error)](#ResolveEndpoint.HandleSerialize) + [func (*ResolveEndpoint) ID() string](#ResolveEndpoint.ID) * [type UpdateS3ResourcesInput](#UpdateS3ResourcesInput) * [type UpdateS3ResourcesOutput](#UpdateS3ResourcesOutput) ### Constants [¶](#pkg-constants) ``` const ServiceAPIVersion = "2017-12-19" ``` ``` const ServiceID = "Macie" ``` ### Variables [¶](#pkg-variables) This section is empty. ### Functions [¶](#pkg-functions) #### func [NewDefaultEndpointResolver](https://github.com/aws/aws-sdk-go-v2/blob/service/macie/v1.17.2/service/macie/endpoints.go#L33) [¶](#NewDefaultEndpointResolver) ``` func NewDefaultEndpointResolver() *[internalendpoints](/github.com/aws/aws-sdk-go-v2/service/macie@v1.17.2/internal/endpoints).[Resolver](/github.com/aws/aws-sdk-go-v2/service/macie@v1.17.2/internal/endpoints#Resolver) ``` NewDefaultEndpointResolver constructs a new service endpoint resolver #### func [WithAPIOptions](https://github.com/aws/aws-sdk-go-v2/blob/service/macie/v1.17.2/service/macie/api_client.go#L152) [¶](#WithAPIOptions) added in v1.0.0 ``` func WithAPIOptions(optFns ...func(*[middleware](/github.com/aws/smithy-go/middleware).[Stack](/github.com/aws/smithy-go/middleware#Stack)) [error](/builtin#error)) func(*[Options](#Options)) ``` WithAPIOptions returns a functional option for setting the Client's APIOptions option. #### func [WithEndpointResolver](https://github.com/aws/aws-sdk-go-v2/blob/service/macie/v1.17.2/service/macie/api_client.go#L163) deprecated ``` func WithEndpointResolver(v [EndpointResolver](#EndpointResolver)) func(*[Options](#Options)) ``` Deprecated: EndpointResolver and WithEndpointResolver. Providing a value for this field will likely prevent you from using any endpoint-related service features released after the introduction of EndpointResolverV2 and BaseEndpoint. To migrate an EndpointResolver implementation that uses a custom endpoint, set the client option BaseEndpoint instead. #### func [WithEndpointResolverV2](https://github.com/aws/aws-sdk-go-v2/blob/service/macie/v1.17.2/service/macie/api_client.go#L171) [¶](#WithEndpointResolverV2) added in v1.16.0 ``` func WithEndpointResolverV2(v [EndpointResolverV2](#EndpointResolverV2)) func(*[Options](#Options)) ``` WithEndpointResolverV2 returns a functional option for setting the Client's EndpointResolverV2 option. ### Types [¶](#pkg-types) #### type [AssociateMemberAccountInput](https://github.com/aws/aws-sdk-go-v2/blob/service/macie/v1.17.2/service/macie/api_op_AssociateMemberAccount.go#L35) [¶](#AssociateMemberAccountInput) ``` type AssociateMemberAccountInput struct { // (Discontinued) The ID of the Amazon Web Services account that you want to // associate with Amazon Macie Classic as a member account. // // This member is required. MemberAccountId *[string](/builtin#string) // contains filtered or unexported fields } ``` #### type [AssociateMemberAccountOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/macie/v1.17.2/service/macie/api_op_AssociateMemberAccount.go#L46) [¶](#AssociateMemberAccountOutput) ``` type AssociateMemberAccountOutput struct { // Metadata pertaining to the operation's result. 
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata) // contains filtered or unexported fields } ``` #### type [AssociateS3ResourcesInput](https://github.com/aws/aws-sdk-go-v2/blob/service/macie/v1.17.2/service/macie/api_op_AssociateS3Resources.go#L40) [¶](#AssociateS3ResourcesInput) ``` type AssociateS3ResourcesInput struct { // (Discontinued) The S3 resources that you want to associate with Amazon Macie // Classic for monitoring and data classification. // // This member is required. S3Resources [][types](/github.com/aws/aws-sdk-go-v2/service/macie@v1.17.2/types).[S3ResourceClassification](/github.com/aws/aws-sdk-go-v2/service/macie@v1.17.2/types#S3ResourceClassification) // (Discontinued) The ID of the Amazon Macie Classic member account whose // resources you want to associate with Macie Classic. MemberAccountId *[string](/builtin#string) // contains filtered or unexported fields } ``` #### type [AssociateS3ResourcesOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/macie/v1.17.2/service/macie/api_op_AssociateS3Resources.go#L55) [¶](#AssociateS3ResourcesOutput) ``` type AssociateS3ResourcesOutput struct { // (Discontinued) S3 resources that couldn't be associated with Amazon Macie // Classic. An error code and an error message are provided for each failed item. FailedS3Resources [][types](/github.com/aws/aws-sdk-go-v2/service/macie@v1.17.2/types).[FailedS3Resource](/github.com/aws/aws-sdk-go-v2/service/macie@v1.17.2/types#FailedS3Resource) // Metadata pertaining to the operation's result. ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata) // contains filtered or unexported fields } ``` #### type [Client](https://github.com/aws/aws-sdk-go-v2/blob/service/macie/v1.17.2/service/macie/api_client.go#L29) [¶](#Client) ``` type Client struct { // contains filtered or unexported fields } ``` Client provides the API client to make operations call for Amazon Macie. #### func [New](https://github.com/aws/aws-sdk-go-v2/blob/service/macie/v1.17.2/service/macie/api_client.go#L36) [¶](#New) ``` func New(options [Options](#Options), optFns ...func(*[Options](#Options))) *[Client](#Client) ``` New returns an initialized Client based on the functional options. Provide additional functional options to further configure the behavior of the client, such as changing the client's endpoint or adding custom middleware behavior. #### func [NewFromConfig](https://github.com/aws/aws-sdk-go-v2/blob/service/macie/v1.17.2/service/macie/api_client.go#L280) [¶](#NewFromConfig) ``` func NewFromConfig(cfg [aws](/github.com/aws/aws-sdk-go-v2/aws).[Config](/github.com/aws/aws-sdk-go-v2/aws#Config), optFns ...func(*[Options](#Options))) *[Client](#Client) ``` NewFromConfig returns a new client from the provided config. #### func (*Client) [AssociateMemberAccount](https://github.com/aws/aws-sdk-go-v2/blob/service/macie/v1.17.2/service/macie/api_op_AssociateMemberAccount.go#L20) [¶](#Client.AssociateMemberAccount) ``` func (c *[Client](#Client)) AssociateMemberAccount(ctx [context](/context).[Context](/context#Context), params *[AssociateMemberAccountInput](#AssociateMemberAccountInput), optFns ...func(*[Options](#Options))) (*[AssociateMemberAccountOutput](#AssociateMemberAccountOutput), [error](/builtin#error)) ``` (Discontinued) Associates a specified Amazon Web Services account with Amazon Macie Classic as a member account. 
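A minimal usage sketch (not part of the generated reference; it assumes default credentials and region are configured, and the member account ID is a placeholder):

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/macie"
)

func main() {
	// Load the shared AWS configuration (region, credentials, etc.).
	cfg, err := config.LoadDefaultConfig(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	client := macie.NewFromConfig(cfg)

	// "123456789012" is a placeholder account ID.
	_, err = client.AssociateMemberAccount(context.TODO(), &macie.AssociateMemberAccountInput{
		MemberAccountId: aws.String("123456789012"),
	})
	if err != nil {
		log.Fatal(err)
	}
}
```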
#### func (*Client) [AssociateS3Resources](https://github.com/aws/aws-sdk-go-v2/blob/service/macie/v1.17.2/service/macie/api_op_AssociateS3Resources.go#L25) [¶](#Client.AssociateS3Resources) ``` func (c *[Client](#Client)) AssociateS3Resources(ctx [context](/context).[Context](/context#Context), params *[AssociateS3ResourcesInput](#AssociateS3ResourcesInput), optFns ...func(*[Options](#Options))) (*[AssociateS3ResourcesOutput](#AssociateS3ResourcesOutput), [error](/builtin#error)) ``` (Discontinued) Associates specified S3 resources with Amazon Macie Classic for monitoring and data classification. If memberAccountId isn't specified, the action associates specified S3 resources with Macie Classic for the current Macie Classic administrator account. If memberAccountId is specified, the action associates specified S3 resources with Macie Classic for the specified member account. #### func (*Client) [DisassociateMemberAccount](https://github.com/aws/aws-sdk-go-v2/blob/service/macie/v1.17.2/service/macie/api_op_DisassociateMemberAccount.go#L19) [¶](#Client.DisassociateMemberAccount) ``` func (c *[Client](#Client)) DisassociateMemberAccount(ctx [context](/context).[Context](/context#Context), params *[DisassociateMemberAccountInput](#DisassociateMemberAccountInput), optFns ...func(*[Options](#Options))) (*[DisassociateMemberAccountOutput](#DisassociateMemberAccountOutput), [error](/builtin#error)) ``` (Discontinued) Removes the specified member account from Amazon Macie Classic. #### func (*Client) [DisassociateS3Resources](https://github.com/aws/aws-sdk-go-v2/blob/service/macie/v1.17.2/service/macie/api_op_DisassociateS3Resources.go#L24) [¶](#Client.DisassociateS3Resources) ``` func (c *[Client](#Client)) DisassociateS3Resources(ctx [context](/context).[Context](/context#Context), params *[DisassociateS3ResourcesInput](#DisassociateS3ResourcesInput), optFns ...func(*[Options](#Options))) (*[DisassociateS3ResourcesOutput](#DisassociateS3ResourcesOutput), [error](/builtin#error)) ``` (Discontinued) Removes specified S3 resources from being monitored by Amazon Macie Classic. If memberAccountId isn't specified, the action removes specified S3 resources from Macie Classic for the current Macie Classic administrator account. If memberAccountId is specified, the action removes specified S3 resources from Macie Classic for the specified member account. #### func (*Client) [ListMemberAccounts](https://github.com/aws/aws-sdk-go-v2/blob/service/macie/v1.17.2/service/macie/api_op_ListMemberAccounts.go#L21) [¶](#Client.ListMemberAccounts) ``` func (c *[Client](#Client)) ListMemberAccounts(ctx [context](/context).[Context](/context#Context), params *[ListMemberAccountsInput](#ListMemberAccountsInput), optFns ...func(*[Options](#Options))) (*[ListMemberAccountsOutput](#ListMemberAccountsOutput), [error](/builtin#error)) ``` (Discontinued) Lists all Amazon Macie Classic member accounts for the current Macie Classic administrator account. #### func (*Client) [ListS3Resources](https://github.com/aws/aws-sdk-go-v2/blob/service/macie/v1.17.2/service/macie/api_op_ListS3Resources.go#L24) [¶](#Client.ListS3Resources) ``` func (c *[Client](#Client)) ListS3Resources(ctx [context](/context).[Context](/context#Context), params *[ListS3ResourcesInput](#ListS3ResourcesInput), optFns ...func(*[Options](#Options))) (*[ListS3ResourcesOutput](#ListS3ResourcesOutput), [error](/builtin#error)) ``` (Discontinued) Lists all the S3 resources associated with Amazon Macie Classic. 
If memberAccountId isn't specified, the action lists the S3 resources associated with Macie Classic for the current Macie Classic administrator account. If memberAccountId is specified, the action lists the S3 resources associated with Macie Classic for the specified member account. #### func (*Client) [UpdateS3Resources](https://github.com/aws/aws-sdk-go-v2/blob/service/macie/v1.17.2/service/macie/api_op_UpdateS3Resources.go#L25) [¶](#Client.UpdateS3Resources) ``` func (c *[Client](#Client)) UpdateS3Resources(ctx [context](/context).[Context](/context#Context), params *[UpdateS3ResourcesInput](#UpdateS3ResourcesInput), optFns ...func(*[Options](#Options))) (*[UpdateS3ResourcesOutput](#UpdateS3ResourcesOutput), [error](/builtin#error)) ``` (Discontinued) Updates the classification types for the specified S3 resources. If memberAccountId isn't specified, the action updates the classification types of the S3 resources associated with Amazon Macie Classic for the current Macie Classic administrator account. If memberAccountId is specified, the action updates the classification types of the S3 resources associated with Macie Classic for the specified member account. #### type [DisassociateMemberAccountInput](https://github.com/aws/aws-sdk-go-v2/blob/service/macie/v1.17.2/service/macie/api_op_DisassociateMemberAccount.go#L34) [¶](#DisassociateMemberAccountInput) ``` type DisassociateMemberAccountInput struct { // (Discontinued) The ID of the member account that you want to remove from Amazon // Macie Classic. // // This member is required. MemberAccountId *[string](/builtin#string) // contains filtered or unexported fields } ``` #### type [DisassociateMemberAccountOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/macie/v1.17.2/service/macie/api_op_DisassociateMemberAccount.go#L45) [¶](#DisassociateMemberAccountOutput) ``` type DisassociateMemberAccountOutput struct { // Metadata pertaining to the operation's result. ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata) // contains filtered or unexported fields } ``` #### type [DisassociateS3ResourcesInput](https://github.com/aws/aws-sdk-go-v2/blob/service/macie/v1.17.2/service/macie/api_op_DisassociateS3Resources.go#L39) [¶](#DisassociateS3ResourcesInput) ``` type DisassociateS3ResourcesInput struct { // (Discontinued) The S3 resources (buckets or prefixes) that you want to remove // from being monitored and classified by Amazon Macie Classic. // // This member is required. AssociatedS3Resources [][types](/github.com/aws/aws-sdk-go-v2/service/macie@v1.17.2/types).[S3Resource](/github.com/aws/aws-sdk-go-v2/service/macie@v1.17.2/types#S3Resource) // (Discontinued) The ID of the Amazon Macie Classic member account whose // resources you want to remove from being monitored by Macie Classic. MemberAccountId *[string](/builtin#string) // contains filtered or unexported fields } ``` #### type [DisassociateS3ResourcesOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/macie/v1.17.2/service/macie/api_op_DisassociateS3Resources.go#L54) [¶](#DisassociateS3ResourcesOutput) ``` type DisassociateS3ResourcesOutput struct { // (Discontinued) S3 resources that couldn't be removed from being monitored and // classified by Amazon Macie Classic. An error code and an error message are // provided for each failed item. 
FailedS3Resources [][types](/github.com/aws/aws-sdk-go-v2/service/macie@v1.17.2/types).[FailedS3Resource](/github.com/aws/aws-sdk-go-v2/service/macie@v1.17.2/types#FailedS3Resource) // Metadata pertaining to the operation's result. ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata) // contains filtered or unexported fields } ``` #### type [EndpointParameters](https://github.com/aws/aws-sdk-go-v2/blob/service/macie/v1.17.2/service/macie/endpoints.go#L265) [¶](#EndpointParameters) added in v1.16.0 ``` type EndpointParameters struct { // The AWS region used to dispatch the request. // // Parameter is required. // // AWS::Region Region *[string](/builtin#string) // When true, use the dual-stack endpoint. If the configured endpoint does not // support dual-stack, dispatching the request MAY return an error. // // Defaults to false if no value is provided. // // AWS::UseDualStack UseDualStack *[bool](/builtin#bool) // When true, send this request to the FIPS-compliant regional endpoint. If the // configured endpoint does not have a FIPS compliant endpoint, dispatching the // request will return an error. // // Defaults to false if no value is provided. // // AWS::UseFIPS UseFIPS *[bool](/builtin#bool) // Override the endpoint used to send this request. // // Parameter is required. // // SDK::Endpoint Endpoint *[string](/builtin#string) } ``` EndpointParameters provides the parameters that influence how endpoints are resolved. #### func (EndpointParameters) [ValidateRequired](https://github.com/aws/aws-sdk-go-v2/blob/service/macie/v1.17.2/service/macie/endpoints.go#L303) [¶](#EndpointParameters.ValidateRequired) added in v1.16.0 ``` func (p [EndpointParameters](#EndpointParameters)) ValidateRequired() [error](/builtin#error) ``` ValidateRequired validates that the required parameters are set. #### func (EndpointParameters) [WithDefaults](https://github.com/aws/aws-sdk-go-v2/blob/service/macie/v1.17.2/service/macie/endpoints.go#L317) [¶](#EndpointParameters.WithDefaults) added in v1.16.0 ``` func (p [EndpointParameters](#EndpointParameters)) WithDefaults() [EndpointParameters](#EndpointParameters) ``` WithDefaults returns a shallow copy of EndpointParameters with default values applied to members where applicable. #### type [EndpointResolver](https://github.com/aws/aws-sdk-go-v2/blob/service/macie/v1.17.2/service/macie/endpoints.go#L26) [¶](#EndpointResolver) ``` type EndpointResolver interface { ResolveEndpoint(region [string](/builtin#string), options [EndpointResolverOptions](#EndpointResolverOptions)) ([aws](/github.com/aws/aws-sdk-go-v2/aws).[Endpoint](/github.com/aws/aws-sdk-go-v2/aws#Endpoint), [error](/builtin#error)) } ``` EndpointResolver is the interface for resolving service endpoints. #### func [EndpointResolverFromURL](https://github.com/aws/aws-sdk-go-v2/blob/service/macie/v1.17.2/service/macie/endpoints.go#L51) [¶](#EndpointResolverFromURL) added in v1.1.0 ``` func EndpointResolverFromURL(url [string](/builtin#string), optFns ...func(*[aws](/github.com/aws/aws-sdk-go-v2/aws).[Endpoint](/github.com/aws/aws-sdk-go-v2/aws#Endpoint))) [EndpointResolver](#EndpointResolver) ``` EndpointResolverFromURL returns an EndpointResolver configured using the provided endpoint URL. By default, the resolved endpoint resolver uses the client region as the signing region, and the endpoint source is set to EndpointSourceCustom. You can provide functional options to configure endpoint values for the resolved endpoint.
#### type [EndpointResolverFunc](https://github.com/aws/aws-sdk-go-v2/blob/service/macie/v1.17.2/service/macie/endpoints.go#L40) [¶](#EndpointResolverFunc) ``` type EndpointResolverFunc func(region [string](/builtin#string), options [EndpointResolverOptions](#EndpointResolverOptions)) ([aws](/github.com/aws/aws-sdk-go-v2/aws).[Endpoint](/github.com/aws/aws-sdk-go-v2/aws#Endpoint), [error](/builtin#error)) ``` EndpointResolverFunc is a helper utility that wraps a function so it satisfies the EndpointResolver interface. This is useful when you want to add additional endpoint resolving logic, or stub out specific endpoints with custom values. #### func (EndpointResolverFunc) [ResolveEndpoint](https://github.com/aws/aws-sdk-go-v2/blob/service/macie/v1.17.2/service/macie/endpoints.go#L42) [¶](#EndpointResolverFunc.ResolveEndpoint) ``` func (fn [EndpointResolverFunc](#EndpointResolverFunc)) ResolveEndpoint(region [string](/builtin#string), options [EndpointResolverOptions](#EndpointResolverOptions)) (endpoint [aws](/github.com/aws/aws-sdk-go-v2/aws).[Endpoint](/github.com/aws/aws-sdk-go-v2/aws#Endpoint), err [error](/builtin#error)) ``` #### type [EndpointResolverOptions](https://github.com/aws/aws-sdk-go-v2/blob/service/macie/v1.17.2/service/macie/endpoints.go#L23) [¶](#EndpointResolverOptions) added in v0.29.0 ``` type EndpointResolverOptions = [internalendpoints](/github.com/aws/aws-sdk-go-v2/service/macie@v1.17.2/internal/endpoints).[Options](/github.com/aws/aws-sdk-go-v2/service/macie@v1.17.2/internal/endpoints#Options) ``` EndpointResolverOptions is the service endpoint resolver options #### type [EndpointResolverV2](https://github.com/aws/aws-sdk-go-v2/blob/service/macie/v1.17.2/service/macie/endpoints.go#L329) [¶](#EndpointResolverV2) added in v1.16.0 ``` type EndpointResolverV2 interface { // ResolveEndpoint attempts to resolve the endpoint with the provided options, // returning the endpoint if found. Otherwise an error is returned. ResolveEndpoint(ctx [context](/context).[Context](/context#Context), params [EndpointParameters](#EndpointParameters)) ( [smithyendpoints](/github.com/aws/smithy-go/endpoints).[Endpoint](/github.com/aws/smithy-go/endpoints#Endpoint), [error](/builtin#error), ) } ``` EndpointResolverV2 provides the interface for resolving service endpoints. 
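Given the deprecation note above, a custom endpoint is normally supplied through the BaseEndpoint client option rather than an EndpointResolver. A rough sketch (the helper name and the endpoint URL are placeholders, not part of the SDK):

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/macie"
)

// newClientWithCustomEndpoint is a hypothetical helper: it builds a client
// whose requests are dispatched to the given base endpoint URL.
func newClientWithCustomEndpoint(url string) (*macie.Client, error) {
	cfg, err := config.LoadDefaultConfig(context.TODO())
	if err != nil {
		return nil, err
	}
	return macie.NewFromConfig(cfg, func(o *macie.Options) {
		o.BaseEndpoint = aws.String(url)
	}), nil
}

func main() {
	// "https://macie.example.com" is a placeholder endpoint.
	client, err := newClientWithCustomEndpoint("https://macie.example.com")
	if err != nil {
		log.Fatal(err)
	}
	_ = client
}
```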
#### func [NewDefaultEndpointResolverV2](https://github.com/aws/aws-sdk-go-v2/blob/service/macie/v1.17.2/service/macie/endpoints.go#L340) [¶](#NewDefaultEndpointResolverV2) added in v1.16.0 ``` func NewDefaultEndpointResolverV2() [EndpointResolverV2](#EndpointResolverV2) ``` #### type [HTTPClient](https://github.com/aws/aws-sdk-go-v2/blob/service/macie/v1.17.2/service/macie/api_client.go#L177) [¶](#HTTPClient) ``` type HTTPClient interface { Do(*[http](/net/http).[Request](/net/http#Request)) (*[http](/net/http).[Response](/net/http#Response), [error](/builtin#error)) } ``` #### type [HTTPSignerV4](https://github.com/aws/aws-sdk-go-v2/blob/service/macie/v1.17.2/service/macie/api_client.go#L425) [¶](#HTTPSignerV4) ``` type HTTPSignerV4 interface { SignHTTP(ctx [context](/context).[Context](/context#Context), credentials [aws](/github.com/aws/aws-sdk-go-v2/aws).[Credentials](/github.com/aws/aws-sdk-go-v2/aws#Credentials), r *[http](/net/http).[Request](/net/http#Request), payloadHash [string](/builtin#string), service [string](/builtin#string), region [string](/builtin#string), signingTime [time](/time).[Time](/time#Time), optFns ...func(*[v4](/github.com/aws/aws-sdk-go-v2/aws/signer/v4).[SignerOptions](/github.com/aws/aws-sdk-go-v2/aws/signer/v4#SignerOptions))) [error](/builtin#error) } ``` #### type [ListMemberAccountsAPIClient](https://github.com/aws/aws-sdk-go-v2/blob/service/macie/v1.17.2/service/macie/api_op_ListMemberAccounts.go#L144) [¶](#ListMemberAccountsAPIClient) added in v0.30.0 ``` type ListMemberAccountsAPIClient interface { ListMemberAccounts([context](/context).[Context](/context#Context), *[ListMemberAccountsInput](#ListMemberAccountsInput), ...func(*[Options](#Options))) (*[ListMemberAccountsOutput](#ListMemberAccountsOutput), [error](/builtin#error)) } ``` ListMemberAccountsAPIClient is a client that implements the ListMemberAccounts operation. #### type [ListMemberAccountsInput](https://github.com/aws/aws-sdk-go-v2/blob/service/macie/v1.17.2/service/macie/api_op_ListMemberAccounts.go#L36) [¶](#ListMemberAccountsInput) ``` type ListMemberAccountsInput struct { // (Discontinued) Use this parameter to indicate the maximum number of items that // you want in the response. The default value is 250. MaxResults *[int32](/builtin#int32) // (Discontinued) Use this parameter when paginating results. Set the value of // this parameter to null on your first call to the ListMemberAccounts action. // Subsequent calls to the action fill nextToken in the request with the value of // nextToken from the previous response to continue listing data. NextToken *[string](/builtin#string) // contains filtered or unexported fields } ``` #### type [ListMemberAccountsOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/macie/v1.17.2/service/macie/api_op_ListMemberAccounts.go#L51) [¶](#ListMemberAccountsOutput) ``` type ListMemberAccountsOutput struct { // (Discontinued) A list of the Amazon Macie Classic member accounts returned by // the action. The current Macie Classic administrator account is also included in // this list. MemberAccounts [][types](/github.com/aws/aws-sdk-go-v2/service/macie@v1.17.2/types).[MemberAccount](/github.com/aws/aws-sdk-go-v2/service/macie@v1.17.2/types#MemberAccount) // (Discontinued) When a response is generated, if there is more data to be // listed, this parameter is present in the response and contains the value to use // for the nextToken parameter in a subsequent pagination request. If there is no // more data to be listed, this parameter is set to null. 
NextToken *[string](/builtin#string) // Metadata pertaining to the operation's result. ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata) // contains filtered or unexported fields } ``` #### type [ListMemberAccountsPaginator](https://github.com/aws/aws-sdk-go-v2/blob/service/macie/v1.17.2/service/macie/api_op_ListMemberAccounts.go#L163) [¶](#ListMemberAccountsPaginator) added in v0.30.0 ``` type ListMemberAccountsPaginator struct { // contains filtered or unexported fields } ``` ListMemberAccountsPaginator is a paginator for ListMemberAccounts #### func [NewListMemberAccountsPaginator](https://github.com/aws/aws-sdk-go-v2/blob/service/macie/v1.17.2/service/macie/api_op_ListMemberAccounts.go#L172) [¶](#NewListMemberAccountsPaginator) added in v0.30.0 ``` func NewListMemberAccountsPaginator(client [ListMemberAccountsAPIClient](#ListMemberAccountsAPIClient), params *[ListMemberAccountsInput](#ListMemberAccountsInput), optFns ...func(*[ListMemberAccountsPaginatorOptions](#ListMemberAccountsPaginatorOptions))) *[ListMemberAccountsPaginator](#ListMemberAccountsPaginator) ``` NewListMemberAccountsPaginator returns a new ListMemberAccountsPaginator #### func (*ListMemberAccountsPaginator) [HasMorePages](https://github.com/aws/aws-sdk-go-v2/blob/service/macie/v1.17.2/service/macie/api_op_ListMemberAccounts.go#L196) [¶](#ListMemberAccountsPaginator.HasMorePages) added in v0.30.0 ``` func (p *[ListMemberAccountsPaginator](#ListMemberAccountsPaginator)) HasMorePages() [bool](/builtin#bool) ``` HasMorePages returns a boolean indicating whether more pages are available #### func (*ListMemberAccountsPaginator) [NextPage](https://github.com/aws/aws-sdk-go-v2/blob/service/macie/v1.17.2/service/macie/api_op_ListMemberAccounts.go#L201) [¶](#ListMemberAccountsPaginator.NextPage) added in v0.30.0 ``` func (p *[ListMemberAccountsPaginator](#ListMemberAccountsPaginator)) NextPage(ctx [context](/context).[Context](/context#Context), optFns ...func(*[Options](#Options))) (*[ListMemberAccountsOutput](#ListMemberAccountsOutput), [error](/builtin#error)) ``` NextPage retrieves the next ListMemberAccounts page. #### type [ListMemberAccountsPaginatorOptions](https://github.com/aws/aws-sdk-go-v2/blob/service/macie/v1.17.2/service/macie/api_op_ListMemberAccounts.go#L152) [¶](#ListMemberAccountsPaginatorOptions) added in v0.30.0 ``` type ListMemberAccountsPaginatorOptions struct { // (Discontinued) Use this parameter to indicate the maximum number of items that // you want in the response. The default value is 250. Limit [int32](/builtin#int32) // Set to true if pagination should stop if the service returns a pagination token // that matches the most recent token provided to the service. StopOnDuplicateToken [bool](/builtin#bool) } ``` ListMemberAccountsPaginatorOptions is the paginator options for ListMemberAccounts #### type [ListS3ResourcesAPIClient](https://github.com/aws/aws-sdk-go-v2/blob/service/macie/v1.17.2/service/macie/api_op_ListS3Resources.go#L149) [¶](#ListS3ResourcesAPIClient) added in v0.30.0 ``` type ListS3ResourcesAPIClient interface { ListS3Resources([context](/context).[Context](/context#Context), *[ListS3ResourcesInput](#ListS3ResourcesInput), ...func(*[Options](#Options))) (*[ListS3ResourcesOutput](#ListS3ResourcesOutput), [error](/builtin#error)) } ``` ListS3ResourcesAPIClient is a client that implements the ListS3Resources operation. 
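The paginator types follow the usual SDK v2 pattern of HasMorePages/NextPage. A sketch of draining every ListMemberAccounts page (the helper function is illustrative, not part of the package, and the AccountId field is assumed from this module's types.MemberAccount):

```go
import (
	"context"
	"fmt"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/macie"
)

// listAllMemberAccounts pages through every member account.
func listAllMemberAccounts(ctx context.Context, client *macie.Client) error {
	paginator := macie.NewListMemberAccountsPaginator(client, &macie.ListMemberAccountsInput{
		MaxResults: aws.Int32(50), // page size; the service default is 250
	})
	for paginator.HasMorePages() {
		page, err := paginator.NextPage(ctx)
		if err != nil {
			return err
		}
		for _, account := range page.MemberAccounts {
			// AccountId is assumed from types.MemberAccount.
			fmt.Println(aws.ToString(account.AccountId))
		}
	}
	return nil
}
```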
#### type [ListS3ResourcesInput](https://github.com/aws/aws-sdk-go-v2/blob/service/macie/v1.17.2/service/macie/api_op_ListS3Resources.go#L39) [¶](#ListS3ResourcesInput) ``` type ListS3ResourcesInput struct { // (Discontinued) Use this parameter to indicate the maximum number of items that // you want in the response. The default value is 250. MaxResults *[int32](/builtin#int32) // (Discontinued) The Amazon Macie Classic member account ID whose associated S3 // resources you want to list. MemberAccountId *[string](/builtin#string) // (Discontinued) Use this parameter when paginating results. Set its value to // null on your first call to the ListS3Resources action. Subsequent calls to the // action fill nextToken in the request with the value of nextToken from the // previous response to continue listing data. NextToken *[string](/builtin#string) // contains filtered or unexported fields } ``` #### type [ListS3ResourcesOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/macie/v1.17.2/service/macie/api_op_ListS3Resources.go#L58) [¶](#ListS3ResourcesOutput) ``` type ListS3ResourcesOutput struct { // (Discontinued) When a response is generated, if there is more data to be // listed, this parameter is present in the response and contains the value to use // for the nextToken parameter in a subsequent pagination request. If there is no // more data to be listed, this parameter is set to null. NextToken *[string](/builtin#string) // (Discontinued) A list of the associated S3 resources returned by the action. S3Resources [][types](/github.com/aws/aws-sdk-go-v2/service/macie@v1.17.2/types).[S3ResourceClassification](/github.com/aws/aws-sdk-go-v2/service/macie@v1.17.2/types#S3ResourceClassification) // Metadata pertaining to the operation's result. ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata) // contains filtered or unexported fields } ``` #### type [ListS3ResourcesPaginator](https://github.com/aws/aws-sdk-go-v2/blob/service/macie/v1.17.2/service/macie/api_op_ListS3Resources.go#L167) [¶](#ListS3ResourcesPaginator) added in v0.30.0 ``` type ListS3ResourcesPaginator struct { // contains filtered or unexported fields } ``` ListS3ResourcesPaginator is a paginator for ListS3Resources #### func [NewListS3ResourcesPaginator](https://github.com/aws/aws-sdk-go-v2/blob/service/macie/v1.17.2/service/macie/api_op_ListS3Resources.go#L176) [¶](#NewListS3ResourcesPaginator) added in v0.30.0 ``` func NewListS3ResourcesPaginator(client [ListS3ResourcesAPIClient](#ListS3ResourcesAPIClient), params *[ListS3ResourcesInput](#ListS3ResourcesInput), optFns ...func(*[ListS3ResourcesPaginatorOptions](#ListS3ResourcesPaginatorOptions))) *[ListS3ResourcesPaginator](#ListS3ResourcesPaginator) ``` NewListS3ResourcesPaginator returns a new ListS3ResourcesPaginator #### func (*ListS3ResourcesPaginator) [HasMorePages](https://github.com/aws/aws-sdk-go-v2/blob/service/macie/v1.17.2/service/macie/api_op_ListS3Resources.go#L200) [¶](#ListS3ResourcesPaginator.HasMorePages) added in v0.30.0 ``` func (p *[ListS3ResourcesPaginator](#ListS3ResourcesPaginator)) HasMorePages() [bool](/builtin#bool) ``` HasMorePages returns a boolean indicating whether more pages are available #### func (*ListS3ResourcesPaginator) [NextPage](https://github.com/aws/aws-sdk-go-v2/blob/service/macie/v1.17.2/service/macie/api_op_ListS3Resources.go#L205) [¶](#ListS3ResourcesPaginator.NextPage) added in v0.30.0 ``` func (p *[ListS3ResourcesPaginator](#ListS3ResourcesPaginator)) 
NextPage(ctx [context](/context).[Context](/context#Context), optFns ...func(*[Options](#Options))) (*[ListS3ResourcesOutput](#ListS3ResourcesOutput), [error](/builtin#error)) ``` NextPage retrieves the next ListS3Resources page. #### type [ListS3ResourcesPaginatorOptions](https://github.com/aws/aws-sdk-go-v2/blob/service/macie/v1.17.2/service/macie/api_op_ListS3Resources.go#L156) [¶](#ListS3ResourcesPaginatorOptions) added in v0.30.0 ``` type ListS3ResourcesPaginatorOptions struct { // (Discontinued) Use this parameter to indicate the maximum number of items that // you want in the response. The default value is 250. Limit [int32](/builtin#int32) // Set to true if pagination should stop if the service returns a pagination token // that matches the most recent token provided to the service. StopOnDuplicateToken [bool](/builtin#bool) } ``` ListS3ResourcesPaginatorOptions is the paginator options for ListS3Resources #### type [Options](https://github.com/aws/aws-sdk-go-v2/blob/service/macie/v1.17.2/service/macie/api_client.go#L60) [¶](#Options) ``` type Options struct { // Set of options to modify how an operation is invoked. These apply to all // operations invoked for this client. Use functional options on operation call to // modify this list for per operation behavior. APIOptions []func(*[middleware](/github.com/aws/smithy-go/middleware).[Stack](/github.com/aws/smithy-go/middleware#Stack)) [error](/builtin#error) // The optional application specific identifier appended to the User-Agent header. AppID [string](/builtin#string) // This endpoint will be given as input to an EndpointResolverV2. It is used for // providing a custom base endpoint that is subject to modifications by the // processing EndpointResolverV2. BaseEndpoint *[string](/builtin#string) // Configures the events that will be sent to the configured logger. ClientLogMode [aws](/github.com/aws/aws-sdk-go-v2/aws).[ClientLogMode](/github.com/aws/aws-sdk-go-v2/aws#ClientLogMode) // The credentials object to use when signing requests. Credentials [aws](/github.com/aws/aws-sdk-go-v2/aws).[CredentialsProvider](/github.com/aws/aws-sdk-go-v2/aws#CredentialsProvider) // The configuration DefaultsMode that the SDK should use when constructing the // client's initial default settings. DefaultsMode [aws](/github.com/aws/aws-sdk-go-v2/aws).[DefaultsMode](/github.com/aws/aws-sdk-go-v2/aws#DefaultsMode) // The endpoint options to be used when attempting to resolve an endpoint. EndpointOptions [EndpointResolverOptions](#EndpointResolverOptions) // The service endpoint resolver. // // Deprecated: EndpointResolver and WithEndpointResolver. Providing a // value for this field will likely prevent you from using any endpoint-related // service features released after the introduction of EndpointResolverV2 and // BaseEndpoint. To migrate an EndpointResolver implementation that uses a custom // endpoint, set the client option BaseEndpoint instead. EndpointResolver [EndpointResolver](#EndpointResolver) // Resolves the endpoint used for a particular service. This should be used over // the deprecated EndpointResolver. EndpointResolverV2 [EndpointResolverV2](#EndpointResolverV2) // Signature Version 4 (SigV4) Signer HTTPSignerV4 [HTTPSignerV4](#HTTPSignerV4) // The logger writer interface to write logging messages to. Logger [logging](/github.com/aws/smithy-go/logging).[Logger](/github.com/aws/smithy-go/logging#Logger) // The region to send requests to.
(Required) Region [string](/builtin#string) // RetryMaxAttempts specifies the maximum number of attempts an API client will make // on an operation that fails with a retryable error. A value of 0 is ignored, and // will not be used to configure the API client's default retryer or to modify a // per operation call's retry max attempts. When creating a new API client, this // member will only be used if the Retryer Options member is nil. This value will // be ignored if Retryer is not nil. If specified in an operation call's functional // options with a value that is different from the constructed client's Options, // the Client's Retryer will be wrapped to use the operation's specific // RetryMaxAttempts value. RetryMaxAttempts [int](/builtin#int) // RetryMode specifies the retry mode the API client will be created with, if the // Retryer option is not also specified. When creating a new API client, this // member will only be used if the Retryer Options member is nil. This value will // be ignored if Retryer is not nil. Currently this does not support per operation call // overrides, but may in the future. RetryMode [aws](/github.com/aws/aws-sdk-go-v2/aws).[RetryMode](/github.com/aws/aws-sdk-go-v2/aws#RetryMode) // Retryer guides how HTTP requests should be retried in case of recoverable // failures. When nil, the API client will use a default retryer. The kind of // default retryer created by the API client can be changed with the RetryMode // option. Retryer [aws](/github.com/aws/aws-sdk-go-v2/aws).[Retryer](/github.com/aws/aws-sdk-go-v2/aws#Retryer) // The RuntimeEnvironment configuration, only populated if the DefaultsMode is set // to DefaultsModeAuto and is initialized using config.LoadDefaultConfig. You // should not populate this structure programmatically, or rely on the values here // within your applications. RuntimeEnvironment [aws](/github.com/aws/aws-sdk-go-v2/aws).[RuntimeEnvironment](/github.com/aws/aws-sdk-go-v2/aws#RuntimeEnvironment) // The HTTP client to invoke API calls with. Defaults to the client's default HTTP // implementation if nil. HTTPClient [HTTPClient](#HTTPClient) // contains filtered or unexported fields } ``` #### func (Options) [Copy](https://github.com/aws/aws-sdk-go-v2/blob/service/macie/v1.17.2/service/macie/api_client.go#L182) [¶](#Options.Copy) ``` func (o [Options](#Options)) Copy() [Options](#Options) ``` Copy creates a clone where the APIOptions list is deep copied.
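Per-operation overrides work by passing functional options to the operation call itself; the client applies them to a copy of its Options (hence Copy above), leaving the client defaults untouched. An illustrative sketch (the helper name is hypothetical) raising RetryMaxAttempts for a single call:

```go
import (
	"context"

	"github.com/aws/aws-sdk-go-v2/service/macie"
)

// listWithExtraRetries is a hypothetical helper showing a per-call override.
func listWithExtraRetries(ctx context.Context, client *macie.Client) (*macie.ListMemberAccountsOutput, error) {
	return client.ListMemberAccounts(ctx, &macie.ListMemberAccountsInput{},
		func(o *macie.Options) {
			// Affects this operation call only; the client default is unchanged.
			o.RetryMaxAttempts = 5
		},
	)
}
```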
#### type [ResolveEndpoint](https://github.com/aws/aws-sdk-go-v2/blob/service/macie/v1.17.2/service/macie/endpoints.go#L67) [¶](#ResolveEndpoint) ``` type ResolveEndpoint struct { Resolver [EndpointResolver](#EndpointResolver) Options [EndpointResolverOptions](#EndpointResolverOptions) } ``` #### func (*ResolveEndpoint) [HandleSerialize](https://github.com/aws/aws-sdk-go-v2/blob/service/macie/v1.17.2/service/macie/endpoints.go#L76) [¶](#ResolveEndpoint.HandleSerialize) ``` func (m *[ResolveEndpoint](#ResolveEndpoint)) HandleSerialize(ctx [context](/context).[Context](/context#Context), in [middleware](/github.com/aws/smithy-go/middleware).[SerializeInput](/github.com/aws/smithy-go/middleware#SerializeInput), next [middleware](/github.com/aws/smithy-go/middleware).[SerializeHandler](/github.com/aws/smithy-go/middleware#SerializeHandler)) ( out [middleware](/github.com/aws/smithy-go/middleware).[SerializeOutput](/github.com/aws/smithy-go/middleware#SerializeOutput), metadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata), err [error](/builtin#error), ) ``` #### func (*ResolveEndpoint) [ID](https://github.com/aws/aws-sdk-go-v2/blob/service/macie/v1.17.2/service/macie/endpoints.go#L72) [¶](#ResolveEndpoint.ID) ``` func (*[ResolveEndpoint](#ResolveEndpoint)) ID() [string](/builtin#string) ``` #### type [UpdateS3ResourcesInput](https://github.com/aws/aws-sdk-go-v2/blob/service/macie/v1.17.2/service/macie/api_op_UpdateS3Resources.go#L40) [¶](#UpdateS3ResourcesInput) ``` type UpdateS3ResourcesInput struct { // (Discontinued) The S3 resources whose classification types you want to update. // // This member is required. S3ResourcesUpdate [][types](/github.com/aws/aws-sdk-go-v2/service/macie@v1.17.2/types).[S3ResourceClassificationUpdate](/github.com/aws/aws-sdk-go-v2/service/macie@v1.17.2/types#S3ResourceClassificationUpdate) // (Discontinued) The Amazon Web Services account ID of the Amazon Macie Classic // member account whose S3 resources' classification types you want to update. MemberAccountId *[string](/builtin#string) // contains filtered or unexported fields } ``` #### type [UpdateS3ResourcesOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/macie/v1.17.2/service/macie/api_op_UpdateS3Resources.go#L54) [¶](#UpdateS3ResourcesOutput) ``` type UpdateS3ResourcesOutput struct { // (Discontinued) The S3 resources whose classification types can't be updated. An // error code and an error message are provided for each failed item. FailedS3Resources [][types](/github.com/aws/aws-sdk-go-v2/service/macie@v1.17.2/types).[FailedS3Resource](/github.com/aws/aws-sdk-go-v2/service/macie@v1.17.2/types#FailedS3Resource) // Metadata pertaining to the operation's result. ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata) // contains filtered or unexported fields } ```
Dictator === Dictator is a plug-based authorization mechanism. Dictate what your users can access in fewer than 10 lines of code: ``` # config/config.exs config :dictator, repo: Client.Repo # lib/client_web/controllers/thing_controller.ex defmodule ClientWeb.ThingController do use ClientWeb, :controller plug Dictator # ... end # lib/client_web/policies/thing.ex defmodule ClientWeb.Policies.Thing do alias Client.Context.Thing use Dictator.Policies.BelongsTo, for: Thing end ``` And that's it! Just like that your users can edit, see and delete their own `Thing`s but not `Thing`s belonging to other users. --- * [Installation](#installation) * [Usage](#usage) + [Custom policies](#custom-policies) - [`Dictator.Policies.EctoSchema`](#dictator.policies.ectoschema) - [`Dictator.Policies.BelongsTo`](#dictator.policies.belongsto) + [Plug Options](#plug-options) - [Limiting the actions to be authorized](#limiting-the-actions-to-be-authorized) - [Overriding the policy to be used](#overriding-the-policy-to-be-used) - [Overriding the current user key](#overriding-the-current-user-key) - [Overriding the current user fetch strategy](#overriding-the-current-user-fetch-strategy) + [Configuration Options](#configuration-options) - [Setting a default repo](#setting-a-default-repo) - [Setting a default current user key](#setting-a-default-current-user-key) - [Setting the fetch strategy](#setting-the-fetch-strategy) - [Setting the unauthorized handler](#setting-the-unauthorized-handler) * [Contributing](#contributing) * [Setup](#setup) * [Other Projects](#other-projects) * [About](#about) Installation --- First, add `:dictator` to the list of dependencies in your `mix.exs`: ``` def deps do [{:dictator, "~> 1.1"}] end ``` Usage --- To authorize your users, just add the plug to your controller: ``` defmodule ClientWeb.ThingController do use ClientWeb, :controller plug Dictator # ... end ``` Alternatively, you can also do it at the router level: ``` defmodule ClientWeb.Router do pipeline :authorised do plug Dictator end end ``` That plug will automatically look for a `ClientWeb.Policies.Thing` module, which should `use Dictator.Policy`. It is a simple module that should implement `can?/3`. It receives the current user, the action they are trying to perform, and a map containing the `conn.params`, the resource being accessed and any options passed when `plug`-ing Dictator. In `lib/client_web/policies/thing.ex`: ``` defmodule ClientWeb.Policies.Thing do alias Client.Context.Thing use Dictator.Policies.EctoSchema, for: Thing # Users can edit, update, delete and show their own things def can?(%User{id: user_id}, action, %{resource: %Thing{user_id: user_id}}) when action in [:edit, :update, :delete, :show], do: true # Any user can index, new and create things def can?(_, action, _) when action in [:index, :new, :create], do: true # Users can't do anything else (editing, updating, deleting and showing # things they don't own) def can?(_, _, _), do: false end ``` This exact scenario is, in fact, so common that it already comes bundled as [`Dictator.Policies.BelongsTo`](Dictator.Policies.BelongsTo.html). This is equivalent to the previous definition: ``` defmodule ClientWeb.Policies.Thing do alias Client.Context.Thing use Dictator.Policies.BelongsTo, for: Thing end ``` **IMPORTANT: Dictator assumes you have your current user in your `conn.assigns`.
See our [demo app](https://github.com/subvisual/dictator_demo) for an example of integrating with Guardian.** --- ### Custom Policies Dictator comes bundled with three different types of policies: * **[`Dictator.Policies.EctoSchema`](Dictator.Policies.EctoSchema.html)**: most common behaviour. When you `use` it, Dictator will try to call a `load_resource/1` function, passing the HTTP params. This function is overridable, along with `can?/3`. * **[`Dictator.Policies.BelongsTo`](Dictator.Policies.BelongsTo.html)**: abstraction on top of [`Dictator.Policies.EctoSchema`](Dictator.Policies.EctoSchema.html) for the most common use case: a user can read and write resources they own, while read access is provided to everyone else. This policy makes some assumptions regarding your implementation, all of them highly customisable. * **[`Dictator.Policy`](Dictator.Policy.html)**: most basic policy possible. `use` it if you don't want to load resources from the database (e.g. to check if a user has an `is_admin` field set to `true`). #### Dictator.Policies.EctoSchema Most common behaviour. When you `use` it, Dictator will try to call a `load_resource/1` function, passing the HTTP params. This allows you to access the resource in the third parameter of `can?/3`. The `load_resource/1` function is overridable, along with `can?/3`. Take the following example: ``` defmodule ClientWeb.Policies.Thing do alias Client.Context.Thing use Dictator.Policies.EctoSchema, for: Thing # Users can edit, update, delete and show their own things def can?(%User{id: user_id}, action, %{resource: %Thing{user_id: user_id}}) when action in [:edit, :update, :delete, :show], do: true # Any user can index, new and create things def can?(_, action, _) when action in [:index, :new, :create], do: true # Users can't do anything else (editing, updating, deleting and showing # things they don't own) def can?(_, _, _), do: false end ``` In the example above, Dictator takes care of loading the `Thing` resource through the HTTP params. However, you might want to customise the way the resource is loaded. To do that, you should override the `load_resource/1` function. As an example: ``` defmodule ClientWeb.Policies.Thing do alias Client.Context.Thing use Dictator.Policies.EctoSchema, for: Thing def load_resource(%{"owner_id" => owner_id, "uuid" => uuid}) do ClientWeb.Repo.get_by(Thing, owner_id: owner_id, uuid: uuid) end def can?(_, action, _) when action in [:index, :show, :new, :create], do: true def can?(%{id: owner_id}, action, %{resource: %Thing{owner_id: owner_id}}) when action in [:edit, :update, :delete], do: true def can?(_user, _action, _params), do: false end ``` The following custom options are available: * **`key`**: defaults to `:id`, primary key of the resource being accessed. * **`repo`**: overrides the repo set by the config. #### Dictator.Policies.BelongsTo Policy definition commonly used in typical `belongs_to` associations. It is an abstraction on top of [`Dictator.Policies.EctoSchema`](Dictator.Policies.EctoSchema.html). This policy assumes users can read (`:show`, `:index`, `:new`, `:create`) any information but only write (`:edit`, `:update`, `:delete`) their own. As an example, in a typical Twitter-like application, a user `has_many` posts and a post `belongs_to` a user.
You can define a policy to let users manage their own posts but read all others by doing the following: ``` defmodule MyAppWeb.Policies.Post do alias MyApp.{Post, User} use Dictator.Policies.EctoSchema, for: Post def can?(_, action, _) when action in [:index, :show, :new, :create], do: true def can?(%User{id: id}, action, %{resource: %Post{user_id: id}}) when action in [:edit, :update, :delete], do: true def can?(_, _, _), do: false end ``` This scenario is so common that it is abstracted completely through this module and you can simply `use Dictator.Policies.BelongsTo, for: Post` to make use of it. The following example is equivalent to the previous one: ``` defmodule MyAppWeb.Policies.Post do use Dictator.Policies.BelongsTo, for: MyApp.Post end ``` The assumptions made are that: * your resource has a `user_id` foreign key (you can change this with the `:foreign_key` option) * your user has an `id` primary key (you can change this with the `:owner_key` option) If your user has a `uuid` primary key and the post identifies the user through a `:poster_id` foreign key, you can do the following: ``` defmodule MyAppWeb.Policies.Post do use Dictator.Policies.BelongsTo, for: MyApp.Post, foreign_key: :poster_id, owner_key: :uuid end ``` The `key` and `repo` options supported by [`Dictator.Policies.EctoSchema`](Dictator.Policies.EctoSchema.html) are also supported by [`Dictator.Policies.BelongsTo`](Dictator.Policies.BelongsTo.html). ### Plug Options `plug Dictator` supports the following options: * **only/except:** (optional) - actions subject to authorization. * **policy:** (optional, inferred by default) - policy to be used. * **key:** (optional, default: `:current_user`) - key to use in `conn.assigns` to load the currently logged in resource. * **fetch_strategy:** (optional, default: [`Dictator.FetchStrategies.Assigns`](Dictator.FetchStrategies.Assigns.html)) - strategy used to fetch the current user. #### Limiting the actions to be authorized If you want to limit authorization to only a few actions, you can use the `:only` or `:except` options when calling the plug in your controller: ``` defmodule ClientWeb.ThingController do use ClientWeb, :controller plug Dictator, only: [:create, :update, :delete] # plug Dictator, except: [:show, :index, :new, :edit] # ... end ``` In both cases, all other actions will not go through the authorization plug and the policy will only be enforced for the `create`, `update` and `delete` actions. #### Overriding the policy to be used By default, the plug will automatically infer the policy to be used: for `MyWebApp.UserController`, a `MyWebApp.Policies.User` policy would be used. However, by using the `:policy` option, that can be overridden: ``` defmodule ClientWeb.ThingController do use ClientWeb, :controller plug Dictator, policy: MyPolicy # ... end ``` #### Overriding the current user key By default, the plug will automatically search for a `current_user` in the `conn.assigns`. You can change this behaviour by using the `key` option in the `plug` call. This will override the `key` option set in `config.exs`. ``` defmodule ClientWeb.ThingController do use ClientWeb, :controller plug Dictator, key: :current_organization # ... end ``` #### Overriding the current user fetch strategy By default, the plug will assume you want to search for the key set in the previous option in the `conn.assigns`. However, you may have it set in the session or want to use a custom strategy. You can change this behaviour by using the `fetch_strategy` option in the `plug` call. This will override the `fetch_strategy` option set in `config.exs`.
There are two strategies available by default: * [`Dictator.FetchStrategies.Assigns`](Dictator.FetchStrategies.Assigns.html) - fetches the given key from `conn.assigns` * [`Dictator.FetchStrategies.Session`](Dictator.FetchStrategies.Session.html) - fetches the given key from the session ``` defmodule ClientWeb.ThingController do use ClientWeb, :controller plug Dictator, fetch_strategy: Dictator.FetchStrategies.Session # ... end ``` ### Configuration Options Dictator supports the following options, to be placed in `config/config.exs`: * **repo** - default repo to be used by [`Dictator.Policies.EctoSchema`](Dictator.Policies.EctoSchema.html). If not set, you need to define what repo to use in the policy through the `:repo` option. * **key** (optional, defaults to `:current_user`) - key to be used to find the current user in `conn.assigns`. * **fetch_strategy** (optional, defaults to [`Dictator.FetchStrategies.Assigns`](Dictator.FetchStrategies.Assigns.html)) - strategy used to fetch the current user. * **unauthorized_handler** (optional, default: [`Dictator.UnauthorizedHandlers.Default`](Dictator.UnauthorizedHandlers.Default.html)) - module called to handle unauthorised requests. #### Setting a default repo [`Dictator.Policies.EctoSchema`](Dictator.Policies.EctoSchema.html) requires a repo to be set to load resources from. It is recommended that you set it in `config/config.exs`: ``` config :dictator, repo: Client.Repo ``` If not configured, it must be provided in each policy. The `repo` option when `use`-ing the policy takes precedence, so you can also set a custom repo for certain resources: ``` defmodule ClientWeb.Policies.Thing do alias Client.Context.Thing alias Client.FunkyRepoForThings use Dictator.Policies.BelongsTo, for: Thing, repo: FunkyRepoForThings end ``` #### Setting a default current user key By default, the plug will automatically search for a `current_user` in the `conn.assigns`. The default value is `:current_user` but this can be overridden by changing the config: ``` config :dictator, key: :current_company ``` The value set by the `key` option when plugging Dictator overrides this one. #### Setting the fetch strategy By default, the plug will assume you want to search for the key set in the previous option in the `conn.assigns`. However, you may have it set in the session or want to use a custom strategy. You can change this behaviour across the whole application by setting the `fetch_strategy` key in the config. There are two strategies available by default: * [`Dictator.FetchStrategies.Assigns`](Dictator.FetchStrategies.Assigns.html) - fetches the given key from `conn.assigns` * [`Dictator.FetchStrategies.Session`](Dictator.FetchStrategies.Session.html) - fetches the given key from the session ``` config :dictator, fetch_strategy: Dictator.FetchStrategies.Session ``` The value set by the `fetch_strategy` option when plugging Dictator overrides this one. #### Setting the unauthorized handler When a user does not have access to a given resource, an unauthorized handler is called. By default this is [`Dictator.UnauthorizedHandlers.Default`](Dictator.UnauthorizedHandlers.Default.html), which sends a simple 401 with the body set to `"you are not authorized to do that"`. You can also make use of the JSON API compatible [`Dictator.UnauthorizedHandlers.JsonApi`](Dictator.UnauthorizedHandlers.JsonApi.html) or provide your own: ``` config :dictator, unauthorized_handler: MyUnauthorizedHandler ``` Contributing --- Feel free to contribute. If you find a bug, open an issue. You can also open a PR for bugs or new features. Your PRs will be reviewed and subject to our style guide and linters.
All contributions **must** follow the [Code of Conduct](https://github.com/subvisual/dictator/blob/master/CODE_OF_CONDUCT.md) and [Subvisual's guides](https://github.com/subvisual/guides). Setup --- To clone and set up the repo: ``` git clone git@github.com:subvisual/dictator.git cd dictator bin/setup ``` And everything should automatically be installed for you. To run the development server: ``` bin/server ``` Other projects --- Not your cup of tea? 🍵 Here are some other Elixir alternatives we like: * [@schrockwell/bodyguard](https://github.com/schrockwell/bodyguard) * [@jarednorman/canada](https://github.com/jarednorman/canada) * [@cpjk/canary](https://github.com/cpjk/canary) * [@boydm/policy_wonk](https://github.com/boydm/policy_wonk) About --- [`Dictator`](Dictator.html) is maintained by [Subvisual](http://subvisual.com). [![Subvisual logo](https://raw.githubusercontent.com/subvisual/guides/master/github/templates/subvisual_logo_with_name.png)](https://<EMAIL>) Dictator === Plug that checks if your users are authorised to access the resource. You can use it at the router or controller level: ``` # lib/my_app_web/controllers/post_controller.ex defmodule MyApp.PostController do plug Dictator def show(conn, params) do # ... end end # lib/my_app_web/router.ex defmodule MyAppWeb.Router do pipeline :authorised do plug Dictator end end ``` Requires Phoenix (or at least `conn.private[:phoenix_action]` to be set). To load resources from the database, requires Ecto. See [`Dictator.Policies.EctoSchema`](Dictator.Policies.EctoSchema.html). Dictator assumes your policies are in `lib/my_app_web/policies/` and follow the `MyAppWeb.Policies.Name` naming convention. As an example, for posts, `MyAppWeb.Policies.Post` would be defined in `lib/my_app_web/policies/post.ex`. It is also assumed the current user is loaded and available on `conn.assigns`. By default, it is assumed to be under `conn.assigns[:current_user]`, although this option can be overridden. Plug Options --- Options that you can pass to the module when plugging it (e.g. `plug Dictator, only: [:create, :update]`). None of the following options are required. * `only`: limits the actions to perform authorisation on to the provided list. * `except`: excludes the provided actions from authorisation. * `policy`: policy to apply. See above to understand how policies are inferred. * `key`: key under which the current user is placed in `conn.assigns` or the session. Defaults to `:current_user`. * `fetch_strategy`: strategy to be used to get the current user. Can be either [`Dictator.FetchStrategies.Assigns`](Dictator.FetchStrategies.Assigns.html) to fetch it from `conn.assigns` or [`Dictator.FetchStrategies.Session`](Dictator.FetchStrategies.Session.html) to fetch it from the session. You can also implement your own strategy and pass it in this option or set it in the config (a sketch of a custom strategy appears at the end of this document). Defaults to [`Dictator.FetchStrategies.Assigns`](Dictator.FetchStrategies.Assigns.html). Configuration options --- Options that you can place in your `config/*.exs` files. * `key`: Same as the `:key` parameter in the plug option section. The plug option takes precedence, meaning you can place it in a config and then override it in specific controllers or pipelines. * `unauthorized_handler`: Handler to be called when the user is not authorised to access the resource. Defaults to [`Dictator.UnauthorizedHandlers.Default`](Dictator.UnauthorizedHandlers.Default.html).
Summary === Functions --- [get(key, default \\ nil)](#get/2) Dictator.FetchStrategies.Assigns === Dictator.FetchStrategies.Session === Dictator.FetchStrategy behaviour === Summary === Callbacks --- [fetch(arg1, any)](#c:fetch/2) Dictator.Policies.BelongsTo === Policy definition commonly used in typical `belongs_to` associations. This policy assumes users can read (`:show`, `:index`, `:new`, `:create`) any information but only write (`:edit`, `:update`, `:delete`) their own. As an example, in a typical Twitter-like application, a user `has_many` posts and a post `belongs_to` a user. You can define a policy to let users manage their own posts but read all others by doing the following: ``` defmodule MyAppWeb.Policies.Post do alias MyApp.{Post, User} use Dictator.Policies.EctoSchema, for: Post def can?(_, action, _) when action in [:index, :show, :new, :create], do: true def can?(%User{id: id}, action, %{resource: %Post{user_id: id}}) when action in [:edit, :update, :delete], do: true def can?(_, _, _), do: false end ``` This scenario is so common that it is abstracted completely through this module and you can simply `use Dictator.Policies.BelongsTo, for: Post` to make use of it. The following example is equivalent to the previous one: ``` defmodule MyAppWeb.Policies.Post do use Dictator.Policies.BelongsTo, for: MyApp.Post end ``` Allowed Options --- All options available in [`Dictator.Policies.EctoSchema`](Dictator.Policies.EctoSchema.html) plus the following: * `foreign_key`: foreign key of the current user in the resource being accessed. If a Post belongs to a User, this option would typically be `:user_id`. Defaults to `:user_id`. * `owner_key`: primary key of the current user. Defaults to `:id`. Examples --- Assuming a typical `User` schema, with an `:id` primary key, and a typical `Post` schema, with a `belongs_to` association to a `User`: ``` # lib/my_app_web/policies/post.ex defmodule MyAppWeb.Policies.Post do use Dictator.Policies.BelongsTo, for: MyApp.Post end ``` If, however, the user has a `uuid` primary key and the post has an `admin_id` key instead of the typical `user_id`, you should do the following: ``` # lib/my_app_web/policies/post.ex defmodule MyAppWeb.Policies.Post do use Dictator.Policies.BelongsTo, for: MyApp.Post, owner_key: :uuid, foreign_key: :admin_id end ``` Dictator.Policies.EctoSchema behaviour === Policy definition with resource loading. Requires Ecto. By default, Dictator does not fetch the resource being accessed. As an example, if the user is trying to `GET /posts/1`, no post is actually loaded, unless your policy `use`s [`Dictator.Policies.EctoSchema`](#content). By doing so, the third parameter in the `can?/3` function includes the resource being accessed under the `resource` key. When `use`-ing [`Dictator.Policies.EctoSchema`](#content), the following options are available: * `for` (required): schema to be loaded, e.g. `MyApp.Content.Post` * `repo`: [`Ecto.Repo`](https://hexdocs.pm/ecto/3.4.4/Ecto.Repo.html) to be used. Can also be provided through a configuration option. * `key`: resource identifier. Defaults to `:id`. If you want your resource to be fetched through a different key (e.g. `uuid`), use this option. Beware that, unless [`load_resource/1`](#c:load_resource/1) is overridden, there needs to be a match between the `key` value and the parameter used.
If you want to fetch your resource through a `uuid` attribute, there needs to be a corresponding `"uuid"` parameter. See [Callback Overrides](#module-callback-overrides) for alternatives to loading resources from the database. Configuration Options --- Options that you can place in your `config/*.exs` files. * `repo`: Same as the `:repo` parameter in the above section. The `use` option takes precedence, meaning you can place a global repo in your config and then override it in specific policies. Callback Overrides --- By default, two callbacks are defined: `c:can?/3` and [`load_resource/1`](#c:load_resource/1). The former defaults to `false`, meaning **you should always override it to correctly define your policy**. The latter attempts to load the resource with a given `:key` (see the allowed parameters), assuming an equivalent string `"key"` is available in the HTTP parameters. This means that if you have a `Post` schema which is identified by an `id`, then you don't need to override it, provided all routes refer to the post using an `"id"` parameter: ``` # lib/my_app_web/router.ex resources "/posts", PostController # lib/my_app_web/policies/post.ex defmodule MyAppWeb.Policies.Post do use Dictator.Policies.EctoSchema, for: MyApp.Post # override can?/3 here # ... end ``` If, instead, you use `uuid` to identify posts, you should do the following: ``` # lib/my_app_web/router.ex resources "/posts", PostController, param: "uuid" # lib/my_app_web/policies/post.ex defmodule MyAppWeb.Policies.Post do use Dictator.Policies.EctoSchema, for: MyApp.Post, key: :uuid # override can?/3 here # ... end ``` If, however, you use a mixture of both, you should override `c:load_resource/1`. This example assumes the primary key for your `Post` is `uuid` but the routes use `id`. ``` # lib/my_app_web/router.ex resources "/posts", PostController # lib/my_app_web/policies/post.ex defmodule MyAppWeb.Policies.Post do use Dictator.Policies.EctoSchema, for: MyApp.Post def load_resource(params) do MyApp.Repo.get_by(MyApp.Post, uuid: params["id"]) end # override can?/3 here # ... end ``` [Link to this section](#summary) Summary === [Functions](#functions) --- [default_repo()](#default_repo/0) Fetches the [`Ecto.Repo`](https://hexdocs.pm/ecto/3.4.4/Ecto.Repo.html) from the config. Intended for internal use. [Callbacks](#callbacks) --- [load_resource(map)](#c:load_resource/1) Overridable callback to load from the database the resource being accessed. [Link to this section](#functions) Functions === [Link to this section](#callbacks) Callbacks === Dictator.Policy behaviour === Policy behaviour definition. If your Policy requires the resource to be loaded (e.g. if you want a `Post` to be loaded when users are trying to `GET "/posts/1"`), `use Dictator.Policies.EctoSchema` instead. The most basic policies need only implement the [`can?/3`](#c:can?/3) callback. [Link to this section](#summary) Summary === [Callbacks](#callbacks) --- [can?(arg1, atom, map)](#c:can?/3) Callback invoked to check if the current user is authorised to perform a given action. [Link to this section](#callbacks) Callbacks === Dictator.UnauthorizedHandlers.Default === Basic unauthorized handler to be called if none is provided. When a user is denied access to a resource, an unauthorized handler is called. This is the most basic definition. Simply returns `401 UNAUTHORIZED` with the text "you are not authorized to do that". No content type or any other header is provided. Dictator.UnauthorizedHandlers.JsonApi === JSON API compatible unauthorized handler.
Configure your app to use this handler instead of [`Dictator.UnauthorizedHandlers.Default`](Dictator.UnauthorizedHandlers.Default.html) by setting your `config/*.exs` to: ``` config :dictator, unauthorized_handler: Dictator.UnauthorizedHandlers.JsonApi ``` This handler sets the `content-type` header to `application/json` and sends an empty body with the 401 status as the response.
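Putting the pieces together, a minimal sketch of a `config/config.exs` combining the documented configuration options (the values are illustrative; the keys mirror the options described in the sections above):

```elixir
# config/config.exs — illustrative combination of documented options
config :dictator,
  key: :current_user,
  fetch_strategy: Dictator.FetchStrategies.Session,
  unauthorized_handler: Dictator.UnauthorizedHandlers.JsonApi
```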
aglm
cran
R
Package ‘aglm’ June 9, 2021 Type Package Title Accurate Generalized Linear Model Version 0.4.0 Description Provides functions to fit Accurate Generalized Linear Model (AGLM) models, visualize them, and predict for new data. AGLM is defined as a regularized GLM which applies a sort of feature transformations using a discretization of numerical features and specific coding methodologies of dummy variables. For more information on AGLM, see <NAME>, <NAME>, <NAME> and <NAME> (2020) <https://www.institutdesactuaires.com/global/gene/link.php?doc_id=16273&fg=1>. URL https://github.com/kkondo1981/aglm BugReports https://github.com/kkondo1981/aglm/issues License GPL-2 Encoding UTF-8 Language en-US RoxygenNote 7.1.1 Roxygen list(markdown = TRUE) Depends R (>= 4.0.0), Imports glmnet (>= 4.0.2), assertthat, methods, mathjaxr Suggests testthat, knitr, rmarkdown, MASS, faraway RdMacros mathjaxr R topics documented: aglm-package, AccurateGLM-class, aglm, AGLM_Input-class, coef.AccurateGLM, createEqualFreqBins, createEqualWidthBins, cv.aglm, cva.aglm, CVA_AccurateGLM-class, deviance.AccurateGLM, executeBinning, getLVarMatForOneVec, getODummyMatForOneVec, getUDummyMatForOneVec, plot.AccurateGLM, predict.AccurateGLM, print.AccurateGLM, residuals.AccurateGLM aglm-package aglm: Accurate Generalized Linear Model Description Provides functions to fit Accurate Generalized Linear Model (AGLM) models, visualize them, and predict for new data. AGLM is defined as a regularized GLM which applies a sort of feature transformations using a discretization of numerical features and specific coding methodologies of dummy variables. For more information on AGLM, see <NAME>, <NAME>, <NAME> and <NAME> (2020). Details The collection of functions provided by the aglm package has almost the same structure as the famous glmnet package, so users familiar with the glmnet package will be able to handle it easily. In fact, this structure is reasonable in implementation, because what the aglm package does is to apply appropriate transformations to the given data and pass it to the glmnet package as a backend. Fitting functions The aglm package provides three different fitting functions, depending on how users want to handle the hyper-parameters of AGLM models.
Because AGLM is based on regularized GLM, the regularization term of the loss function can be expressed as follows: \[ R(\{\beta_{jk}\}; \lambda, \alpha) = \lambda \left\{ (1 - \alpha) \sum_{j=1}^{p} \sum_{k=1}^{m_j} |\beta_{jk}|^2 + \alpha \sum_{j=1}^{p} \sum_{k=1}^{m_j} |\beta_{jk}| \right\}, \] where \(\beta_{jk}\) is the k-th coefficient of the auxiliary variables for the j-th column in the data, \(\alpha\) is a weight which controls how the L1 and L2 regularization terms are mixed, and \(\lambda\) determines the strength of the regularization. Searching hyper-parameters α and λ is often useful to get better results, but usually time-consuming. That’s why the aglm package provides three fitting functions with different strategies for specifying hyper-parameters as follows: • aglm: A basic fitting function with given α and λ(s). • cv.aglm: A fitting function with given α and cross-validation for λ. • cva.aglm: A fitting function with cross-validation for both α and λ. Generally speaking, setting an appropriate λ is often important to get meaningful results, and using cv.aglm() with the default α = 1 (LASSO) is usually enough. Since cva.aglm() is much more time-consuming than cv.aglm(), it is better to use it only if particularly better results are needed. The following S4 classes are defined to store results of the fitting functions. • AccurateGLM-class: A class for results of aglm() and cv.aglm() • CVA_AccurateGLM-class: A class for results of cva.aglm() Using the fitted model Users can use models obtained from fitting functions in various ways, by passing them to the following functions: • predict: Make predictions for new data • plot: Plot contribution of each variable and residuals • print: Display textual information of the model • coef: Get coefficients • deviance: Get deviance • residuals: Get residuals of various types We emphasize that plot() is particularly useful to understand the fitted model, because it presents a visual representation of how variables in the original data are used by the model. Other functions The following functions are basically for internal use, but exported as utility functions for convenience. • Functions for creating feature vectors – getUDummyMatForOneVec – getODummyMatForOneVec – getLVarMatForOneVec • Functions for binning – createEqualWidthBins – createEqualFreqBins – executeBinning Author(s) • <NAME>, • <NAME> and <NAME> (worked on L-Variable related features) References <NAME>, <NAME>, <NAME> and <NAME>. (2020) AGLM: A Hybrid Modeling Method of GLM and Data Science Techniques, https://www.institutdesactuaires.com/global/gene/link.php?doc_id=16273&fg=1 Actuarial Colloquium Paris 2020 AccurateGLM-class Class for results of aglm() and cv.aglm() Description Class for results of aglm() and cv.aglm() Slots backend_models The fitted backend glmnet model is stored. vars_info A list, each of whose elements holds the information of one variable. lambda Same as in the result of cv.glmnet. cvm Same as in the result of cv.glmnet. cvsd Same as in the result of cv.glmnet. cvup Same as in the result of cv.glmnet. cvlo Same as in the result of cv.glmnet. nzero Same as in the result of cv.glmnet. name Same as in the result of cv.glmnet. lambda.min Same as in the result of cv.glmnet. lambda.1se Same as in the result of cv.glmnet. fit.preval Same as in the result of cv.glmnet. foldid Same as in the result of cv.glmnet. call An object of class call, corresponding to the function call when this AccurateGLM object is created. Author(s) <NAME> aglm Fit an AGLM model with no cross-validation Description A basic fitting function with given α and λ(s). See aglm-package for more details on α and λ.
Usage aglm( x, y, qualitative_vars_UD_only = NULL, qualitative_vars_both = NULL, qualitative_vars_OD_only = NULL, quantitative_vars = NULL, use_LVar = FALSE, extrapolation = "default", add_linear_columns = TRUE, add_OD_columns_of_qualitatives = TRUE, add_interaction_columns = FALSE, OD_type_of_quantitatives = "C", nbin.max = NULL, bins_list = NULL, bins_names = NULL, family = c("gaussian", "binomial", "poisson"), ... ) Arguments x A design matrix. Usually a data.frame object is expected, but a matrix object is fine if all columns are of the same class. Each column may have one of the following classes, and aglm will automatically determine how to handle it: • numeric: interpreted as a quantitative variable. aglm performs discretization by binning, and creates dummy variables suitable for ordered values (named O-dummies/L-variables). • factor (unordered) or logical: interpreted as a qualitative variable without order. aglm creates dummy variables suitable for unordered values (named U-dummies). • ordered: interpreted as a qualitative variable with order. aglm creates both O-dummies and U-dummies. These dummy variables are added to x and form a larger matrix, which is used internally as the actual design matrix. See our paper for more details on O-dummies, U-dummies, and L-variables. If you need to change the default behavior, use the following options: qualitative_vars_UD_only, qualitative_vars_both, qualitative_vars_OD_only, and quantitative_vars. y A response variable. qualitative_vars_UD_only Used to change the default behavior of aglm for given variables. Variables specified by this parameter are considered as qualitative variables and only U-dummies are created as auxiliary columns. This parameter may have one of the following classes: • integer: specifying variables by index. • character: specifying variables by name. qualitative_vars_both Same as qualitative_vars_UD_only, except that both O-dummies and U-dummies are created for specified variables. qualitative_vars_OD_only Same as qualitative_vars_UD_only, except that only O-dummies are created for specified variables. quantitative_vars Same as qualitative_vars_UD_only, except that specified variables are considered as quantitative variables. use_LVar Set to TRUE to use L-variables. By default, aglm uses O-dummies as the representation of a quantitative variable. If use_LVar=TRUE, L-variables are used instead. extrapolation Used to control the values of linear combinations for quantitative variables outside the range where the data exists. By default, values of a linear combination outside the data are extended based on the slope of the edges of the region where the data exists. You can set extrapolation="flat" to get constant values outside the data instead. add_linear_columns By default, for quantitative variables, aglm expands them by adding dummies and the original columns, i.e. the linear effects, are retained in the resulting model. You can set add_linear_columns=FALSE to drop linear effects. add_OD_columns_of_qualitatives Set to FALSE if you do not want to use O-dummies for qualitative variables with order (usually, columns with ordered class). add_interaction_columns If this parameter is set to TRUE, aglm creates an additional auxiliary variable x_i * x_j for each pair (x_i, x_j) of variables. OD_type_of_quantitatives Used to control the shape of linear combinations obtained by O-dummies for quantitative variables (deprecated).
nbin.max An integer representing the maximum number of bins when aglm performs binning for quantitative variables. bins_list Used to set custom bins for variables with O-dummies. bins_names Used to set custom bins for variables with O-dummies. family A family object or a string representing the type of the error distribution. Currently aglm supports gaussian, binomial, and poisson. ... Other arguments are passed directly when calling glmnet(). Value A model object fitted to the data. Functions such as predict and plot can be applied to the returned object. See AccurateGLM-class for more details. Author(s) • <NAME>, • <NAME> and <NAME> (worked on L-Variable related features) References <NAME>, <NAME>, <NAME> and <NAME>. (2020) AGLM: A Hybrid Modeling Method of GLM and Data Science Techniques, https://www.institutdesactuaires.com/global/gene/link.php?doc_id=16273&fg=1 Actuarial Colloquium Paris 2020 Examples #################### Gaussian case #################### library(MASS) # For Boston library(aglm) ## Read data xy <- Boston # xy is a data.frame to be processed. colnames(xy)[ncol(xy)] <- "y" # Let medv be the objective variable, y. ## Split data into train and test n <- nrow(xy) # Sample size. set.seed(2018) # For reproducibility. test.id <- sample(n, round(n/4)) # ID numbers for test data. test <- xy[test.id,] # test is the data.frame for testing. train <- xy[-test.id,] # train is the data.frame for training. x <- train[-ncol(xy)] y <- train$y newx <- test[-ncol(xy)] y_true <- test$y ## Fit the model model <- aglm(x, y) # alpha=1 (the default value) ## Predict for various alpha and lambda lambda <- 0.1 y_pred <- predict(model, newx=newx, s=lambda) rmse <- sqrt(mean((y_true - y_pred)^2)) cat(sprintf("RMSE for lambda=%.2f: %.5f \n\n", lambda, rmse)) lambda <- 1.0 y_pred <- predict(model, newx=newx, s=lambda) rmse <- sqrt(mean((y_true - y_pred)^2)) cat(sprintf("RMSE for lambda=%.2f: %.5f \n\n", lambda, rmse)) alpha <- 0 model <- aglm(x, y, alpha=alpha) lambda <- 0.1 y_pred <- predict(model, newx=newx, s=lambda) rmse <- sqrt(mean((y_true - y_pred)^2)) cat(sprintf("RMSE for alpha=%.2f and lambda=%.2f: %.5f \n\n", alpha, lambda, rmse)) #################### Binomial case #################### library(aglm) library(faraway) ## Read data xy <- nes96 ## Split data into train and test n <- nrow(xy) # Sample size. set.seed(2018) # For reproducibility. test.id <- sample(n, round(n/5)) # ID numbers for test data. test <- xy[test.id,] # test is the data.frame for testing. train <- xy[-test.id,] # train is the data.frame for training.
x <- train[, c("popul", "TVnews", "selfLR", "ClinLR", "DoleLR", "PID", "age", "educ", "income")] y <- train$vote newx <- test[, c("popul", "TVnews", "selfLR", "ClinLR", "DoleLR", "PID", "age", "educ", "income")] ## Fit the model model <- aglm(x, y, family="binomial") ## Make the confusion matrix lambda <- 0.1 y_true <- test$vote y_pred <- levels(y_true)[as.integer(predict(model, newx, s=lambda, type="class"))] print(table(y_true, y_pred)) #################### use_LVar and extrapolation #################### library(MASS) # For Boston library(aglm) ## Randomly created train and test data set.seed(2021) sd <- 0.2 x <- 2 * runif(1000) + 1 f <- function(x){x^3 - 6 * x^2 + 13 * x} y <- f(x) + rnorm(1000, sd = sd) xy <- data.frame(x=x, y=y) x_test <- seq(0.75, 3.25, length.out=101) y_test <- f(x_test) + rnorm(101, sd=sd) xy_test <- data.frame(x=x_test, y=y_test) ## Plot nbin.max <- 10 models <- c(cv.aglm(x, y, use_LVar=FALSE, extrapolation="default", nbin.max=nbin.max), cv.aglm(x, y, use_LVar=FALSE, extrapolation="flat", nbin.max=nbin.max), cv.aglm(x, y, use_LVar=TRUE, extrapolation="default", nbin.max=nbin.max), cv.aglm(x, y, use_LVar=TRUE, extrapolation="flat", nbin.max=nbin.max)) titles <- c("O-Dummies with extrapolation=\"default\"", "O-Dummies with extrapolation=\"flat\"", "L-Variables with extrapolation=\"default\"", "L-Variables with extrapolation=\"flat\"") par.old <- par(mfrow=c(2, 2)) for (i in 1:4) { model <- models[[i]] title <- titles[[i]] pred <- predict(model, newx=x_test, s=model@lambda.min, type="response") plot(x_test, y_test, pch=20, col="grey", main=title) lines(x_test, f(x_test), lty="dashed", lwd=2) # the theoretical line lines(x_test, pred, col="blue", lwd=3) # the smoothed line by the model } par(par.old) AGLM_Input-class S4 class for input Description S4 class for input Slots vars_info A list, each of whose elements holds the information of one variable. data The original data. coef.AccurateGLM Get coefficients Description Get coefficients Usage ## S3 method for class 'AccurateGLM' coef(object, index = NULL, name = NULL, s = NULL, exact = FALSE, ...) Arguments object A model object obtained from aglm() or cv.aglm(). index An integer value representing the index of the variable whose coefficients are required. name A string representing the name of the variable whose coefficients are required. Note that if both index and name are set, index is discarded. s Same as in coef.glmnet. exact Same as in coef.glmnet. ... Other arguments are passed directly to coef.glmnet(). Value If index or name is given, the function returns a list with one or a combination of the following fields, consisting of coefficients related to the specified variable. • coef.linear: A coefficient of the linear term. (If any) • coef.OD: Coefficients of O-dummies. (If any) • coef.UD: Coefficients of U-dummies. (If any) • coef.LV: Coefficients of L-variables. (If any) If neither index nor name is given, the function returns the entire set of coefficients corresponding to the internal design matrix. Author(s) <NAME> createEqualFreqBins Create bins (equal frequency binning) Description Create bins (equal frequency binning) Usage createEqualFreqBins(x_vec, nbin.max) Arguments x_vec A numeric vector, whose quantiles are used as breaks. nbin.max The maximum number of bins. Value A numeric vector representing breaks obtained by binning. Note that the number of bins is equal to min(nbin.max, length(x_vec)).
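A short usage sketch of createEqualFreqBins() based on the signature above (the exact break values depend on the quantiles of the random input, so any output is illustrative only):

```r
library(aglm)

set.seed(1)
x <- rexp(1000)
# Breaks are taken from the quantiles of x; at most nbin.max bins result.
breaks <- createEqualFreqBins(x, nbin.max = 10)
print(breaks)
```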
Author(s) <NAME> createEqualWidthBins Create bins (equal width binning) Description Create bins (equal width binning) Usage createEqualWidthBins(left, right, nbin) Arguments left The leftmost value of the interval to be binned. right The rightmost value of the interval to be binned. nbin The number of bins. Value A numeric vector representing breaks obtained by binning. Author(s) <NAME> cv.aglm Fit an AGLM model with cross-validation for λ Description A fitting function with given α and cross-validation for λ. See aglm-package for more details on α and λ. Usage cv.aglm( x, y, qualitative_vars_UD_only = NULL, qualitative_vars_both = NULL, qualitative_vars_OD_only = NULL, quantitative_vars = NULL, use_LVar = FALSE, extrapolation = "default", add_linear_columns = TRUE, add_OD_columns_of_qualitatives = TRUE, add_interaction_columns = FALSE, OD_type_of_quantitatives = "C", nbin.max = NULL, bins_list = NULL, bins_names = NULL, family = c("gaussian", "binomial", "poisson"), keep = FALSE, ... ) Arguments x A design matrix. See aglm for more details. y A response variable. qualitative_vars_UD_only Same as in aglm. qualitative_vars_both Same as in aglm. qualitative_vars_OD_only Same as in aglm. quantitative_vars Same as in aglm. use_LVar Same as in aglm. extrapolation Same as in aglm. add_linear_columns Same as in aglm. add_OD_columns_of_qualitatives Same as in aglm. add_interaction_columns Same as in aglm. OD_type_of_quantitatives Same as in aglm. nbin.max Same as in aglm. bins_list Same as in aglm. bins_names Same as in aglm. family Same as in aglm. keep Set to TRUE if you need the fit.preval field in the returned value, as in cv.glmnet(). ... Other arguments are passed directly when calling cv.glmnet(). Value A model object fitted to the data with cross-validation results. Functions such as predict and plot can be applied to the returned object, same as the result of aglm(). See AccurateGLM-class for more details. Author(s) • <NAME>, • <NAME> and <NAME> (worked on L-Variable related features) References <NAME>, <NAME>, <NAME> and <NAME>. (2020) AGLM: A Hybrid Modeling Method of GLM and Data Science Techniques, https://www.institutdesactuaires.com/global/gene/link.php?doc_id=16273&fg=1 Actuarial Colloquium Paris 2020 Examples #################### Cross-validation for lambda #################### library(aglm) library(faraway) ## Read data xy <- nes96 ## Split data into train and test n <- nrow(xy) # Sample size. set.seed(2018) # For reproducibility. test.id <- sample(n, round(n/5)) # ID numbers for test data. test <- xy[test.id,] # test is the data.frame for testing. train <- xy[-test.id,] # train is the data.frame for training. x <- train[, c("popul", "TVnews", "selfLR", "ClinLR", "DoleLR", "PID", "age", "educ", "income")] y <- train$vote newx <- test[, c("popul", "TVnews", "selfLR", "ClinLR", "DoleLR", "PID", "age", "educ", "income")] # NOTE: The code below will take considerable time, so run it when you have time. ## Fit the model model <- cv.aglm(x, y, family="binomial") ## Make the confusion matrix lambda <- model@lambda.min y_true <- test$vote y_pred <- levels(y_true)[as.integer(predict(model, newx, s=lambda, type="class"))] cat(sprintf("Confusion matrix for lambda=%.5f:\n", lambda)) print(table(y_true, y_pred)) cva.aglm Fit an AGLM model with cross-validation for both α and λ Description A fitting function with cross-validation for both α and λ. See aglm-package for more details on α and λ.
Usage cva.aglm( x, y, alpha = seq(0, 1, len = 11)^3, nfolds = 10, foldid = NULL, parallel.alpha = FALSE, ... ) Arguments x A design matrix. See aglm for more details. y A response variable. alpha A numeric vector representing α values to be examined in cross-validation. nfolds An integer value representing the number of folds. foldid An integer vector with the same length as the observations. Each element should take a value from 1 to nfolds, identifying the fold to which it belongs. parallel.alpha (not used yet) ... Other arguments are passed directly to cv.aglm(). Value An object storing fitted models and information from cross-validation. See CVA_AccurateGLM-class for more details. Author(s) • <NAME>, • <NAME> and <NAME> (worked on L-Variable related features) References <NAME>, <NAME>, <NAME> and <NAME>. (2020) AGLM: A Hybrid Modeling Method of GLM and Data Science Techniques, https://www.institutdesactuaires.com/global/gene/link.php?doc_id=16273&fg=1 Actuarial Colloquium Paris 2020 Examples #################### Cross-validation for alpha and lambda #################### library(aglm) library(faraway) ## Read data xy <- nes96 ## Split data into train and test n <- nrow(xy) # Sample size. set.seed(2018) # For reproducibility. test.id <- sample(n, round(n/5)) # ID numbers for test data. test <- xy[test.id,] # test is the data.frame for testing. train <- xy[-test.id,] # train is the data.frame for training. x <- train[, c("popul", "TVnews", "selfLR", "ClinLR", "DoleLR", "PID", "age", "educ", "income")] y <- train$vote newx <- test[, c("popul", "TVnews", "selfLR", "ClinLR", "DoleLR", "PID", "age", "educ", "income")] # NOTE: The code below will take considerable time, so run it when you have time. ## Fit the model cva_result <- cva.aglm(x, y, family="binomial") alpha <- cva_result@alpha.min lambda <- cva_result@lambda.min mod_idx <- cva_result@alpha.min.index model <- cva_result@models_list[[mod_idx]] ## Make the confusion matrix y_true <- test$vote y_pred <- levels(y_true)[as.integer(predict(model, newx, s=lambda, type="class"))] cat(sprintf("Confusion matrix for alpha=%.5f and lambda=%.5f:\n", alpha, lambda)) print(table(y_true, y_pred)) CVA_AccurateGLM-class Class for results of cva.aglm() Description Class for results of cva.aglm() Slots models_list A list consisting of cv.glmnet()’s results for all α values. alpha Same as in cv.aglm. nfolds Same as in cv.aglm. alpha.min.index The index of alpha.min in the vector alpha. alpha.min The α value achieving the minimum loss among all the values of alpha. lambda.min The λ value achieving the minimum loss when α is equal to alpha.min. call An object of class call, corresponding to the function call when this CVA_AccurateGLM object is created. Author(s) <NAME> deviance.AccurateGLM Get deviance Description Get deviance Usage ## S3 method for class 'AccurateGLM' deviance(object, ...) Arguments object A model object obtained from aglm() or cv.aglm(). ... Other arguments are passed directly to deviance.glmnet(). Value The value of deviance extracted from the object object. Author(s) <NAME> executeBinning Binning the data into given bins. Description Binning the data into given bins. Usage executeBinning(x_vec, breaks = NULL, nbin.max = 100, method = "freq") Arguments x_vec The data to be binned. breaks A numeric vector representing breaks of bins (If NULL, automatically generated). nbin.max The maximum number of bins (used only if breaks=NULL). method "freq" for equal frequency binning or "width" for equal width binning (used only if breaks=NULL).
Value A list with the following fields: • labels: An integer vector with the same length as x_vec, where labels[i]==k means the i-th element of x_vec is in the k-th bin. • breaks: Breaks of bins used for binning. Author(s) <NAME> getLVarMatForOneVec Create an L-variable matrix for one variable Description Create an L-variable matrix for one variable Usage getLVarMatForOneVec(x_vec, breaks = NULL, nbin.max = 100, only_info = FALSE) Arguments x_vec A numeric vector representing the original variable. breaks A numeric vector representing breaks of bins (If NULL, automatically generated). nbin.max The maximum number of bins (used only if breaks=NULL). only_info If TRUE, only information fields of returned values are filled and no dummy matrix is returned. Value A list with the following fields: • breaks: Same as input • dummy_mat: The created L-variable matrix (only if only_info=FALSE). Author(s) <NAME> getODummyMatForOneVec Create an O-dummy matrix for one variable Description Create an O-dummy matrix for one variable Usage getODummyMatForOneVec( x_vec, breaks = NULL, nbin.max = 100, only_info = FALSE, dummy_type = NULL ) Arguments x_vec A numeric vector representing the original variable. breaks A numeric vector representing breaks of bins (If NULL, automatically generated). nbin.max The maximum number of bins (used only if breaks=NULL). only_info If TRUE, only information fields of returned values are filled and no dummy matrix is returned. dummy_type Used to control the shape of linear combinations obtained by O-dummies for quantitative variables (deprecated). Value A list with the following fields: • breaks: Same as input • dummy_mat: The created O-dummy matrix (only if only_info=FALSE). Author(s) <NAME> getUDummyMatForOneVec Create a U-dummy matrix for one variable Description Create a U-dummy matrix for one variable Usage getUDummyMatForOneVec( x_vec, levels = NULL, drop_last = TRUE, only_info = FALSE ) Arguments x_vec A vector representing the original variable. The class of x_vec should be one of integer, character, or factor. levels A character vector representing values of x_vec used to create U-dummies. If NULL, all the unique values of x_vec are used to create dummies. drop_last If TRUE, the last column of the resulting matrix is dropped to avoid multicollinearity. only_info If TRUE, only information fields of returned values are filled and no dummy matrix is returned. Value A list with the following fields: • levels: Same as input. • drop_last: Same as input. • dummy_mat: The created U-dummy matrix (only if only_info=FALSE). Author(s) <NAME> plot.AccurateGLM Plot contribution of each variable and residuals Description Plot contribution of each variable and residuals Usage ## S3 method for class 'AccurateGLM' plot( x, vars = NULL, verbose = TRUE, s = NULL, resid = FALSE, smooth_resid = TRUE, smooth_resid_fun = NULL, ask = TRUE, layout = c(2, 2), only_plot = FALSE, main = "", add_rug = FALSE, ... ) Arguments x A model object obtained from aglm() or cv.aglm(). vars Used to specify variables to be plotted (NULL means all the variables). This parameter may have one of the following classes: • integer: specifying variables by index. • character: specifying variables by name. verbose Set to FALSE if textual outputs are not needed. s A numeric value specifying λ at which plotting is required. Note that plotting for multiple λ’s is not allowed and s should always be a single value. When the model is trained with only a single λ value, just set it to NULL to plot for that value.
resid Used to display residuals in plots. This parameter may have one of the following classes: • logical (single value): If TRUE, working residuals are plotted. • character (single value): the type of residual to be plotted. See residuals.AccurateGLM for more details on types of residuals. • numeric (vector): residual values to be plotted. smooth_resid Used to display smoothing lines of residuals for quantitative variables. This parameter may have one of the following classes: • logical: If TRUE, smoothing lines are drawn. • character: – smooth_resid="both": Balls and smoothing lines are drawn. – smooth_resid="smooth_only": Only smoothing lines are drawn. smooth_resid_fun Set if users need custom smoothing functions. ask By default, plot() stops and waits for input each time plotting for each variable is completed. Users can set ask=FALSE to avoid this. It is useful, for example, when using devices such as bmp to create image files. layout Plotting multiple variables on each page is allowed. To achieve this, set it to a pair of integers indicating the number of rows and columns, respectively. only_plot Set to TRUE if no automatic graphical configurations are needed. main Used to specify the title of the plot. add_rug Set to TRUE for rug plots. ... Other arguments are currently not used and just discarded. Value No return value, called for side effects. Author(s) • <NAME>, • <NAME> and <NAME> (worked on L-Variable related features) References <NAME>, <NAME>, <NAME> and <NAME>. (2020) AGLM: A Hybrid Modeling Method of GLM and Data Science Techniques, https://www.institutdesactuaires.com/global/gene/link.php?doc_id=16273&fg=1 Actuarial Colloquium Paris 2020 Examples #################### using plot() and predict() #################### library(MASS) # For Boston library(aglm) ## Read data xy <- Boston # xy is a data.frame to be processed. colnames(xy)[ncol(xy)] <- "y" # Let medv be the objective variable, y. ## Split data into train and test n <- nrow(xy) # Sample size. set.seed(2018) # For reproducibility. test.id <- sample(n, round(n/4)) # ID numbers for test data. test <- xy[test.id,] # test is the data.frame for testing. train <- xy[-test.id,] # train is the data.frame for training. x <- train[-ncol(xy)] y <- train$y newx <- test[-ncol(xy)] y_true <- test$y ## With the result of aglm() model <- aglm(x, y) lambda <- 0.1 plot(model, s=lambda, resid=TRUE, add_rug=TRUE, verbose=FALSE, layout=c(3, 3)) y_pred <- predict(model, newx=newx, s=lambda) plot(y_true, y_pred) ## With the result of cv.aglm() model <- cv.aglm(x, y) lambda <- model@lambda.min plot(model, s=lambda, resid=TRUE, add_rug=TRUE, verbose=FALSE, layout=c(3, 3)) y_pred <- predict(model, newx=newx, s=lambda) plot(y_true, y_pred) predict.AccurateGLM Make predictions for new data Description Make predictions for new data Usage ## S3 method for class 'AccurateGLM' predict( object, newx = NULL, s = NULL, type = c("link", "response", "coefficients", "nonzero", "class"), exact = FALSE, newoffset, ... ) Arguments object A model object obtained from aglm() or cv.aglm(). newx A design matrix for new data. See the description of x in aglm for more details. s Same as in predict.glmnet. type Same as in predict.glmnet. exact Same as in predict.glmnet. newoffset Same as in predict.glmnet. ... Other arguments are passed directly when calling predict.glmnet(). Value The returned object depends on type. See predict.glmnet for more details.
Author(s) • <NAME>, • <NAME> and <NAME> (worked on L-Variable related features) References <NAME>, <NAME>, <NAME> and <NAME>. (2020) AGLM: A Hybrid Modeling Method of GLM and Data Science Techniques, https://www.institutdesactuaires.com/global/gene/link.php?doc_id=16273&fg=1 Actuarial Colloquium Paris 2020 Examples #################### using plot() and predict() #################### library(MASS) # For Boston library(aglm) ## Read data xy <- Boston # xy is a data.frame to be processed. colnames(xy)[ncol(xy)] <- "y" # Let medv be the objective variable, y. ## Split data into train and test n <- nrow(xy) # Sample size. set.seed(2018) # For reproducibility. test.id <- sample(n, round(n/4)) # ID numbers for test data. test <- xy[test.id,] # test is the data.frame for testing. train <- xy[-test.id,] # train is the data.frame for training. x <- train[-ncol(xy)] y <- train$y newx <- test[-ncol(xy)] y_true <- test$y ## With the result of aglm() model <- aglm(x, y) lambda <- 0.1 plot(model, s=lambda, resid=TRUE, add_rug=TRUE, verbose=FALSE, layout=c(3, 3)) y_pred <- predict(model, newx=newx, s=lambda) plot(y_true, y_pred) ## With the result of cv.aglm() model <- cv.aglm(x, y) lambda <- model@lambda.min plot(model, s=lambda, resid=TRUE, add_rug=TRUE, verbose=FALSE, layout=c(3, 3)) y_pred <- predict(model, newx=newx, s=lambda) plot(y_true, y_pred) print.AccurateGLM Display textual information of the model Description Display textual information of the model Usage ## S3 method for class 'AccurateGLM' print(x, digits = max(3, getOption("digits") - 3), ...) Arguments x A model object obtained from aglm() or cv.aglm(). digits Used to control significant digits in printout. ... Other arguments are passed directly to print.glmnet(). Value No return value, called for side effects. Author(s) <NAME> residuals.AccurateGLM Get residuals of various types Description Get residuals of various types Usage ## S3 method for class 'AccurateGLM' residuals( object, x = NULL, y = NULL, offset = NULL, weights = NULL, type = c("working", "pearson", "deviance"), s = NULL, ... ) Arguments object A model object obtained from aglm() or cv.aglm(). x A design matrix. If not given, the x used for fitting is used. y A response variable. If not given, the y used for fitting is used. offset Offset values. If not given, the offset used for fitting is used. weights Sample weights. If not given, the weights used for fitting are used. type A string representing the type of residuals: • "working": get working residuals \( r_i^W = (y_i - \mu_i) \left. \frac{\partial \eta}{\partial \mu} \right|_{\mu = \mu_i} \), where \(y_i\) is a response value, \(\mu_i\) is the GLM mean, and \(\eta = g(\mu)\) with the link function \(g\). • "pearson": get Pearson residuals \( r_i^P = \frac{y_i - \mu_i}{\sqrt{V(\mu_i)}} \), where \(V\) is the variance function. • "deviance": get deviance residuals \( r_i^D = \operatorname{sign}(y_i - \mu_i) \sqrt{d_i} \), where \(d_i\) is the contribution to deviance. s A numeric value specifying λ at which residuals are calculated. ... Other arguments are currently not used and just discarded. Value A numeric vector representing calculated residuals. Author(s) <NAME>
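To see the three documented residual types side by side, a small sketch reusing `model` from the cv.aglm() example above (illustrative only; not run):

```r
## Sketch: extract each documented residual type from a cross-validated fit.
lambda <- model@lambda.min
r_work <- residuals(model, type = "working", s = lambda)
r_pear <- residuals(model, type = "pearson", s = lambda)
r_dev  <- residuals(model, type = "deviance", s = lambda)
summary(r_dev)  # quick look at the deviance residuals
```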
github.com/reactiveops/vpa-analysis
go
Go
README [¶](#section-readme) --- ### VPA Analysis A tool to quickly summarize suggestions for resource requests in your Kubernetes cluster. #### How? By using the Kubernetes vertical-pod-autoscaler in recommendation mode, we can see a suggestion for resource requests on each of our apps. This tool just creates a bunch of VPAs and then queries them for information. #### Requirements * kubectl * vertical-pod-autoscaler configured in the cluster * some deployments with pods * metrics-server (a requirement of vpa) * golang 1.11+ #### Usage ``` Usage: vpa-analysis [flags] vpa-analysis [command] Available Commands: create-vpas Create VPAs help Help about any command summary Genarate a summary of the vpa recommendations in a namespace. version Prints the current version of the tool. Flags: --alsologtostderr log to standard error as well as files -h, --help help for vpa-analysis --kubeconfig string Kubeconfig location. [KUBECONFIG] (default "$HOME/.kube/config") --log_backtrace_at traceLocation when logging hits line file:N, emit a stack trace (default :0) --log_dir string If non-empty, write log files in this directory --logtostderr log to standard error instead of files (default false) -n, --namespace string Namespace to install the VPA objects in. (default "default") --stderrthreshold severity logs at or above this threshold go to stderr (default 2) -v, --v Level log level for V logs --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging Use "vpa-analysis [command] --help" for more information about a command. ``` ##### create-vpas `vpa-analysis create-vpas -n some-namespace` This will search for any deployments in the given namespace and generate a VPA for each of them. Each VPA will be labelled for use by this tool. ##### summary `vpa-analysis summary -n some-namespace` Queries all the VPA objects that are labelled for this tool and summarizes their suggestions. ##### Development Look at the Makefile. It works sometimes. Documentation [¶](#section-documentation) --- There is no documentation for this package.
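For context on the "How?" section above: recommendation mode simply means a VPA object whose update mode is off, so the recommender computes suggestions without evicting pods. A minimal sketch of such an object (the `apiVersion` depends on your vertical-pod-autoscaler installation, and the names are illustrative; this is not necessarily the exact manifest the tool emits):

```yaml
apiVersion: autoscaling.k8s.io/v1beta2   # version depends on your VPA install
kind: VerticalPodAutoscaler
metadata:
  name: my-app            # illustrative name
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  updatePolicy:
    updateMode: "Off"     # recommendation mode: compute suggestions, never evict
```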
DZNEmptyDataSet
cocoapods
Objective-C
dznemptydataset === ## Description The DZNEmptyDataSet library is a powerful and flexible solution for managing empty datasets in your iOS applications. It provides an elegant way to display placeholder views whenever there is no content to present. This library is easy to use and highly customizable, allowing developers to create a seamless user experience when dealing with empty screens. ## Features - **Flexible Placeholder Views:** DZNEmptyDataSet allows you to create custom placeholder views to replace empty screens in your app. These views can be tailored to match your app's design and provide a better user experience. - **Integration with UITableView and UICollectionView:** You can seamlessly integrate DZNEmptyDataSet with UITableView and UICollectionView to handle empty dataset scenarios. The library automatically detects when your data source is empty and displays the appropriate placeholder view. - **Delegate and Data Source Methods:** DZNEmptyDataSet provides a set of delegate and data source methods that allow you to customize the appearance and behavior of the empty dataset. You can easily configure styling and add actions to the placeholder views. - **Interactive Tapping and Scrolling:** DZNEmptyDataSet allows users to interact with the placeholder views by tapping or swiping them. This allows you to provide additional functionality or navigation options directly from the empty dataset screen. - **Localization Support:** The library supports localization, allowing you to provide translated content for different languages. You can display localized empty dataset messages based on the user's language preferences. - **Easy Setup and Installation:** DZNEmptyDataSet is designed to be easy to integrate into your existing projects. With just a few lines of code, you can start using the library to handle empty datasets in your app. ## Installation To use DZNEmptyDataSet in your iOS project, follow these steps: 1. Install the DZNEmptyDataSet library using one of the following methods: - **CocoaPods**: Add `pod 'DZNEmptyDataSet'` to your Podfile and run `pod install` in your terminal. - **Carthage**: Add `github "dzenbot/DZNEmptyDataSet"` to your Cartfile and run `carthage update`. 2. Import the DZNEmptyDataSet module into your project: ```swift import DZNEmptyDataSet ``` 3. Configure DZNEmptyDataSet in your view controller by implementing the `DZNEmptyDataSetSource` and `DZNEmptyDataSetDelegate` protocols: ```swift class ViewController: UIViewController, DZNEmptyDataSetSource, DZNEmptyDataSetDelegate { // ... } ``` 4. Implement the required delegate and data source methods to customize the appearance and behavior of the empty dataset: ```swift extension ViewController { func image(forEmptyDataSet scrollView: UIScrollView) -> UIImage? { // Return a custom image for the empty dataset } func title(forEmptyDataSet scrollView: UIScrollView) -> NSAttributedString? { // Return a custom title for the empty dataset } func description(forEmptyDataSet scrollView: UIScrollView) -> NSAttributedString? { // Return a custom description for the empty dataset } // Implement other delegate and data source methods as needed } ``` 5. Assign the `emptyDataSetSource` and `emptyDataSetDelegate` properties of your UITableView or UICollectionView to the current view controller instance: ```swift tableView.emptyDataSetSource = self tableView.emptyDataSetDelegate = self ``` 6. Enjoy using DZNEmptyDataSet to manage empty datasets in your iOS app!
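To make step 4 concrete, here is a minimal sketch of filled-in data source methods. The strings, font, and image name are placeholders, and the Swift signatures follow the wrapper style shown in the steps above; verify them against the library's actual headers before relying on them:

```swift
import UIKit
import DZNEmptyDataSet

extension ViewController {
    func image(forEmptyDataSet scrollView: UIScrollView) -> UIImage? {
        // Placeholder asset name — replace with an image from your bundle.
        return UIImage(named: "empty_placeholder")
    }

    func title(forEmptyDataSet scrollView: UIScrollView) -> NSAttributedString? {
        let attributes: [NSAttributedString.Key: Any] = [
            .font: UIFont.boldSystemFont(ofSize: 18.0)
        ]
        return NSAttributedString(string: "Nothing Here Yet", attributes: attributes)
    }

    func description(forEmptyDataSet scrollView: UIScrollView) -> NSAttributedString? {
        return NSAttributedString(string: "Pull to refresh or add a new item.")
    }
}
```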
## Customization DZNEmptyDataSet provides various customization options to tailor the appearance and behavior of the empty dataset placeholder views. These options can be configured within the `DZNEmptyDataSetSource` and `DZNEmptyDataSetDelegate` methods. - **Custom Images:** You can set a custom image to be displayed along with the placeholder view using the `image(forEmptyDataSet:)` method. - **Attributed Text:** Customize the text attributes of the title and description labels in the placeholder view using the `title(forEmptyDataSet:)` and `description(forEmptyDataSet:)` methods. You can set different text colors, fonts, alignments, etc. - **Background Color and Transparency:** Define a custom background color for the empty dataset view by overriding the `backgroundColor(forEmptyDataSet:)` method. You can also control the view's transparency using the `alpha(forEmptyDataSet:)` method. - **Button Configuration:** Add buttons to the placeholder view and define their appearance and actions using the `button(forEmptyDataSet:)`, `buttonTitle(forEmptyDataSet:)`, and `emptyDataSet(_:didTapButton:)` methods. - **Scaling and Insets:** Adjust the scaling behavior and insets of the empty dataset view using the `verticalOffset(forEmptyDataSet:)`, `spaceHeight(forEmptyDataSet:)`, and `customView(forEmptyDataSet:)` methods. - **Scrolling Behavior:** Customize the scrolling behavior when interacting with the empty dataset view using the `emptyDataSetShouldFadeIn(_:)`, `emptyDataSetWillAppear(_:)`, `emptyDataSetDidAppear(_:)`, `emptyDataSetShouldAnimateImageView(_:)`, and `emptyDataSet(_:didTapView:)` methods. Refer to the official documentation of DZNEmptyDataSet for more detailed information on customization options and available methods. ## Conclusion DZNEmptyDataSet is a versatile library for managing empty datasets in your iOS app. With its flexible customization options and seamless integration with UITableView and UICollectionView, you can easily enhance the user experience by displaying placeholder views whenever there is no content to present. Integrate DZNEmptyDataSet into your project today and provide a polished, user-friendly experience for empty screens in your iOS app!
docstring
cran
R
Package ‘docstring’ October 13, 2022 Type Package Title Provides Docstring Capabilities to R Functions Version 1.0.0 Date 2017-03-16 Author <NAME> [aut, cre], <NAME> [ctb] Maintainer <NAME> <<EMAIL>> Imports roxygen2, utils, tools Suggests devtools, rstudioapi, knitr, rmarkdown BugReports https://github.com/dasonk/docstring/issues?state=open Description Provides the ability to display something analogous to Python's docstrings within R. By allowing the user to document their functions as comments at the beginning of their function, without requiring the function to be put into a package, we allow more users to easily provide documentation for their functions. The documentation can be viewed just like any other help files for functions provided by packages as well. License GPL-2 URL https://github.com/dasonk/docstring RoxygenNote 6.0.1 VignetteBuilder knitr NeedsCompilation no Repository CRAN Date/Publication 2017-03-24 19:07:24 UTC R topics documented: docstring docstring Display a docstring Description Display a docstring using R's built-in help file viewer. Usage docstring(fun, fun_name = as.character(substitute(fun)), rstudio_pane = getOption("docstring_rstudio_help_pane"), default_title = "Title not detected") Arguments fun The function that has the docstring you would like to display fun_name The name of the function. rstudio_pane logical. If running in RStudio, do you want the help to show in the help pane? This defaults to TRUE but can be explicitly set using options("docstring_rstudio_help_pane" = TRUE) or options("docstring_rstudio_help_pane" = FALSE) default_title The title you would like to display if no title is detected in the docstring itself. Examples ## Not run: square <- function(x){ #' Square a number #' #' Calculates the square of the input #' #' @param x the input to be squared return(x^2) } docstring(square) ?square mypaste <- function(x, y = "!"){ #' Paste two items #' #' @description This function pastes two items #' together. #' #' By using the description tag you'll notice that I #' can have multiple paragraphs in the description section #' #' @param x character. The first item to paste #' @param y character. The second item to paste. Defaults to "!" but #' "?" would be pretty great too #' @usage mypaste(x, y) #' @return The inputs pasted together as a character string. #' @details The inputs can be anything that can be input into #' the paste function. #' @note And here is a note. Isn't it nice? #' @section I Must Warn You: #' The reference provided is a good read. #' \subsection{Other warning}{ #' It is completely irrelevant to this function though. #' } #' #' @references <NAME>. (2001). The visual display of #' quantitative information. Cheshire, Conn: Graphics Press. #' @examples #' mypaste(1, 3) #' mypaste("hey", "you") #' mypaste("single param") #' @export #' @importFrom base paste return(paste(x, y)) } ?mypaste ## End(Not run)
gen_queue
hex
Erlang
GenQueue === [![Build Status](https://travis-ci.org/nsweeting/gen_queue.svg?branch=master)](https://travis-ci.org/nsweeting/gen_queue) [![GenQueue Version](https://img.shields.io/hexpm/v/gen_queue.svg)](https://hex.pm/packages/gen_queue) GenQueue is a specification for queues. This project currently provides the following functionality: * [`GenQueue`](GenQueue.html) ([docs](https://hexdocs.pm/gen_queue/GenQueue.html)) - a behaviour for queues * [`GenQueue.Adapter`](GenQueue.Adapter.html) ([docs](https://hexdocs.pm/gen_queue/GenQueue.Adapter.html)) - a behaviour for implementing adapters for a [`GenQueue`](GenQueue.html) * [`GenQueue.JobAdapter`](GenQueue.JobAdapter.html) ([docs](https://hexdocs.pm/gen_queue/GenQueue.JobAdapter.html)) - a behaviour for implementing job-based adapters for a [`GenQueue`](GenQueue.html) * [`GenQueue.Job`](GenQueue.Job.html) ([docs](https://hexdocs.pm/gen_queue/GenQueue.Job.html)) - a struct for containing job-enqueuing instructions Installation --- The package can be installed by adding `gen_queue` to your list of dependencies in `mix.exs`: ``` def deps do [ {:gen_queue, "~> 0.1.8"} ] end ``` Documentation --- See [HexDocs](https://hexdocs.pm/gen_queue) for additional documentation. Adapters --- The true functionality of [`GenQueue`](GenQueue.html) comes with use of its adapters. Currently, the following adapters are supported. * [GenQueue Exq](https://github.com/nsweeting/gen_queue_exq) - Redis-backed job queue. * [GenQueue TaskBunny](https://github.com/nsweeting/gen_queue_task_bunny) - RabbitMQ-backed job queue. * `GenQueue Que` - Mnesia-backed job queue. Currently has an Elixir 1.6 bug. Not available until this is fixed. * [GenQueue Toniq](https://github.com/nsweeting/gen_queue_toniq) - Redis-backed job queue. * [GenQueue Verk](https://github.com/nsweeting/gen_queue_verk) - Redis-backed job queue. * [GenQueue OPQ](https://github.com/nsweeting/gen_queue_opq) - GenStage-backed job queue. More adapters are always welcome! Contributors --- * <NAME> - [@nsweeting](https://github.com/nsweeting) * <NAME> - [@halostatue](https://github.com/halostatue) GenQueue behaviour === A behaviour module for implementing queues. GenQueue relies on adapters to handle the specifics of how the queues are run. At its simplest, this can mean basic in-memory FIFO queues. At its most advanced, this can mean full async job queues with retries and backoffs. By providing a standard interface for such tools, ease in switching between different implementations is assured. Example --- The GenQueue behaviour abstracts the common queue interactions. Developers are only required to implement the callbacks and functionality they are interested in via adapters. Let’s start with a simple FIFO queue: ``` defmodule Queue do use GenQueue end # Start the queue Queue.start_link() # Push items into the queue Queue.push(:hello) #=> {:ok, :hello} Queue.push(:world) #=> {:ok, :world} # Pop items from the queue Queue.pop() #=> {:ok, :hello} Queue.pop() #=> {:ok, :world} ``` We start our enqueuer by calling `start_link/1`. This call is then forwarded to our adapter. In this case, we don't specify an adapter anywhere, so it defaults to the simple FIFO queue implemented with the included `GenQueue.Adapters.Simple`. We can then add items into our simple FIFO queues with `push/2`, as well as remove them with `pop/1`. use GenQueue and adapters --- As we can see from above, implementing a simple queue is easy.
But we can further extend our queues by creating our own adapters or by using external libraries. Simply specify the adapter name in your config. ``` config :my_app, MyApp.Enqueuer, [ adapter: MyApp.MyAdapter ] defmodule MyApp.Enqueuer do use GenQueue, otp_app: :my_app end ``` The adapter can also be specified inline for the module: ``` defmodule MyApp.Enqueuer do use GenQueue, adapter: MyApp.MyAdapter end ``` We can then create our own adapter by creating an adapter module that handles the callbacks specified by [`GenQueue.Adapter`](GenQueue.Adapter.html). ``` defmodule MyApp.MyAdapter do use GenQueue.Adapter def handle_push(gen_queue, item) do IO.inspect(item) {:ok, item} end end ``` Current adapters --- Currently, the following adapters are available: * [GenQueue Exq](https://github.com/nsweeting/gen_queue_exq) - Redis-backed job queue. * [GenQueue TaskBunny](https://github.com/nsweeting/gen_queue_task_bunny) - RabbitMQ-backed job queue. * [GenQueue Verk](https://github.com/nsweeting/gen_queue_verk) - Redis-backed job queue. * [GenQueue OPQ](https://github.com/nsweeting/gen_queue_opq) - GenStage-backed job queue. Job queues --- One of the benefits of using [`GenQueue`](GenQueue.html#content) is that it can abstract common tasks like job enqueuing. We can then provide a common API for the various forms of job enqueuing we would like to implement, as well as easily swap implementations. Please refer to the documentation for each adapter for more details. [Link to this section](#summary) Summary === [Types](#types) --- [t()](#t:t/0) [Functions](#functions) --- [adapter(gen_queue, opts \\ [])](#adapter/2) Get the adapter for a GenQueue module based on the options provided [config(gen_queue, opts \\ [])](#config/2) Get the config for a GenQueue module based on the options provided [Callbacks](#callbacks) --- [adapter()](#c:adapter/0) Returns the adapter for a queue [config()](#c:config/0) Returns the application config for a queue [flush(opts)](#c:flush/1) Removes all items from a queue [length(opts)](#c:length/1) Gets the number of items in a queue [pop(opts)](#c:pop/1) Pops an item from a queue [pop!(opts)](#c:pop!/1) Same as `pop/1` but returns the item or raises if an error occurs [push(item, opts)](#c:push/2) Pushes an item to a queue [push!(item, opts)](#c:push!/2) Same as `push/2` but returns the item or raises if an error occurs [start_link(opts)](#c:start_link/1) [Link to this section](#types) Types === [Link to this type](#t:t/0 "Link to this type") t() ``` t() :: [module](https://hexdocs.pm/elixir/typespecs.html#built-in-types)() ``` [Link to this section](#functions) Functions === [Link to this function](#adapter/2 "Link to this function") adapter(gen_queue, opts \\ []) ``` adapter([GenQueue.t](GenQueue.html#t:t/0)(), opts :: [Keyword.t](https://hexdocs.pm/elixir/Keyword.html#t:t/0)()) :: [GenQueue.Adapter.t](GenQueue.Adapter.html#t:t/0)() ``` Get the adapter for a GenQueue module based on the options provided. If no adapter is specified, the default `GenQueue.Adapters.Simple` is returned. Options: --- * `:adapter` - The adapter to be returned. * `:otp_app` - An OTP application that has your GenQueue adapter configuration.
Example --- ``` GenQueue.adapter(MyQueue, [otp_app: :my_app]) ``` [Link to this function](#config/2 "Link to this function") config(gen_queue, opts \\ []) ``` config([GenQueue.t](GenQueue.html#t:t/0)(), opts :: [Keyword.t](https://hexdocs.pm/elixir/Keyword.html#t:t/0)()) :: [GenQueue.Adapter.t](GenQueue.Adapter.html#t:t/0)() ``` Get the config for a GenQueue module based on the options provided. If an `:otp_app` option is provided, this will return the application config. Otherwise, it will return the options given. Options --- * `:otp_app` - An OTP application that has your GenQueue configuration. Example --- ``` # Get the application config GenQueue.config(MyQueue, [otp_app: :my_app]) # Returns the provided options GenQueue.config(MyQueue, [adapter: MyAdapter]) ``` [Link to this section](#callbacks) Callbacks === [Link to this callback](#c:adapter/0 "Link to this callback") adapter() ``` adapter() :: [GenQueue.Adapter.t](GenQueue.Adapter.html#t:t/0)() ``` Returns the adapter for a queue [Link to this callback](#c:config/0 "Link to this callback") config() ``` config() :: [Keyword.t](https://hexdocs.pm/elixir/Keyword.html#t:t/0)() ``` Returns the application config for a queue [Link to this callback](#c:flush/1 "Link to this callback") flush(opts) ``` flush(opts :: [Keyword.t](https://hexdocs.pm/elixir/Keyword.html#t:t/0)()) :: {:ok, [integer](https://hexdocs.pm/elixir/typespecs.html#basic-types)()} | {:error, [any](https://hexdocs.pm/elixir/typespecs.html#basic-types)()} ``` Removes all items from a queue Example --- ``` case MyQueue.flush() do {:ok, number_of_items} -> # Flushed with success {:error, _} -> # Something went wrong end ``` [Link to this callback](#c:length/1 "Link to this callback") length(opts) ``` length(opts :: [Keyword.t](https://hexdocs.pm/elixir/Keyword.html#t:t/0)()) :: {:ok, [integer](https://hexdocs.pm/elixir/typespecs.html#basic-types)()} | {:error, [any](https://hexdocs.pm/elixir/typespecs.html#basic-types)()} ``` Gets the number of items in a queue Example --- ``` case MyQueue.length() do {:ok, number_of_items} -> # Counted with success {:error, _} -> # Something went wrong end ``` [Link to this callback](#c:pop/1 "Link to this callback") pop(opts) ``` pop(opts :: [Keyword.t](https://hexdocs.pm/elixir/Keyword.html#t:t/0)()) :: {:ok, [any](https://hexdocs.pm/elixir/typespecs.html#basic-types)()} | {:error, [any](https://hexdocs.pm/elixir/typespecs.html#basic-types)()} ``` Pops an item from a queue Example --- ``` case MyQueue.pop() do {:ok, value} -> # Popped with success {:error, _} -> # Something went wrong end ``` [Link to this callback](#c:pop!/1 "Link to this callback") pop!(opts) ``` pop!(opts :: [Keyword.t](https://hexdocs.pm/elixir/Keyword.html#t:t/0)()) :: [any](https://hexdocs.pm/elixir/typespecs.html#basic-types)() | [no_return](https://hexdocs.pm/elixir/typespecs.html#built-in-types)() ``` Same as `pop/1` but returns the item or raises if an error occurs. 
push(item, opts)

```
push(item :: [any](https://hexdocs.pm/elixir/typespecs.html#basic-types)(), opts :: [Keyword.t](https://hexdocs.pm/elixir/Keyword.html#t:t/0)()) :: {:ok, [any](https://hexdocs.pm/elixir/typespecs.html#basic-types)()} | {:error, [any](https://hexdocs.pm/elixir/typespecs.html#basic-types)()}
```

Pushes an item to a queue.

Example
---

```
case MyQueue.push(value) do
  {:ok, value} -> # Pushed with success
  {:error, _} -> # Something went wrong
end
```

push!(item, opts)

```
push!(item :: [any](https://hexdocs.pm/elixir/typespecs.html#basic-types)(), opts :: [Keyword.t](https://hexdocs.pm/elixir/Keyword.html#t:t/0)()) :: [any](https://hexdocs.pm/elixir/typespecs.html#basic-types)() | [no_return](https://hexdocs.pm/elixir/typespecs.html#built-in-types)()
```

Same as `push/2` but returns the item or raises if an error occurs.

start_link(opts)

```
start_link(opts :: [Keyword.t](https://hexdocs.pm/elixir/Keyword.html#t:t/0)()) :: [GenServer.on_start](https://hexdocs.pm/elixir/GenServer.html#t:on_start/0)()
```

gen_queue v0.1.8

GenQueue.Adapter behaviour
===

A behaviour module for implementing queue adapters.

Summary
===

Types
---

[t()](#t:t/0)

Callbacks
---

[handle_flush(gen_queue, opts)](#c:handle_flush/2) Removes all items from a queue

[handle_length(gen_queue, opts)](#c:handle_length/2) Gets the number of items in a queue

[handle_pop(gen_queue, opts)](#c:handle_pop/2) Pops an item from a queue

[handle_push(gen_queue, item, opts)](#c:handle_push/3) Pushes an item to a queue

[start_link(gen_queue, opts)](#c:start_link/2)

Types
===

t()

```
t() :: [module](https://hexdocs.pm/elixir/typespecs.html#built-in-types)()
```

Callbacks
===

handle_flush(gen_queue, opts)

```
handle_flush(gen_queue :: [GenQueue.t](GenQueue.html#t:t/0)(), opts :: [Keyword.t](https://hexdocs.pm/elixir/Keyword.html#t:t/0)()) :: {:ok, [integer](https://hexdocs.pm/elixir/typespecs.html#basic-types)()} | {:error, [any](https://hexdocs.pm/elixir/typespecs.html#basic-types)()}
```

Removes all items from a queue.

handle_length(gen_queue, opts)

```
handle_length(gen_queue :: [GenQueue.t](GenQueue.html#t:t/0)(), opts :: [Keyword.t](https://hexdocs.pm/elixir/Keyword.html#t:t/0)()) :: {:ok, [integer](https://hexdocs.pm/elixir/typespecs.html#basic-types)()} | {:error, [any](https://hexdocs.pm/elixir/typespecs.html#basic-types)()}
```

Gets the number of items in a queue.

handle_pop(gen_queue, opts)

```
handle_pop(gen_queue :: [GenQueue.t](GenQueue.html#t:t/0)(), opts :: [Keyword.t](https://hexdocs.pm/elixir/Keyword.html#t:t/0)()) :: {:ok, [any](https://hexdocs.pm/elixir/typespecs.html#basic-types)()} | {:error, [any](https://hexdocs.pm/elixir/typespecs.html#basic-types)()}
```

Pops an item from a queue.

handle_push(gen_queue, item, opts)

```
handle_push(gen_queue :: [GenQueue.t](GenQueue.html#t:t/0)(), item :: [any](https://hexdocs.pm/elixir/typespecs.html#basic-types)(), opts :: [Keyword.t](https://hexdocs.pm/elixir/Keyword.html#t:t/0)()) :: {:ok, [any](https://hexdocs.pm/elixir/typespecs.html#basic-types)()} | {:error, [any](https://hexdocs.pm/elixir/typespecs.html#basic-types)()}
```
Pushes an item to a queue.

start_link(gen_queue, opts)

```
start_link(gen_queue :: [GenQueue.t](GenQueue.html#t:t/0)(), opts :: [Keyword.t](https://hexdocs.pm/elixir/Keyword.html#t:t/0)()) :: [GenServer.on_start](https://hexdocs.pm/elixir/GenServer.html#t:on_start/0)()
```

gen_queue v0.1.8

GenQueue.Adapters.MockJob
===

A simple mock job queue implementation.

Summary
===

Functions
---

[handle_job(gen_queue, job)](#handle_job/2) Push a job that will be returned to the current (or globally set) process's mailbox

[handle_push(gen_queue, item, opts)](#handle_push/3) Callback implementation for `GenQueue.Adapter.handle_push/3`

Functions
===

handle_job(gen_queue, job)

```
handle_job(gen_queue :: [GenQueue.t](GenQueue.html#t:t/0)(), job :: [GenQueue.Job.t](GenQueue.Job.html#t:t/0)()) :: {:ok, [GenQueue.Job.t](GenQueue.Job.html#t:t/0)()} | {:error, [any](https://hexdocs.pm/elixir/typespecs.html#basic-types)()}
```

Push a job that will be returned to the current (or globally set) process's mailbox. Please see [`GenQueue.Test`](GenQueue.Test.html) for further details.

handle_push(gen_queue, item, opts)

Callback implementation for `GenQueue.Adapter.handle_push/3`.

gen_queue v0.1.8

GenQueue.Job
===

Summary
===

Types
---

[config()](#t:config/0) Any additional configuration that is adapter-specific

[delay()](#t:delay/0) A delay to schedule the job with

[job()](#t:job/0) Details on how and what to enqueue a job with

[options()](#t:options/0) Options for enqueuing jobs

[queue()](#t:queue/0) The name of a queue to place the job under

[t()](#t:t/0)

Functions
---

[new(module, opts \\ [])](#new/2)

[new(module, args, opts)](#new/3)

Types
===

config()

```
config() :: [list](https://hexdocs.pm/elixir/typespecs.html#built-in-types)() | nil
```

Any additional configuration that is adapter-specific.

delay()

```
delay() :: [integer](https://hexdocs.pm/elixir/typespecs.html#basic-types)() | [DateTime.t](https://hexdocs.pm/elixir/DateTime.html#t:t/0)() | nil
```

A delay to schedule the job with.

job()

```
job() :: [module](https://hexdocs.pm/elixir/typespecs.html#built-in-types)() | {[module](https://hexdocs.pm/elixir/typespecs.html#built-in-types)()} | {[module](https://hexdocs.pm/elixir/typespecs.html#built-in-types)(), [list](https://hexdocs.pm/elixir/typespecs.html#built-in-types)()} | {[module](https://hexdocs.pm/elixir/typespecs.html#built-in-types)(), [any](https://hexdocs.pm/elixir/typespecs.html#basic-types)()}
```

Details on how and what to enqueue a job with.

options()

```
options() :: [delay: [delay](#t:delay/0)(), queue: [queue](#t:queue/0)(), config: [config](#t:config/0)()]
```

Options for enqueuing jobs.
queue()

```
queue() :: [binary](https://hexdocs.pm/elixir/typespecs.html#built-in-types)() | [atom](https://hexdocs.pm/elixir/typespecs.html#basic-types)() | nil
```

The name of a queue to place the job under.

t()

```
t() :: %GenQueue.Job{
  args: [list](https://hexdocs.pm/elixir/typespecs.html#built-in-types)(),
  config: [config](#t:config/0)(),
  delay: [delay](#t:delay/0)(),
  module: [module](https://hexdocs.pm/elixir/typespecs.html#built-in-types)(),
  queue: [queue](#t:queue/0)()
}
```

Functions
===

new(module, opts \\ [])

```
new([job](#t:job/0)(), [options](#t:options/0)()) :: [GenQueue.Job.t](GenQueue.Job.html#t:t/0)()
```

new(module, args, opts)

```
new([module](https://hexdocs.pm/elixir/typespecs.html#built-in-types)(), [list](https://hexdocs.pm/elixir/typespecs.html#built-in-types)(), [options](#t:options/0)()) :: [GenQueue.Job.t](GenQueue.Job.html#t:t/0)()
```

gen_queue v0.1.8

GenQueue.JobAdapter behaviour
===

A behaviour module for implementing job queue adapters.

Summary
===

Callbacks
---

[handle_job(gen_queue, job)](#c:handle_job/2) Pushes a job to a queue

Callbacks
===

handle_job(gen_queue, job)

```
handle_job(gen_queue :: [GenQueue.t](GenQueue.html#t:t/0)(), job :: [GenQueue.Job.t](GenQueue.Job.html#t:t/0)()) :: {:ok, [GenQueue.Job.t](GenQueue.Job.html#t:t/0)()} | {:error, [any](https://hexdocs.pm/elixir/typespecs.html#basic-types)()}
```

Pushes a job to a queue.

gen_queue v0.1.8

GenQueue.Test
===

Conveniences for testing queues.

This module allows us to create or use existing adapter “mock” libraries. A mock adapter is an adapter that mirrors the functionality of an existing adapter, but instead sends the item to the mailbox of a specified process.

```
defmodule Adapter do
  use GenQueue.Adapter

  def handle_push(gen_queue, item, _opts) do
    GenQueue.Test.send_item(gen_queue, item)
  end
end
```

We can then test that our items are being pushed correctly.

```
use ExUnit.Case, async: true

import GenQueue.Test

# This test assumes we have a GenQueue named Queue
setup do
  setup_test_queue(Queue)
end

test "that our queue works" do
  Queue.start_link()
  Queue.push(:foo)
  assert_receive(:foo)
end
```

Most adapters will provide a mirrored “mock” adapter to use with your tests.

Summary
===

Functions
---

[reset_test_queue(gen_queue)](#reset_test_queue/1) Removes any current queue receiver for a GenQueue

[send_item(gen_queue, item)](#send_item/2) Sends an item to the mailbox of a process set for a GenQueue

[setup_global_test_queue(gen_queue, process_name)](#setup_global_test_queue/2) Sets the queue receiver as the current process for a GenQueue

[setup_test_queue(gen_queue)](#setup_test_queue/1) Sets the queue receiver as the current process for a GenQueue

Functions
===

reset_test_queue(gen_queue)

```
reset_test_queue([GenQueue.t](GenQueue.html#t:t/0)()) :: :ok
```

Removes any current queue receiver for a GenQueue.
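A short usage sketch in the style of the Example blocks above (the queue name is illustrative; `setup_test_queue/1` is documented further below):

Example
---

```
import GenQueue.Test

setup_test_queue(MyQueue)
# ... assertions against the test mailbox ...
reset_test_queue(MyQueue)
```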
send_item(gen_queue, item)

```
send_item([GenQueue.t](GenQueue.html#t:t/0)(), [any](https://hexdocs.pm/elixir/typespecs.html#basic-types)()) :: [any](https://hexdocs.pm/elixir/typespecs.html#basic-types)()
```

Sends an item to the mailbox of a process set for a GenQueue.

setup_global_test_queue(gen_queue, process_name)

```
setup_global_test_queue([GenQueue.t](GenQueue.html#t:t/0)(), [atom](https://hexdocs.pm/elixir/typespecs.html#basic-types)()) :: :ok
```

Sets the queue receiver as the current process for a GenQueue. The current process is also given a name. This ensures queues that run outside of the current process are able to send items to the correct mailbox.

setup_test_queue(gen_queue)

```
setup_test_queue([GenQueue.t](GenQueue.html#t:t/0)()) :: :ok
```

Sets the queue receiver as the current process for a GenQueue.

gen_queue v0.1.8

GenQueue.Error exception
===

Summary
===

Functions
---

[exception(msg)](#exception/1) Callback implementation for [`Exception.exception/1`](https://hexdocs.pm/elixir/Exception.html#c:exception/1)

[message(exception)](#message/1) Callback implementation for [`Exception.message/1`](https://hexdocs.pm/elixir/Exception.html#c:message/1)

Functions
===

exception(msg)

Callback implementation for [`Exception.exception/1`](https://hexdocs.pm/elixir/Exception.html#c:exception/1).

message(exception)

Callback implementation for [`Exception.message/1`](https://hexdocs.pm/elixir/Exception.html#c:message/1).
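Tying the pages above together, a minimal end-to-end sketch (module name illustrative; it assumes the default `GenQueue.Adapters.Simple` in-memory adapter, with return shapes following the callback specs documented above):

```
defmodule MyApp.Queue do
  use GenQueue
end

{:ok, _pid} = MyApp.Queue.start_link()

{:ok, :job_a} = MyApp.Queue.push(:job_a)  # push/2 returns {:ok, item}
{:ok, 1} = MyApp.Queue.length()           # one item queued
{:ok, :job_a} = MyApp.Queue.pop()         # pop returns the queued item
```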
Package ‘ropenblas’

October 14, 2022

Type Package
Title Download, Compile and Link 'OpenBLAS' Library with R
Version 0.3.0
Maintainer <NAME> <<EMAIL>>
Description The 'ropenblas' package (<https://prdm0.github.io/ropenblas/>) is useful for users of any 'GNU/Linux' distribution. It makes it possible to download, compile and link the 'OpenBLAS' library (<https://www.openblas.net/>) with the R language, always by the same procedure, regardless of the 'GNU/Linux' distribution used. With the 'ropenblas' package it is possible to download, compile and link the latest version of the 'OpenBLAS' library even if the repositories of the 'GNU/Linux' distribution used do not include the latest versions of 'OpenBLAS'. If of interest, older versions of the 'OpenBLAS' library may be considered. Linking R with an optimized version of 'BLAS' (<https://netlib.org/blas/>) may improve the computational performance of R code. The 'OpenBLAS' library is an optimized implementation of 'BLAS' that can be easily linked to R with the 'ropenblas' package.
Depends R (>= 3.1.0)
License GPL-3
URL https://prdm0.github.io/ropenblas/, https://github.com/prdm0/ropenblas
BugReports https://github.com/prdm0/ropenblas/issues
SystemRequirements GNU Make, GCC Compiler Suite (C and Fortran)
Encoding UTF-8
Imports glue, magrittr, getPass, rstudioapi, stringr, git2r, RCurl, XML, cli, pingr, withr, rlang, fs, rvest
RoxygenNote 7.2.1
NeedsCompilation no
Author <NAME> [aut, cre] (<https://orcid.org/0000-0003-1591-8300>), Enes Ahmeti [ctb]
Repository CRAN
Date/Publication 2022-08-29 17:50:02 UTC

R topics documented: last_version_openblas, last_version_r, link_again, rcompiler, rnews, ropenblas

last_version_openblas    OpenBLAS library versions

Description

OpenBLAS library versions

Usage

last_version_openblas()

Details

This function automatically searches OpenBLAS library versions in the official GitHub project. It returns:

1. last_version: the latest stable version of the OpenBLAS library.
2. versions: all stable versions of the OpenBLAS library.
3. n: the total number of versions.

See Also

last_version_r, ropenblas, rcompiler

Examples

# last_version_openblas()

last_version_r    R language versions

Description

R language versions

Usage

last_version_r(major = NULL)

Arguments

major    Major release number of R language (e.g. 1L, 2L, 3L, ...). If major = NULL, the latest major release number is considered.

Details

This function automatically searches R language versions in the official language repositories. That way, doing last_version_r(major = NULL) you will always be well informed about the latest stable version of the R language. You can also set the major version and search over the versions of the R language whose major version was 1L or 2L, for example.

Value

A list of three named elements will be returned:

1. last_version: the latest stable version of the language given a major version. If major = NULL, the latest stable version of the language will be returned based on the set of all language versions.
2. versions: character vector with all language versions based on a major version. If major = NULL, versions covers the set of all language versions.
3. n: total number of versions of R based on the major version. If major = NULL, the count is based on the set of all language versions.
See Also

ropenblas, rcompiler

Examples

# last_version_r(major = NULL)

link_again    Linking the OpenBLAS library with R again

Description

The link_again function links the OpenBLAS library with the R language again, which is useful to fix the unlinking of the OpenBLAS library that commonly occurs when the operating system is updated.

Usage

link_again(restart_r = TRUE)

Arguments

restart_r    If TRUE (default), a new R session is started after linking the OpenBLAS library.

Details

The function link_again is able to relink the R language with the OpenBLAS library. Thus, link_again will only perform the relinking when, in some previous R session, the ropenblas function has been used for the initial linking of the R language with the OpenBLAS library.

Relinking is useful in situations where the operating system has been updated. In some updates, it is possible that the OpenBLAS library compiled in the /opt directory is unlinked. In this scenario, when the OpenBLAS library has already been compiled using the ropenblas function, the link_again function performs a new link without the need to recompile, thus making the process less time consuming.

Note

In situations where the library was unlinked due to an update of the operating system, the ropenblas function can be used to re-link the OpenBLAS library with the R language; however, it will be necessary to compile the OpenBLAS library again. If you are interested in recompiling the OpenBLAS library and linking it with R, use the ropenblas function. If the interest is to take advantage of a previous compilation of the OpenBLAS library, the function link_again may be useful.

See Also

ropenblas

Examples

# link_again()

rcompiler    Compile a version of R on GNU/Linux systems

Description

This function is responsible for compiling a version of the R language.

Usage

rcompiler(x = NULL, with_blas = NULL, complementary_flags = NULL)

Arguments

x    Version of R you want to compile. By default (x = NULL), the latest stable version of the R language is compiled. For example, x = "3.6.2" will compile and link the R-3.6.2 version as the major version on your system.

with_blas    String, NULL by default, with flags for --with-blas used in the R compilation process. Details on the use of this flag can be found here.

complementary_flags    String, NULL by default, for adding complementary flags in the R language compilation process.

Details

This function is responsible for compiling a version of the R language. The x argument is the version of R that you want to compile. For example, x = "4.0.0" will compile and link the R-4.0.0 version as the major version on your system. By default (x = NULL), the latest stable version of R is compiled. For example, to compile the latest stable version of the R language, do:

rcompiler()

Regardless of your GNU/Linux distribution and what version of R is in your repositories, you can have the latest stable version of the R language compiled for your computer architecture. You can use the rcompiler() function to compile different versions of R. For example, running rcompiler(x = "3.6.3") and rcompiler() will install versions 3.6.3 and 4.0.0 on your GNU/Linux distribution, respectively. If you are in version 4.0.0 of R and run the code rcompiler(x = "3.6.3") again, the function will identify the existence of version 3.6.3 on the system and give you the option to use the binaries that were built in a previous compilation. This avoids unnecessary compilations.
In addition to the x argument, the rcompiler() function has two other arguments that allow you to change and pass new compilation flags:

1. with_blas: This argument sets the --with-blas flag in the R language compilation process and must be passed as a string. Details on the use of this flag can be found here. If with_blas = NULL (default), then the following will be considered:

./configure --prefix=/opt/R/version_r --enable-memory-profiling --enable-R-shlib --enable-threads=posix --with-blas="-L/opt/OpenBLAS/lib -I/opt/OpenBLAS/include -lpthread -lm"

Most likely, you will have little reason to change this argument. Unless you know what you’re doing, consider with_blas = NULL. Do not change the installation directory, that is, always consider --prefix = /opt/R/version_r, where version_r is a valid version of R. For a list of valid versions of R, run last_version_r(). Installing R in the /opt/R/version_r directory is important because some functions in the package require this. Both the R language and the OpenBLAS library will be installed in the /opt directory. If this directory does not exist in your GNU/Linux distribution, it will be created;

2. complementary_flags: String (complementary_flags = NULL by default) for adding complementary flags in the R language compilation process. Passing a string to complementary_flags will compile in the form:

./configure --with-blas="..." complementary_flags

Value

Returns a message informing you whether the procedure occurred correctly. You will also be able to receive information about missing dependencies.

See Also

ropenblas, last_version_r

Examples

# rcompiler()

rnews    R News file

Description

Returns the contents of the NEWS.html file in the standard browser installed on the operating system.

Usage

rnews(pdf = FALSE, dev = FALSE)

Arguments

pdf    If FALSE (default), the NEWS.html file will open in the browser, otherwise NEWS.pdf will be opened.

dev    If FALSE (default), it will not show changes made to the language development version. To see changes in the development version, do dev = TRUE.

Details

The NEWS.html file contains the main changes from the recently released versions of the R language. The goal is to facilitate the query by invoking it directly from the R command prompt. The rnews function is analogous to the news function of the utils package. However, when using the news function in a bash-style terminal, it is possible to receive a message like:

news()
starting httpd help server ... done
Error in browseURL(url) : 'browser' must be a non-empty character string

This is an error that may occur depending on the installation of R. Always prefer the use of the news function, but if you need to, use the rnews function.

ropenblas    Download, Compile and Link OpenBLAS Library with R

Description

Link R with an optimized version of the BLAS library (OpenBLAS).

Usage

ropenblas(x = NULL, restart_r = TRUE)

Arguments

x    OpenBLAS library version to be considered. By default, x = NULL.

restart_r    If TRUE, a new R session is started after compiling and linking the OpenBLAS library.

Details

The ropenblas() function will only work on Linux systems. When calling the ropenblas() function on Windows, no settings will be made. Only a warning message will be issued informing you that the configuration can only be performed on Linux systems.

The function will automatically download the latest version of the OpenBLAS library. However, it is possible to pass older versions to the single argument of ropenblas().
The ropenblas() function downloads, compiles and links R to use the OpenBLAS library. Everything is done very simply: just load the package and invoke the function ropenblas().

Using the OpenBLAS library rather than the reference BLAS may bring extra optimizations for your code and improved computational performance for your simulations, since OpenBLAS is an optimized implementation of the BLAS library.

You must install the following dependencies on your operating system (Linux):

1. GNU Make;
2. GNU GCC Compiler (C and Fortran).

Your Linux operating system may already be configured to use the OpenBLAS library. Therefore, most likely R will already be linked to this library. To find out if the R language is using the OpenBLAS library, in R, do:

extSoftVersion()["BLAS"]

If R is using the OpenBLAS library, something like /any_directory/libopenblas.so should be returned. Therefore, the name openblas should appear in the shared object returned (file extension .so).

If the ropenblas() function can identify that the R language is already using the version of OpenBLAS you wish to configure, a warning message will be returned asking if you really would like to proceed with the configuration again.

The ropenblas() function will download the desired version of the OpenBLAS library, then compile and install the library in the /opt directory of your operating system. If the directory does not exist, it will be created so that the installation can be completed. Subsequently, files from the version of BLAS used in R will be symbolically linked to the shared object files of the OpenBLAS version compiled and installed in /opt.

You must be the operating system administrator to use this library. Therefore, do not attempt to use it without telling your system administrator. If you have the ROOT password, you will be responsible for everything you do on your operating system. Other details you may also find here.

Value

Returns a message informing you whether the procedure occurred correctly. You will also be able to receive information about missing dependencies.

Note

You do not have to use the ropenblas() function in every R session. Once the function has been used, R will always consider using the OpenBLAS library in future sessions.

Author(s)

<NAME> (e-mail: <<EMAIL>>)

See Also

rcompiler, last_version_r

Examples

# ropenblas()
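A commented sketch, following the package’s own convention of commenting out calls that modify the system (the pinned version string is illustrative):

```r
library(ropenblas)

# check which BLAS R is currently linked against
extSoftVersion()["BLAS"]

# link R with the latest stable OpenBLAS (requires administrative rights)
# ropenblas()

# or pin a specific OpenBLAS version
# last_version_openblas()
# ropenblas(x = "0.3.20")
```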
@yckao/postgres
* [🚀 Fastest full featured PostgreSQL node client](https://github.com/porsager/postgres-benchmarks#results)
* 🚯 1250 LOC - 0 dependencies
* 🏷 ES6 Tagged Template Strings at the core
* 🏄‍♀️ Simple surface API
* 💬 Chat on [Gitter](https://badges.gitter.im/porsager/postgres.svg)

Getting started
---

**Install**

```
$ npm install postgres
```

**Use**

```
// db.js
const postgres = require('postgres')

const sql = postgres({ ...options }) // will default to the same as psql

module.exports = sql
```

```
// other.js
const sql = require('./db.js')

await sql`
  select name, age from users
`
// > [{ name: 'Murray', age: 68 }, { name: 'Walter', age: 78 }]
```

Connection options `postgres([url], [options])`
---

You can use either a `postgres://` url connection string or the options to define your database connection properties. Options in the object will override any present in the url.

```
const sql = postgres('postgres://username:password@host:port/database', {
  host            : '',        // Postgres ip address or domain name
  port            : 5432,      // Postgres server port
  path            : '',        // unix socket path (usually '/tmp')
  database        : '',        // Name of database to connect to
  username        : '',        // Username of database user
  password        : '',        // Password of database user
  ssl             : false,     // True, or options for tls.connect
  max             : 10,        // Max number of connections
  idle_timeout    : 0,         // Idle connection timeout in seconds
  connect_timeout : 30,        // Connect timeout in seconds
  no_prepare      : false,     // No automatic creation of prepared statements
  types           : [],        // Array of custom types, see more below
  onnotice        : fn,        // Defaults to console.log
  onparameter     : fn,        // (key, value) when server param change
  debug           : fn,        // Is called with (connection, query, params)
  transform       : {
    column : fn,               // Transforms incoming column names
    value  : fn,               // Transforms incoming row values
    row    : fn                // Transforms entire rows
  },
  connection      : {
    application_name : 'postgres.js', // Default application_name
    ...                               // Other connection parameters
  }
})
```

More info for the `ssl` option can be found in the [Node.js docs for tls connect options](https://nodejs.org/dist/latest-v10.x/docs/api/tls.html#tls_new_tls_tlssocket_socket_options)

### Environment Variables for Options

It is also possible to connect to the database without a connection string or options, which will read the options from the environment variables in the table below:

```
const sql = postgres()
```

| Option | Environment Variables |
| --- | --- |
| `host` | `PGHOST` |
| `port` | `PGPORT` |
| `database` | `PGDATABASE` |
| `username` | `PGUSERNAME` or `PGUSER` |
| `password` | `PGPASSWORD` |

Query `sql` ` -> Promise`
---

A query will always return a `Promise` which resolves to a results array `[...]{ rows, command }`. Destructuring is great to immediately access the first element.

```
const [new_user] = await sql`
  insert into users (
    name, age
  ) values (
    'Murray', 68
  )
  returning *
`

// new_user = { user_id: 1, name: 'Murray', age: 68 }
```

#### Query parameters

Parameters are automatically inferred and handled by Postgres so that SQL injection isn't possible. No special handling is necessary, simply use JS tagged template literals as usual.

```
let search = 'Mur'

const users = await sql`
  select name, age from users
  where name like ${ search + '%' }
`

// users = [{ name: 'Murray', age: 68 }]
```

Arrays will be handled by replacement parameters too, so `where in` queries are also simple.
```
const users = await sql`
  select * from users
  where age in (${ [68, 75, 23] })
`
```

Stream `sql` `.stream(fn) -> Promise`
---

If you want to handle rows returned by a query one by one, you can use `.stream` which returns a promise that resolves once there are no more rows.

```
await sql`
  select created_at, name from events
`.stream(row => {
  // row = { created_at: '2019-11-22T14:22:00Z', name: 'connected' }
})

// No more rows
```

Cursor `sql` `.cursor([rows = 1], fn) -> Promise`
---

Use cursors if you need to throttle the amount of rows being returned from a query. New results won't be requested until the promise / async callback function has resolved.

```
await sql`
  select * from generate_series(1,4) as x
`.cursor(async row => {
  // row = { x: 1 }
  await http.request('https://example.com/wat', { row })
})

// No more rows
```

A single row will be returned by default, but you can also request batches by setting the number of rows desired in each batch as the first argument. That is useful if you can do work with the rows in parallel like in this example:

```
await sql`
  select * from generate_series(1,1000) as x
`.cursor(10, async rows => {
  // rows = [{ x: 1 }, { x: 2 }, ... ]
  await Promise.all(rows.map(row =>
    http.request('https://example.com/wat', { row })
  ))
})
```

If an error is thrown inside the callback function no more rows will be requested and the promise will reject with the thrown error. You can also stop receiving any more rows early by returning an end token `sql.END` from the callback function.

```
await sql`
  select * from generate_series(1,1000) as x
`.cursor(row => {
  return Math.random() > 0.9 && sql.END
})
```

Listen and notify
---

When you call listen, a dedicated connection will automatically be made to ensure that you receive notifications in real time. This connection will be used for any further calls to listen. Listen returns a promise which resolves once the `LISTEN` query to Postgres completes, or if there is already a listener active.

```
await sql.listen('news', payload => {
  const json = JSON.parse(payload)
  console.log(json.this) // logs 'is'
})
```

Notify can be done as usual in sql, or by using the `sql.notify` method.

```
sql.notify('news', JSON.stringify({ no: 'this', is: 'news' }))
```

Tagged template function `sql```
---

[Tagged template functions](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Template_literals#Tagged_templates) are not just ordinary template literal strings. They allow the function to handle any parameters within before interpolation. This means that they can be used to enforce a safe way of writing queries, which is what Postgres.js does. Any generic value will be serialized according to an inferred type, replaced by PostgreSQL protocol placeholders `$1, $2, ...` and then sent to the database as a parameter to let it handle any need for escaping / casting.

This also means you cannot write dynamic queries or concatenate queries together by simple string manipulation. To enable dynamic queries in a safe way, the `sql` function doubles as a regular function which escapes any value properly. It also includes overloads for common cases of inserting, selecting, updating and querying.

Dynamic query helpers - `sql()` inside tagged template
---

Postgres.js has a safe, ergonomic way to aid you in writing queries. This makes it easier to write dynamic `insert`, `select` and `update` queries, and pass `where` parameters.
#### Insert

```
const user = {
  name: 'Murray',
  age: 68
}

sql`
  insert into users ${
    sql(user, 'name', 'age')
  }
`

// Is translated into this query:
insert into users (name, age) values ($1, $2)
```

You can leave out the column names and simply do `sql(user)` if you want to get all fields from the object as columns, but be careful not to allow users to supply columns you don't want.

#### Multiple inserts in one query

If you need to insert multiple rows at the same time it's also much faster to do it with a single `insert`. Simply pass an array of objects to `sql()`.

```
const users = [{
  name: 'Murray',
  age: 68,
  garbage: 'ignore'
}, {
  name: 'Walter',
  age: 78
}]

sql`
  insert into users ${
    sql(users, 'name', 'age')
  }
`
```

#### Update

This is also useful for update queries

```
const user = {
  id: 1,
  name: 'Murray'
}

sql`
  update users set ${
    sql(user, 'name')
  } where id = ${ user.id }
`

// Is translated into this query:
update users set name = $1 where id = $2
```

#### Select

```
const columns = ['name', 'age']

sql`
  select ${
    sql(columns)
  } from users
`

// Is translated into this query:
select name, age from users
```

#### Dynamic table name

```
const table = 'users'

sql`
  select id from ${sql(table)}
`

// Is translated into this query:
select id from users
```

#### Arrays `sql.array(Array)`

PostgreSQL has a native array type which is similar to js arrays, but only allows the same type and shape for nested items. This method automatically infers the item type and serializes js arrays into PostgreSQL arrays.

```
const types = sql`
  insert into types (
    integers,
    strings,
    dates,
    buffers,
    multi
  ) values (
    ${ sql.array([1,2,3,4,5]) },
    ${ sql.array(['Hello', 'Postgres']) },
    ${ sql.array([new Date(), new Date(), new Date()]) },
    ${ sql.array([Buffer.from('Hello'), Buffer.from('Postgres')]) },
    ${ sql.array([[[1,2],[3,4]],[[5,6],[7,8]]]) }
  )
`
```

#### JSON `sql.json(object)`

```
const body = { hello: 'postgres' }

const [{ json }] = await sql`
  insert into json (
    body
  ) values (
    ${ sql.json(body) }
  )
  returning body
`

// json = { hello: 'postgres' }
```

#### Partial `sql.partial``` and Skip `sql.skip()`

With `sql.partial``` you can pass a partial query into `sql``` for dynamic query building, and use `sql.skip()` when no partial query is needed.

```
await sql`select * from users ${
  id ? sql.partial`where id = ${id}` : sql.skip()
}`
```

File query `sql.file(path, [args], [options]) -> Promise`
---

Using an `.sql` file for a query. The contents will be cached in memory so that the file is only read once.

```
sql.file(path.join(__dirname, 'query.sql'), [], {
  cache: true // Default true - disable for single shot queries or memory reasons
})
```

Transactions
---

#### BEGIN / COMMIT `sql.begin(fn) -> Promise`

Calling begin with a function will return a Promise which resolves with the returned value from the function. The function provides a single argument which is `sql` with a context of the newly created transaction. `BEGIN` is automatically called, and if the Promise fails `ROLLBACK` will be called. If it succeeds `COMMIT` will be called.
```
const [user, account] = await sql.begin(async sql => {
  const [user] = await sql`
    insert into users (
      name
    ) values (
      'Alice'
    )
  `

  const [account] = await sql`
    insert into accounts (
      user_id
    ) values (
      ${ user.user_id }
    )
  `

  return [user, account]
})
```

#### SAVEPOINT `sql.savepoint([name], fn) -> Promise`

```
sql.begin(async sql => {
  const [user] = await sql`
    insert into users (
      name
    ) values (
      'Alice'
    )
  `

  const [account] = (await sql.savepoint(sql =>
    sql`
      insert into accounts (
        user_id
      ) values (
        ${ user.user_id }
      )
    `
  ).catch(err => {
    // Account could not be created. ROLLBACK SAVEPOINT is called because we caught the rejection.
  })) || []

  return [user, account]
})
.then(([user, account]) => {
  // great success - COMMIT succeeded
})
.catch(() => {
  // not so good - ROLLBACK was called
})
```

Do note that you can often achieve the same result using [`WITH` queries (Common Table Expressions)](https://www.postgresql.org/docs/current/queries-with.html) instead of using transactions.

Types
---

You can add ergonomic support for custom types, or simply pass an object with a `{ type, value }` signature that contains the Postgres `oid` for the type and the correctly serialized value.

Adding Query helpers is the recommended approach which can be done like this:

```
const sql = postgres({
  types: {
    rect: {
      to        : 1337,
      from      : [1337],
      serialize : ({ x, y, width, height }) => [x, y, width, height],
      parse     : ([x, y, width, height]) => ({ x, y, width, height })
    }
  }
})

const [custom] = sql`
  insert into rectangles (
    name,
    rect
  ) values (
    'wat',
    ${ sql.types.rect({ x: 13, y: 37, width: 42, height: 80 }) }
  )
  returning *
`

// custom = { name: 'wat', rect: { x: 13, y: 37, width: 42, height: 80 } }
```

Teardown / Cleanup
---

To ensure proper teardown and cleanup on server restarts use `sql.end({ timeout: null })` before `process.exit()`.

Calling `sql.end()` will reject new queries and return a Promise which resolves when all queries are finished and the underlying connections are closed. If a timeout is provided any pending queries will be rejected once the timeout is reached and the connections will be destroyed.

#### Sample shutdown using [Prexit](http://npmjs.com/prexit)

```
import prexit from 'prexit'

prexit(async () => {
  await sql.end({ timeout: 5 })
  await new Promise(r => server.close(r))
})
```

Numbers, bigint, numeric
---

`Number` in javascript is only able to represent 2^53 - 1 safely, which means that types in PostgreSQL like `bigint` and `numeric` won't fit into `Number`.

Since Node.js v10.4 we can use [`BigInt`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/BigInt) to match the PostgreSQL type `bigint` which is returned for eg. `count(*)`. Unfortunately it doesn't work with `JSON.stringify` out of the box, so Postgres.js will return it as a string.

If you want to use `BigInt` you can add this custom type:

```
const sql = postgres({
  types: {
    bigint: postgres.BigInt
  }
})
```

There is currently no way to handle `numeric / decimal` in a native way in Javascript, so these and similar will be returned as `string`. You can also handle types like these using [custom types](#types) if you want to.

The Connection Pool
---

Connections are created lazily once a query is created. This means that simply doing `const sql = postgres(...)` won't have any effect other than instantiating a new `sql` instance.

> No connection will be made until a query is made.

This means that we get a much simpler story for error handling and reconnections.
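To make that laziness concrete, a small sketch (not from the library docs; the query is illustrative and connection details are assumed to come from environment variables):

```
const postgres = require('postgres')

// Instantiating the client performs no I/O:
const sql = postgres({ max: 10 })

async function main() {
  // The first query lazily opens a connection in the pool:
  const [row] = await sql`select 1 as one`
  console.log(row) // { one: 1 }
}

main()
```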
Queries will be sent over the wire immediately on the next available connection in the pool. Connections are automatically taken out of the pool if you start a transaction using `sql.begin()`, and automatically returned to the pool once your transaction is done.

Any query which was already sent over the wire will be rejected if the connection is lost. It'll automatically defer to the error handling you have for that query, and since connections are lazy it'll automatically try to reconnect the next time a query is made. The benefit of this is no weird generic "onerror" handler that tries to get things back to normal, and also simpler application code since you don't have to handle errors out of context.

There are no guarantees about queries executing in order unless using a transaction with `sql.begin()` or setting `max: 1`. Of course doing a series of queries, one awaiting the other, will work as expected, but that's just due to the nature of js async/promise handling, so it's not necessary for this library to be concerned with ordering.

Prepared statements
---

Prepared statements will automatically be created for any queries where it can be inferred that the query is static. This can be disabled by using the `no_prepare` option. For instance, this is useful when [using PGBouncer in `transaction mode`](https://github.com/porsager/postgres/issues/93).

`sql.unsafe` - Advanced unsafe use cases

### Unsafe queries `sql.unsafe(query, [args], [options]) -> promise`

If you know what you're doing, you can use `unsafe` to pass any string you'd like to postgres. Please note that this can lead to sql injection if you're not careful.

```
sql.unsafe('select ' + danger + ' from users where id = ' + dragons)
```

Errors
---

Errors are all thrown to related queries and never globally. Errors coming from PostgreSQL itself are always in the [native Postgres format](https://www.postgresql.org/docs/current/errcodes-appendix.html), and the same goes for any [Node.js errors](https://nodejs.org/api/errors.html#errors_common_system_errors) eg. coming from the underlying connection.

Query errors will contain a stored error with the origin of the query to aid in tracing errors.

Query errors will also contain the `query` string and the `parameters`, which are not enumerable to avoid accidentally leaking confidential information in logs. To log these it is required to specifically access `error.query` and `error.parameters`.

There are also the following errors specifically for this library.

##### UNDEFINED_VALUE

> Undefined values are not allowed

Postgres.js won't accept `undefined` as values in tagged template queries since it becomes ambiguous what to do with the value. If you want to set something to null, use `null` explicitly.

##### MESSAGE_NOT_SUPPORTED

> X (X) is not supported

Whenever a message is received from Postgres which is not supported by this library. Feel free to file an issue if you think something is missing.

##### MAX_PARAMETERS_EXCEEDED

> Max number of parameters (65534) exceeded

The postgres protocol doesn't allow more than 65534 (16bit) parameters. If you run into this issue there are various workarounds such as using `sql([...])` to escape values instead of passing them as parameters.

##### SASL_SIGNATURE_MISMATCH

> SASL signature mismatch

When using SASL authentication the server responds with a signature at the end of the authentication flow which needs to match the one on the client. This is to avoid [man in the middle attacks](https://en.wikipedia.org/wiki/Man-in-the-middle_attack).
If you receive this error the connection was cancelled because the server did not reply with the expected signature.

##### NOT_TAGGED_CALL

> Query not called as a tagged template literal

Making queries has to be done using the sql function as a [tagged template](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Template_literals#Tagged_templates). This is to ensure parameters are serialized and passed to Postgres as query parameters with correct types and to avoid SQL injection.

##### AUTH_TYPE_NOT_IMPLEMENTED

> Auth type X not implemented

Postgres supports many different authentication types. This one is not supported.

##### CONNECTION_CLOSED

> write CONNECTION_CLOSED host:port

This error is thrown if the connection was closed without an error. This should not happen during normal operation, so please create an issue if this was unexpected.

##### CONNECTION_ENDED

> write CONNECTION_ENDED host:port

This error is thrown if the user has called [`sql.end()`](#sql_end) and performed a query afterwards.

##### CONNECTION_DESTROYED

> write CONNECTION_DESTROYED host:port

This error is thrown for any queries that were pending when the timeout to [`sql.end({ timeout: X })`](#sql_destroy) was reached.

##### CONNECTION_CONNECT_TIMEOUT

> write CONNECTION_CONNECT_TIMEOUT host:port

This error is thrown if the startup phase of the connection (tcp, protocol negotiation and auth) took more than the default 30 seconds or what was specified using `connect_timeout` or `PGCONNECT_TIMEOUT`.

Migration tools
---

Postgres.js doesn't come with any migration solution since it's way out of scope, but here are some modules that support Postgres.js for migrations:

* <https://github.com/lukeed/ley>

Thank you
---

A really big thank you to [@JAForbes](https://twitter.com/jmsfbs) who introduced me to Postgres and still holds my hand navigating all the great opportunities we have.

Thanks to [@ACXgit](https://twitter.com/andreacoiutti) for initial tests and dogfooding.

Also thanks to [<NAME>](http://github.com/ry) for letting me have the `postgres` npm package name.

Readme
---

### Keywords

* driver
* postgresql
* postgres.js
* postgres
* postrges
* postgre
* client
* sql
* db
* pg
* database
Package ‘LSDinterface’

October 12, 2022

Type Package
Title Interface Tools for LSD Simulation Results Files
Version 1.2.1
Date 2022-5-12
Description Interfaces R with LSD simulation models. Reads object-oriented data in results files (.res[.gz]) produced by LSD and creates appropriate multi-dimensional arrays in R. Supports multiple core parallel threads of multi-file data reading for increased performance. Also provides functions to extract basic information and statistics from data files. LSD (Laboratory for Simulation Development) is free software developed by <NAME> and <NAME> (documentation and downloads available at <https://www.labsimdev.org/>).
Depends R (>= 3.2.0)
Imports stats, boot, utils, parallel, abind, TSdist
Suggests LSDsensitivity
License GPL-3
Language en-US
Encoding UTF-8
NeedsCompilation no
Author <NAME> [aut, cre] (<https://orcid.org/0000-0002-8069-2734>)
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2022-05-13 20:30:02 UTC

R topics documented: LSDinterface-package, info.details.lsd, info.dimensions.lsd, info.distance.lsd, info.init.lsd, info.names.lsd, info.stats.lsd, list.files.lsd, name.check.lsd, name.clean.lsd, name.nice.lsd, name.r.unique.lsd, name.var.lsd, read.3d.lsd, read.4d.lsd, read.list.lsd, read.multi.lsd, read.raw.lsd, read.single.lsd, select.colattrs.lsd, select.colnames.lsd

LSDinterface-package    Interface Tools for LSD Simulation Results Files

Description

Interfaces R with LSD simulation models. Reads object-oriented data in results files (.res[.gz]) produced by LSD and creates appropriate multi-dimensional arrays in R. Supports multiple core parallel threads of multi-file data reading for increased performance. Also provides functions to extract basic information and statistics from data files. LSD (Laboratory for Simulation Development) is free software developed by <NAME> and <NAME> (documentation and downloads available at <https://www.labsimdev.org/>).

Details

There are specific read.xxx.lsd() functions for different types of LSD data structures. read.raw.lsd() simply imports LSD saved data in tabular (data frame) format (variables in columns and time steps in rows). read.single.lsd() is appropriate for simple LSD data structures where each saved variable is single-instanced (inside an object with a single copy). read.multi.lsd() reads all instances of all variables from the LSD results file, renaming multi-instanced variables. read.list.lsd() is similar to read.multi.lsd() but saves multiple-instanced variables as R lists, preventing renaming.

read.3d.lsd() and read.4d.lsd() are specialized versions for extracting data from multiple LSD results files simultaneously. The files must have the same structure (selected variables and number of time steps). They are frequently used to acquire data from Monte Carlo experiments or sensitivity analysis. read.3d.lsd() operates like read.single.lsd() but adds each additional results file into a separate dimension of the produced 3-dimensional array (variable x time step x file). read.4d.lsd() adds the ability to read each instance of a multi-instanced variable to the fourth dimension of the generated 4D array (variable x instance x time step x file).

list.files.lsd() is a helper function to simplify the collection of results files to be used by the other functions in this package. It can be directly used to supply the files argument in the read.xxx.lsd() family of functions, as sketched below.
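A quick orientation sketch using the example files shipped with the package (mirroring the Examples sections below; object names are illustrative):

```r
library(LSDinterface)

# collect the example results files shipped with the package
files <- list.files.lsd(system.file("extdata", package = "LSDinterface"))

# one file with single-instanced variables -> 2D (time step x variable)
oneRun <- read.single.lsd(files[1])

# several files at once -> 3D array (time step x variable x file)
mc <- read.3d.lsd(files)
```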
select.colattrs.lsd() and select.colnames.lsd() provide methods to extract/summarize information from previously imported LSD data structures.

info.xxx.lsd() functions provide information about LSD data structures. name.xxx.lsd() functions offer tools for dealing with LSD variable names in R.

For a complete list of exported functions, use library( help = "LSDinterface" ).

Author(s)

NA

Maintainer: NA

References

LSD documentation is available at https://www.labsimdev.org/. The latest LSD binaries and source code can be downloaded at https://github.com/marcov64/Lsd/.

info.details.lsd    Get detailed information from a LSD results file

Description

This function reads, analyzes and organizes the information from a LSD results file (.res).

Usage

info.details.lsd( file )

Arguments

file    the name of the LSD results file which the data are to be read from. If it does not contain an absolute path, the file name is relative to the current working directory, getwd(). Tilde-expansion is performed where supported. This can be a compressed file (see file) and must include the appropriate extension (usually .res or .res.gz).

Value

Returns a data frame containing a detailed description (columns) of all variables (rows) contained in the selected results file.

Author(s)

<NAME>

See Also

list.files.lsd(), info.init.lsd(), info.names.lsd(), info.dimensions.lsd()

Examples

# get the list of file names of example LSD results
files <- list.files.lsd( system.file( "extdata", package = "LSDinterface" ) )

# get details about all variables in first file
info.details.lsd( files[ 1 ] )

info.dimensions.lsd    Dimension information for a LSD results file

Description

This function reads some dimension information from a LSD results file (.res): number of time steps, number of variables and the original column (variable) names.

Usage

info.dimensions.lsd( file )

Arguments

file    the name of the LSD results file which the data are to be read from. If it does not contain an absolute path, the file name is relative to the current working directory, getwd(). Tilde-expansion is performed where supported. This can be a compressed file (see file) and must include the appropriate extension (usually .res or .res.gz).

Details

The returned number of time steps does not include the initial value (t = 0) for lagged variables (the second line of a .res format file).

Value

Returns a list containing two integer values and a character vector describing the selected results file.

tSteps    Number of time steps in file
nVars    Number of variables (including duplicated instances) in file
varNames    Names of variables (including duplicated instances) in file, after R name conversion

Author(s)

<NAME>

See Also

list.files.lsd(), info.details.lsd(), info.names.lsd(), info.init.lsd()

Examples

# get the list of file names of example LSD results
files <- list.files.lsd( system.file( "extdata", package = "LSDinterface" ) )

# get dimensions from second file
info.dimensions.lsd( files[ 2 ] )

info.distance.lsd    Compute distance measure between LSD Monte Carlo time series and a set of references

Description

This function reads a 3 or 4-dimensional array produced by read.3d.lsd or read.4d.lsd and computes several types of distance measures between the time series from a set of Monte Carlo runs and a set of reference time series (like the Monte Carlo average or median).

Usage

info.distance.lsd( array, references, instance = 1,
                   distance = "euclidean", std.dist = FALSE,
                   std.val = FALSE, rank = FALSE, weights = 1,
                   seed = 1, ... )
Arguments

array    a 3D or 4D array as produced by read.3d.lsd and read.4d.lsd, where in the first dimension (rows) you have the time steps, in the second (columns), the variables, and in the third/fourth dimension, the Monte Carlo experiments, with the instances in the third dimension (4D arrays only). When 4D arrays are provided, only first instances are used in the computation.

references    a 2D matrix containing the reference time series, time in rows and variable values in named columns, from which the distance measures are to be computed. Columns must be named to exactly match the names of the desired variables (contained in array). Only variables contained in both array and references are considered in the computation. According to the distance measure chosen, the number of time steps in array and references must be the same (as in the default Euclidean distance).

instance    integer: the instance of the variable to be read, for variables that exist in more than one object (4D array only). The default (1) is to read first instances.

distance    string: the distance measure to be used. The default is to compute the Euclidean distance ("euclidean"). For a comprehensive list of measure options, please refer to TSDistances. Measure names can be abbreviated.

std.dist    a logical value indicating, if TRUE, that the computed distances must be standardized with respect to the number of time steps involved. The default, FALSE, is not to standardize distances. This is relevant for properly comparing the metrics of series containing NAs.

std.val    a logical value indicating, if TRUE, that the series values must be standardized before computing the distances. The default, FALSE, is not to standardize values. This is relevant for properly comparing the metrics of series for different variables which are not distributed over the same range of values.

rank    a logical value indicating, if TRUE, that the Monte Carlo runs must be ranked in terms of closeness to the references. The default is not to compute the run ranking, as this may be computationally expensive for some distance measures.

weights    a numerical vector containing the weights to be used for each variable in references when rank = TRUE. If the vector has named elements, the names must exactly match the names of variables in references; order is not important. Variables not named in the vector are not considered in the ranking. If the vector is not named, the order of the weights must be the same as the one used for the variables (columns) in the references matrix. If the length of weights is smaller than the number of variables and the vector is not named, it is recycled. The default is to use the same weight for all variables.

seed    a single value, interpreted as an integer to define the pseudo-random number generator state used when sampling data, or NULL, to re-initialize the generator as if no seed had yet been set (a new state is created from the current time and the process ID).

...    additional parameters required by the specific method (see TSDistances).

Details

This function is a front-end to the extensive TSdist package for interfacing it with LSD generated data. Please check the associated documentation for further information. The TSdist package provides many different distance measure alternatives, including many that allow for a different number of time steps among runs and references.

This function may also search for the Monte Carlo run which has the overall smallest (standardized) distances from the given references.
Irrespective of the options std.dist and std.val, the search always uses standardized values and distances for computation (this does not affect the distance measure matrix values).

One typical application of distance metrics is to select runs which are closer to the Monte Carlo average or median, that is, the runs which are more representative of the Monte Carlo experiment. As there is no single criterion to define such "closeness", multiple distance measures may help to identify the set of most interesting runs.

Value

Returns a list containing:

dist    a named matrix containing the distances for each Monte Carlo run (lines) and variables (columns) contained both in array and references (and weights, if provided)
close    a named matrix of Monte Carlo run (sample) names, one column per variable, sorted in increasing distance order (closest runs in first line), which can be used to index the 3D or 4D array
rank    (only if rank = TRUE) a named vector of weighted Monte Carlo run standardized distances, sorted in increasing distance order (closest run first)

Note

When comparing distance measures between different Monte Carlo runs and variables, it is important to standardize the distances and values to ensure consistency. For variables which may present NA values, setting std.dist = TRUE ensures distance comparability by dividing the absolute distance of each run-reference pair by the number of effective (non-NA) time steps. When comparing variables which are dimensionally heterogeneous, std.val = TRUE uses the relative measure (between 1 and the run value divided by the corresponding reference value) to compute the distances.

When setting std.val = TRUE, all points in which the references’ values are equal to zero are effectively removed from calculations. This behavior is always applied when searching for the closest Monte Carlo run(s).

Author(s)

<NAME>

See Also

read.3d.lsd(), read.4d.lsd(), info.stats.lsd()

Examples

# get the list of file names of example LSD results
files <- list.files.lsd( system.file( "extdata", package = "LSDinterface" ) )

# read first instance of all variables from MC files (3D array)
inst1Array <- read.3d.lsd( files )

# create statistics data frames for the variables
inst1Stats <- info.stats.lsd( inst1Array )

# compute the Euclidean distance to the mean for all variables and runs
inst1dist <- info.distance.lsd( inst1Array, inst1Stats$avg )
inst1dist$dist
inst1dist$close

# the same exercise but for a 4D array and Manhattan distance to the median
# plus indicating the Monte Carlo run closest to the median
allArray <- read.4d.lsd( files )
allStats <- info.stats.lsd( allArray, median = TRUE )
allDist <- info.distance.lsd( allArray, allStats$med, distance = "manhattan", rank = TRUE )
allDist$dist
allDist$close
allDist$rank
names( allDist$rank )[ 1 ]  # results file name of the closest run

info.init.lsd    Read initial conditions from a LSD results file

Description

This function reads the initial condition values from a LSD results file (.res).

Usage

info.init.lsd( file )

Arguments

file    the name of the LSD results file which the data are to be read from. If it does not contain an absolute path, the file name is relative to the current working directory, getwd(). Tilde-expansion is performed where supported. This can be a compressed file (see file) and must include the appropriate extension (usually .res or .res.gz).

Value

Returns a 1-line matrix containing the initial conditions (row 1) of all variables contained in the selected results file.
info.init.lsd Read initial conditions from a LSD results file
Description
This function reads the initial condition values from a LSD results file (.res).
Usage
info.init.lsd( file )
Arguments
file the name of the LSD results file which the data are to be read from. If it does not contain an absolute path, the file name is relative to the current working directory, getwd(). Tilde-expansion is performed where supported. This can be a compressed file (see file) and must include the appropriate extension (usually .res or .res.gz).
Value
Returns a 1-row matrix containing the initial conditions (row 1) of all variables contained in the selected results file.
Note
The returned matrix contains all variables in the results file, even the ones that don't have an initial condition (indicated as NA). Only variables automatically initialized by LSD in t = 1 are included here.
Author(s)
<NAME>
See Also
list.files.lsd(), info.details.lsd(), info.names.lsd(), info.dimensions.lsd()
Examples
# get the list of file names of example LSD results
files <- list.files.lsd( system.file( "extdata", package = "LSDinterface" ) )
# get initialization data from first and second files
init1 <- info.init.lsd( files[ 1 ] )
init1[ , 4 : 8 ]
init2 <- info.init.lsd( files[ 2 ] )
init2[ , 4 : 8 ]
info.names.lsd Read unique variable names from a LSD results file (no duplicates)
Description
This function reads the variable names (columns) from a LSD results file (.res). The names returned are converted to the original LSD names whenever possible and duplicates are removed.
Usage
info.names.lsd( file )
Arguments
file the name of the LSD results file which the data are to be read from. If it does not contain an absolute path, the file name is relative to the current working directory, getwd(). Tilde-expansion is performed where supported. This can be a compressed file (see file) and must include the appropriate extension (usually .res or .res.gz).
Value
Returns a character vector containing the names of all unique variables contained in the selected results file.
Note
Not all names can be automatically reconverted to the original LSD names, using LSD/C++ naming conventions. The conversion may be incorrect if the original LSD variable is named in the format "X_...".
Author(s)
<NAME>
See Also
list.files.lsd(), info.details.lsd(), info.init.lsd(), info.dimensions.lsd()
Examples
# get the list of file names of example LSD results
files <- list.files.lsd( system.file( "extdata", package = "LSDinterface" ) )
# get variable names from first file
info.names.lsd( files[ 1 ] )
info.stats.lsd Compute Monte Carlo statistics from a set of LSD runs
Description
This function reads a 3 or 4-dimensional array produced by read.3d.lsd or read.4d.lsd and produces a list with 2D matrices containing the (Monte Carlo) mean, the standard deviation, the maximum, the minimum, and other optional statistics for each variable, at each time step.
Usage
info.stats.lsd( array, rows = 1, cols = 2, median = FALSE, ci = c( "none", "mean", "median", "auto" ), ci.conf = 0.95, ci.boot = NULL, boot.R = 999, seed = 1, na.rm = TRUE, inf.rm = TRUE )
Arguments
array a 3D or 4D array as produced by read.3d.lsd and read.4d.lsd, where in the first dimension (rows) you have the time steps, in the second (columns), the variables, and in the third/fourth dimension, the Monte Carlo experiments, and the instances in the third dimension (4D arrays only).
rows an integer array dimension to be used as the rows for the statistics matrices; the default is to use the first array dimension.
cols an integer array dimension to be used as the columns for the statistics matrices; the default is to use the second array dimension.
median a logical value indicating if (TRUE) the median and the median absolute deviation should also be computed. The default (FALSE) is not to compute these statistics.
ci a character string specifying the type of confidence interval to compute, must be one of "none" (default) for no confidence interval computation, "mean", to compute a confidence interval for the mean, "median", for the median, or "auto", to use the option set for the median argument (above). This option can be abbreviated.
ci.conf confidence level of the confidence interval.
ci.boot a character string specifying the type of bootstrap confidence interval to compute, must be one of "basic", "perc" (percentile interval), or "bca" (BCa - adjusted percentile interval). If set to NULL or an empty string, a regular asymptotic confidence interval is produced (no bootstrap), assuming a normal distribution for the mean or using a non-parametric rank test for the median. Non-bootstrap percentiles are much faster to compute but generally less accurate.
boot.R number of bootstrap replicates.
seed a single value, interpreted as an integer to define the pseudo-random number generator state used for the bootstrap process, or NULL, to re-initialize the generator as if no seed had yet been set (a new state is created from the current time and the process ID).
na.rm a logical value indicating whether NA values should be stripped before the computation proceeds.
inf.rm a logical value indicating whether non-finite values should be stripped before the computation proceeds.
Value
Returns a list containing five to nine matrices, with the original size and naming of the selected 2 dimensions of the argument.
avg a matrix with the mean of the MC experiments
sd a matrix with the standard deviation of the MC experiments
max a matrix with the maximum value of the MC experiments
min a matrix with the minimum value of the MC experiments
med a matrix with the median of the MC experiments (only present if argument median = TRUE)
mad a matrix with the median absolute deviation of the MC experiments (only present if argument median = TRUE)
ci.hi a matrix with the upper limit of the confidence interval of the MC experiments (only present if argument ci is not set to "none")
ci.lo a matrix with the lower limit of the confidence interval of the MC experiments (only present if argument ci is not set to "none")
n a matrix with the number of observations available for computation of statistics
Author(s)
<NAME>
See Also
list.files.lsd(), read.3d.lsd(), read.4d.lsd(), info.dimensions.lsd()
Examples
# get the list of file names of example LSD results
files <- list.files.lsd( system.file( "extdata", package = "LSDinterface" ) )
# read first instance of all variables from MC files (3D array)
inst1Array <- read.3d.lsd( files )
# create statistics data frames for the variables
inst1Stats <- info.stats.lsd( inst1Array )
print( inst1Stats$avg[ 10 : 20, ] )
print( inst1Stats$sd[ 10 : 20, ] )
# organize the stats, including medians, by variable (dim=2) and file (dim=3)
inst1Stats2 <- info.stats.lsd( inst1Array, rows = 2, cols = 3, median = TRUE )
print( inst1Stats2$med[ , 1 : 2 ] )
# the same but for all instances of all variables (from a 4D array)
# and normal (non-bootstrap) confidence intervals for the means
allArray <- read.4d.lsd( files )
allStats <- info.stats.lsd( allArray, ci = "auto" )
print( allStats$ci.lo[ 3, 1 : 7 ] )
print( allStats$avg[ 3, 1 : 7 ] )
print( allStats$ci.hi[ 3, 1 : 7 ] )
# organize the stats by file (dim=4) and variable (dim=2)
# plus bootstrap confidence intervals for the median
allStats2 <- info.stats.lsd( allArray, rows = 4, cols = 2, median = TRUE, ci = "auto", ci.boot = "bca" )
print( allStats2$ci.lo[ , 1 : 3 ] )
print( allStats2$med[ , 1 : 3 ] )
print( allStats2$ci.hi[ , 1 : 3 ] )
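A minimal sketch of how the returned matrices can be combined, assuming the allStats object with confidence intervals created in the examples above (base R graphics only; the first variable is chosen arbitrarily):
var <- colnames( allStats$avg )[ 1 ]
plot( allStats$avg[ , var ], type = "l", xlab = "time", ylab = var,
      ylim = range( allStats$ci.lo[ , var ], allStats$ci.hi[ , var ], na.rm = TRUE ) )
lines( allStats$ci.lo[ , var ], lty = 2 ) # lower confidence bound
lines( allStats$ci.hi[ , var ], lty = 2 ) # upper confidence bound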
list.files.lsd List results files from a set of LSD runs
Description
This function produces a character vector of the names of results files produced after the execution of LSD simulation runs. The list can be used with all functions in this package requiring the argument files.
Usage
list.files.lsd( path = ".", conf.name = "", type = c( "res", "tot", "csv" ), compressed = NULL, recursive = FALSE, join = FALSE, full.names = FALSE, sensitivity = FALSE )
Arguments
path a character vector of full or relative path name to the base directory from where to search the files; the default corresponds to the working directory, getwd(). Tilde expansion is performed. Alternatively, the full path and name of the corresponding LSD configuration file (including the .lsd extension) can be provided.
conf.name the LSD configuration file name (optionally including the .lsd extension) used to generate the desired results files; the default is to return all results files, irrespective of the configuration file used. Alternatively, a regular expression can be supplied. This argument takes precedence over any configuration file name provided together with the path argument.
type the type (format/extension) of LSD results files to use among the options c( "res", "tot", "csv" ), used to define the extension of the files to be considered. "res" is the default. This option can be abbreviated.
compressed a logical value indicating whether to look only for compressed files with the .gz extension (TRUE), or only for uncompressed ones (FALSE). The default (NULL) is to list files irrespective of compression.
recursive a logical value indicating if the listing should recurse into sub-directories of path. The default (FALSE) is to scan just the sub-directory with the same name as conf.name (without the .lsd extension or numeric tags), if present (a regular expression in conf.name is not considered), and path. If TRUE, the entire sub-directory tree, starting at path, is scanned for files.
join a logical value indicating if results files from multiple sub-directories should be joined together in the return list. The default (FALSE) is to list files from just a single sub-directory, the first one found during the search starting from path.
full.names a logical value specifying if (TRUE) the file names should be expanded to absolute path names. The default (FALSE) is to use relative (to path) file names.
sensitivity a logical value specifying if (TRUE) the target results files are part of a sensitivity analysis design of experiment (DoE), which are double numbered in a particular format (conf.name_XXX_YYY.res[.gz]). The default (FALSE) is to assume files are just single numbered, which is usually inappropriate for DoE results files. See the LSDsensitivity package documentation for details.
Details
The order by which sub-directories are explored may be relevant. By default, the function scans for results files in a sub-directory named as conf.name, if present, in the given initial directory path. Next, if conf.name has a numeric suffix in the format name_XXX, where XXX is any number of digits, it searches the sub-directory name, if present. Finally, it scans the initial path itself. If results files are present in more than one sub-directory, the function returns only the files found in the first one (except if join = TRUE), and issues a warning message. If recursive = TRUE, the file search starts from path and proceeds until it encompasses the entire sub-directory tree. In this case, if multiple sub-directories contain the desired files, only the initial path takes precedence, and the rest of the tree is traversed in alphabetical order.
Please note that joining files from different sub-directories (join = TRUE) may combine results with incompatible data which cannot be processed together by the read.xxx.lsd() family of functions.
Value
A character vector containing the names of the found results files in the specified (sub) directories (empty if there were no files). If a path does not exist or is not a directory or is unreadable it is skipped.
Note
File naming conventions are platform dependent. The pattern matching works with the case of file names as returned by the OS. path must specify paths which can be represented in the current codepage, and files/directories below path whose names cannot be represented in that codepage will most likely not be found.
Author(s)
<NAME>
See Also
read.3d.lsd(), read.4d.lsd(), read.raw.lsd(), read.single.lsd(), read.multi.lsd(), read.list.lsd(), LSDsensitivity package
Examples
# get the names of all files in the example directory
list.files.lsd( system.file( "extdata", package = "LSDinterface" ) )
# expand search to the entire example directory tree
# for results from a configuration file named "Sim1.lsd"
# and join files found in all sub-directories containing data
list.files.lsd( system.file( "extdata", package = "LSDinterface" ), "Sim1.lsd", recursive = TRUE, join = TRUE )
name.check.lsd Check a set of LSD variables names against a LSD results file
Description
This function checks if all variable names in a set are valid for a LSD results file (.res). If no name is provided, the function returns all the valid unique variable names in the file.
Usage
name.check.lsd( file, col.names = NULL, check.names = TRUE )
Arguments
file the name of the LSD results file which the data are to be read from. If it does not contain an absolute path, the file name is relative to the current working directory, getwd(). This can be a compressed file (see file) and must include the appropriate extension (usually .res or .res.gz).
col.names a vector of optional names for the variables. The default is to read all (unique) variable names.
check.names logical. If TRUE then the names of the variables are checked to ensure that they are syntactically valid variable names. If necessary they are adjusted to ensure that there are no duplicates.
Value
Returns a string vector containing the (original) valid variable names contained in the results file, using LSD/C++ naming conventions.
Author(s)
<NAME>
See Also
list.files.lsd(), info.names.lsd()
Examples
# get the list of file names of example LSD results
files <- list.files.lsd( system.file( "extdata", package = "LSDinterface" ) )
# check all variable names
name.check.lsd( files[ 1 ] )
# check just two names
name.check.lsd( files[ 2 ], col.names = c( "GDP", "_growth1" ) )
name.clean.lsd Get clean (R) variable name
Description
This function produces a more appropriate variable name from R initial column name conversion.
Usage
name.clean.lsd( r.name )
Arguments
r.name a string, a vector of strings, or an object which can be coerced to a character vector by as.character, from the column names produced by reading a LSD results file.
Details
The function removes the extra/ending '.' characters introduced by R and introduces a '_' between time span values.
Value
A string or a string vector with the same attributes as r.name (after possible coercion) and the format NAME.POSITION.INI_END.
Author(s)
<NAME>
See Also
name.var.lsd(), name.nice.lsd(), info.names.lsd()
Examples
name.clean.lsd( "Var1.1_1..1.100." )
name.clean.lsd( c( "Var1.1_1..1.100.", "Var2.1_2_3..50.70." ) )
name.nice.lsd Get a nice (R) variable name
Description
This function produces a nicer variable name from R initial column name conversion, in particular removing leading underscores.
Usage
name.nice.lsd( r.name )
Arguments
r.name a string, a vector of strings, or an object which can be coerced to a character vector by as.character, from the column names produced by reading a LSD results file.
Details
The function removes the extra/ending '.' characters introduced by R, introduces a '_' between time span values, and deletes leading underscores ('_', converted to 'X_' by R).
Value
A string or a string vector with the same attributes as r.name (after possible coercion) and the format NAME[.POSITION.INI_END].
Author(s)
<NAME>
See Also
name.var.lsd(), name.clean.lsd(), info.names.lsd()
Examples
name.nice.lsd( "X_Var1.1_1..1.100." )
name.nice.lsd( c( "_Var1.1_1..1.100.", "X_Var2.1_2_3..50.70." ) )
name.nice.lsd( c( "_Var1", "X_Var2" ) )
name.r.unique.lsd Get valid unique R variable name
Description
This function produces a valid and unique variable name from names produced from multi-instanced LSD variables (as in read.raw.lsd).
Usage
name.r.unique.lsd( r.name )
Arguments
r.name a string, a vector of strings, or an object which can be coerced to a character vector by as.character, from the column names produced by reading a LSD results file.
Details
The function removes the trailing '.' characters, and the text between them, introduced during the conversion from LSD results files, and adds an 'X' prefix to names starting with an '_'. After this initial transformation, all repeated variable names (originating from multi-instanced variables) are removed. The produced names are valid R variable names, similar to the original LSD/C++ variable names, but with an 'X' prepended to variables starting with an '_' (which is invalid in R).
Value
A string or a string vector of converted string(s) including only non-repeated ones.
Author(s)
<NAME>
See Also
name.var.lsd(), name.clean.lsd(), name.nice.lsd(), info.names.lsd()
Examples
name.r.unique.lsd( "Var1.1_1.1_100" )
name.r.unique.lsd( c( "Var1.1_1.1_100", "_Var2.1_1.1_100", "_Var2.1_2.50_70" ) )
name.var.lsd Get original LSD variable name
Description
This function generates the original LSD variable name, as it was defined in LSD and before R adjusts the name, from an R column name (with or without position or timing information appended).
Usage
name.var.lsd( r.name )
Arguments
r.name a string, a vector of strings, or an object which can be coerced to a character vector by as.character, from the column names produced by reading a LSD results file.
Details
The conversion may be incorrect if the original LSD variable is named in the format "X_...". No checking is done to make sure the variable really exists.
Value
A string or a string vector with the same attributes as r.name (after possible coercion).
Author(s)
<NAME>
See Also
name.clean.lsd(), info.names.lsd()
Examples
name.var.lsd( "label" )
name.var.lsd( c( "label", "X_underlinelabel" ) )
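A minimal sketch of a typical use of these conversions, assuming the example files used throughout this manual: recover the original LSD names of the columns of a matrix produced by read.raw.lsd( ) (documented further below):
files <- list.files.lsd( system.file( "extdata", package = "LSDinterface" ) )
rawNames <- colnames( read.raw.lsd( files[ 1 ] ) )
name.var.lsd( rawNames[ 1 : 5 ] ) # original LSD names of the first 5 columns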
read.3d.lsd Read one instance of LSD variables (time series) from multiple LSD results files into a 3D array
Description
This function reads the data series associated to a specific instance of each selected variable from a set of LSD results files (.res) and saves them into a 3-dimensional array (time step x variable x file).
Usage
read.3d.lsd( files, col.names = NULL, nrows = -1, skip = 0, check.names = TRUE, instance = 1, nnodes = 1, posit = NULL, posit.match = c( "fixed", "glob", "regex" ) )
Arguments
files a character vector containing the names of the LSD results files which the data are to be read from. If they do not contain an absolute path, the file names are relative to the current working directory, getwd(). These can be compressed files and must include the appropriate extension (usually .res or .res.gz).
col.names a vector of optional names for the variables. The default is to read all variables.
nrows integer: the maximum number of time steps (rows) to read in. Negative and other invalid values are ignored. The default is to read all rows.
skip integer: the number of time steps (rows) of the results file to skip before beginning to read data. The default is to read from the first time step (t = 1).
check.names logical. If TRUE the names of the variables are checked to ensure that they are syntactically valid variable names. If necessary they are adjusted (by make.names) so that they are, and also to ensure that there are no duplicates.
instance integer: the instance of the variable to be read, for variables that exist in more than one object. This number is based on the position (column) of the variable in the results file. The default (1) is to read first instances.
nnodes integer: the maximum number of parallel computing nodes (parallel threads) in the current computer to be used for reading the files. The default, nnodes = 1, means single thread processing (no parallel threads). If equal to zero, creates up to one node per CPU core. Only PSOCK clusters are used, to ensure compatibility with any platform. Please note that each node requires its own memory space, so memory usage increases linearly with the number of nodes.
posit a string, a vector of strings or an integer vector describing the LSD object position of the variable(s) to select. If an integer vector, it should define the position of a SINGLE LSD object. If a string or vector of strings, each element should define one or more different LSD objects, so the returned array may contain variables from more than one object. By setting posit.match, globbing (wildcard), and regular expressions can be used to select multiple objects at once.
posit.match a string defining how the posit argument, if provided, should be matched against the LSD object positions. If equal to "fixed", the default, only exact matching is done. "glob" allows using simple wildcard characters ('*' and '?') in posit for matching. If posit.match = "regex", posit is interpreted as POSIX 1003.2 extended regular expression(s). See regular expressions for details of the different types of regular expressions. Options can be abbreviated.
Details
Selection restriction arguments can be provided as needed; when not specified, all available cases are considered, but just one instance is read. When posit is supplied together with col.names or instance, the selection process is done in two steps. Firstly, the column names and the instance position set by col.names and instance are selected. Secondly, the instances defined by posit are selected from the first selection set. See select.colnames.lsd and select.colattrs.lsd for examples on how to apply advanced selection options.
Value
Returns a 3D array containing data series from the selected variables. The array dimension order is: time x variable x file.
Note
If, after column selection, the selected files don't have the same columns available (names and instances), an error is produced.
Author(s)
<NAME>
See Also
list.files.lsd(), read.4d.lsd(), read.single.lsd(), read.multi.lsd(), read.list.lsd(), read.raw.lsd()
Examples
# get the list of file names of example LSD results
files <- list.files.lsd( system.file( "extdata", package = "LSDinterface" ) )
# read first instance of all variables from the example files
# (the full path is only required if they are not in the working dir)
inst1Array <- read.3d.lsd( files )
print( inst1Array[ 5 : 10, 1 : 7, 1 ] )
print( inst1Array[ 5 : 10, 1 : 7, 2 ] )
print( inst1Array[ 5 : 10, 1 : 7, 3 ] )
# read first instance of a set of variables named _A1p and _growth1
ab1Array <- read.3d.lsd( files, c( "_A1p", "_growth1" ) )
print( ab1Array[ 20 : 25, , 1 ] )
print( ab1Array[ 20 : 25, , 2 ] )
print( ab1Array[ 20 : 25, , 3 ] )
# read instance 2 of all variables, skipping the initial 20 time steps
# and keeping up to 30 time steps (from t = 21 up to t = 30)
inst2Array21_30 <- read.3d.lsd( files, skip = 20, nrows = 30, instance = 2 )
print( inst2Array21_30[ , , "Sim1_1" ] ) # use the file name to retrieve
print( inst2Array21_30[ , , "Sim1_2" ] )
# read instance 2 of all variables in second-level objects, using up to 2 cores
inst2array2 <- read.3d.lsd( files, instance = 2, posit = "*_*", posit.match = "glob", nnodes = 2 )
print( inst2array2[ 11 : 20, , 1 ] )
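Because the dimension order is fixed (time x variable x file), Monte Carlo summaries can also be computed directly with apply( ). A minimal sketch, assuming the inst1Array object created in the examples above:
# per-file means of every variable (a variable x file matrix)
fileMeans <- apply( inst1Array, c( 2, 3 ), mean, na.rm = TRUE )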
read.4d.lsd Read multiple instances of LSD variables (time series) from a set of LSD results files into a 4D array
Description
This function reads the data series associated to a set of instances of each selected variable from a set of LSD results files (.res) and saves them into a 4-dimensional array (time x variable x instance x file).
Usage
read.4d.lsd( files, col.names = NULL, nrows = -1, skip = 0, check.names = TRUE, pool = FALSE, nnodes = 1, posit = NULL, posit.match = c( "fixed", "glob", "regex" ) )
Arguments
files a character vector containing the names of the LSD results files which the data are to be read from. If they do not contain an absolute path, the file names are relative to the current working directory, getwd(). These can be compressed files and must include the appropriate extension (usually .res or .res.gz).
col.names a vector of optional names for the variables. The default is to read all variables.
nrows integer: the maximum number of time steps (rows) to read in. Negative and other invalid values are ignored. The default is to read all rows.
skip integer: the number of time steps (rows) of the results file to skip before beginning to read data. The default is to read from the first time step (t = 1).
check.names logical. If TRUE the names of the variables are checked to ensure that they are syntactically valid variable names. If necessary they are adjusted (by make.names) so that they are, and also to ensure that there are no duplicates.
pool logical. If TRUE, variable instances from all files are concatenated (by columns) as a single 3-dimensional array. If FALSE (the default), each file is saved as a separate dimension (the fourth) in the array.
nnodes integer: the maximum number of parallel computing nodes (parallel threads) in the current computer to be used for reading the files. The default, nnodes = 1, means single thread processing (no parallel threads). If equal to zero, creates up to one node per CPU core. Only PSOCK clusters are used, to ensure compatibility with any platform. Please note that each node requires its own memory space, so memory usage increases linearly with the number of nodes.
posit a string, a vector of strings or an integer vector describing the LSD object position of the variable(s) to select. If an integer vector, it should define the position of a SINGLE LSD object. If a string or vector of strings, each element should define one or more different LSD objects, so the returned array will contain variables from more than one object. By setting posit.match, globbing (wildcard), and regular expressions can be used to select multiple objects at once; in this case, all matching objects are returned.
posit.match a string defining how the posit argument, if provided, should be matched against the LSD object positions. If equal to "fixed", the default, only exact matching is done. "glob" allows using simple wildcard characters ('*' and '?') in posit for matching. If posit.match = "regex", posit is interpreted as POSIX 1003.2 extended regular expression(s). See regular expressions for details of the different types of regular expressions. Options can be abbreviated.
Details
Selection restriction arguments can be provided as needed; when not specified, all available cases are selected. When posit is supplied together with col.names, the selection process is done in two steps. Firstly, the column names set by col.names are selected. Secondly, the instances defined by posit are selected from the first selection set. See select.colnames.lsd and select.colattrs.lsd for examples on how to apply advanced selection options.
Value
Returns a 4D array containing data series for each instance from the selected variables. The array dimension order is: time x variable x instance x file. When pool = TRUE, the produced array is 3-dimensional. Pooling requires that the selected columns contain EXACTLY the same variables (the number of instances may be different).
Note
If, after column selection, the selected files don't have the same columns available (names), an error is produced. When using the option pool = TRUE, columns from multiple files are consolidated with their original names plus the file name, to keep all column names unique. Use name.var.lsd to get just the LSD name of the variable corresponding to each column.
Author(s)
<NAME>
See Also
list.files.lsd(), read.3d.lsd(), read.single.lsd(), read.multi.lsd(), read.list.lsd(), read.raw.lsd()
Examples
# get the list of file names of example LSD results
files <- list.files.lsd( system.file( "extdata", package = "LSDinterface" ) )
# read all instances of all variables from files
allArray <- read.4d.lsd( files )
print( allArray[ 1 : 10, 1 : 7, 1, 1 ] ) # 1st instance of 1st file (7 vars and 10 times)
print( allArray[ 11 : 20, "X_A1p", , "Sim1_2" ] ) # all instances of _A1p in Sim1_2 (10 times)
print( allArray[ 50, 9, , ] ) # all instances of all files of 9th variable for t=50
# the same, but pooling all files into a single (3D!) array
allArrayPool <- read.4d.lsd( files, pool = TRUE )
print( allArrayPool[ 1 : 10, 8 : 9, 3 ] ) # 3rd instances of last 2 vars (10 times)
print( allArrayPool[ 11 : 20, "X_A1p", 4 : 9 ] ) # 6 instances of _A1p variable (10 times)
print( allArrayPool[ 50, 9, 4 : 9 ] ) # 6 instances of all files of 9th variable for t=50
# read instances of a set of variables named '_A1p' and '_growth1'
abArray <- read.4d.lsd( files, c( "_A1p", "_growth1" ) )
print( abArray[ 1 : 10, , 1, 2 ] ) # 1st instances of 2nd file (all vars and 10 times)
print( abArray[ 11 : 20, 2, , "Sim1_3" ] ) # all instances of 2nd variable in Sim1_3 (10 times)
print( abArray[ 50, "X_A1p", , ] ) # all instances of all files of _A1p variable for t=50
# read all instances of all variables, skipping the initial 20 time steps
# and keeping up to 30 time steps (from t = 21 up to t = 30)
allArray21_30 <- read.4d.lsd( files, skip = 20, nrows = 30 )
print( allArray21_30[ , "X_growth1", , 2 ] ) # all instances of _growth1 variable in 2nd file
print( allArray21_30[ 10, 8, , ] ) # all instances of all files of 8th variable for t=30
# read all variables in second-level objects, using up to 2 cores for processing
abArray2 <- read.4d.lsd( files, posit = "*_*", posit.match = "glob", nnodes = 2 )
print( abArray2[ 11 : 20, , 5, "Sim1_1" ] ) # 5th instances in Sim1_1 file
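A minimal sketch of aggregating over instances, assuming the allArray object created in the examples above: averaging over the instance dimension (the 3rd one) yields a 3D array (time x variable x file) comparable to the output of read.3d.lsd( ):
instAvg <- apply( allArray, c( 1, 2, 4 ), mean, na.rm = TRUE )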
read.list.lsd Read one or more instances of LSD variables (time series) from a set of LSD results files into a list
Description
This function reads the data series associated to a specific instance or a set of instances of each selected variable from a set of LSD results files (.res) and saves them into separate matrices (one per file).
Usage
read.list.lsd( files, col.names = NULL, nrows = -1, skip = 0, check.names = TRUE, instance = 0, pool = FALSE, nnodes = 1, posit = NULL, posit.match = c( "fixed", "glob", "regex" ) )
Arguments
files a character vector containing the names of the LSD results files which the data are to be read from. If they do not contain an absolute path, the file names are relative to the current working directory, getwd(). These can be compressed files and must include the appropriate extension (usually .res or .res.gz).
col.names a vector of optional names for the variables. The default is to read all variables.
nrows integer: the maximum number of time steps (rows) to read in. Negative and other invalid values are ignored. The default is to read all rows.
skip integer: the number of time steps (rows) of the results file to skip before beginning to read data. The default is to read from the first time step (t = 1).
check.names logical. If TRUE the names of the variables are checked to ensure that they are syntactically valid variable names. If necessary they are adjusted (by make.names) so that they are, and also to ensure that there are no duplicates.
instance integer: the instance of the variable to be read, for variables that exist in more than one object. This number is based on the position (column) of the variable in the results file. The default (0) is to read all instances.
pool logical. If TRUE, variable instances from all files are concatenated (by columns) into a single matrix. If FALSE (the default), each file is saved as a separate matrix in a list.
nnodes integer: the maximum number of parallel computing nodes (parallel threads) in the current computer to be used for reading the files. The default, nnodes = 1, means single thread processing (no parallel threads). If equal to zero, creates up to one node per CPU core. Only PSOCK clusters are used, to ensure compatibility with any platform. Please note that each node requires its own memory space, so memory usage increases linearly with the number of nodes.
posit a string, a vector of strings or an integer vector describing the LSD object position of the variable(s) to select. If an integer vector, it should define the position of a SINGLE LSD object. If a string or vector of strings, each element should define one or more different LSD objects, so the returned matrix will contain variables from more than one object. By setting posit.match, globbing (wildcard), and regular expressions can be used to select multiple objects at once; in this case, all matching objects are returned.
posit.match a string defining how the posit argument, if provided, should be matched against the LSD object positions. If equal to "fixed", the default, only exact matching is done. "glob" allows using simple wildcard characters ('*' and '?') in posit for matching. If posit.match = "regex", posit is interpreted as POSIX 1003.2 extended regular expression(s). See regular expressions for details of the different types of regular expressions. Options can be abbreviated.
Details
Selection restriction arguments can be provided as needed; when not specified, all available cases are selected. When posit is supplied together with col.names or instance, the selection process is done in two steps. Firstly, the column names and instance positions set by col.names and instance are selected. Secondly, the instances defined by posit are selected from the first selection set. See select.colnames.lsd and select.colattrs.lsd for examples on how to apply advanced selection options.
Value
Returns a named list of matrices with the selected variables' time series in the results files. If pool = TRUE, the return value is a single, consolidated matrix (column names are made unique by appending the file names, see the Note below). The matrices dimension order is: time x variable.
Matrix column names are only "cleaned" if there are just single-instanced variables selected. When multiple-instanced variables are present, the column names include all the header information contained in the LSD results file. The name of the LSD variable associated to any column name can be retrieved with name.var.lsd.
Note
When using the option pool = TRUE, columns from multiple files are consolidated with their original names plus the file name, to keep all column names unique. Use name.var.lsd to get just the LSD name of the variable corresponding to each column.
The returned matrices may be potentially very wide, in particular if variables are not well selected (see col.names above) or if there is a large number of instances.
Author(s)
<NAME>
See Also
list.files.lsd(), name.var.lsd(), read.single.lsd(), read.multi.lsd(), read.3d.lsd(), read.4d.lsd(), read.raw.lsd()
Examples
# get the list of file names of example LSD results
files <- list.files.lsd( system.file( "extdata", package = "LSDinterface" ) )
# read all instances of all variables from three files (one matrix each)
tableList <- read.list.lsd( files )
print( tableList[[ 1 ]][ 1 : 5, 1 : 7 ] )
print( tableList[[ 2 ]][ 1 : 5, 1 : 7 ] )
print( tableList[[ 3 ]][ 1 : 5, 1 : 7 ] )
# read all instances of a set of variables named '_A1p' and '_growth1'
# and pool data into a single matrix
abTable <- read.list.lsd( files, c( "_A1p", "_growth1" ), pool = TRUE )
print( abTable[ 10 : 20, 10 : 12 ] )
# read instance 4 of all variables, skipping the initial 20 time steps
# and keeping up to 30 time steps (from t = 21 up to t = 30)
inst4List21_30 <- read.list.lsd( files, skip = 20, nrows = 30, instance = 4 )
print( inst4List21_30[[ 1 ]] )
print( inst4List21_30[[ 2 ]] )
# read all variables in top-level objects, using up to 2 cores for processing
instTop <- read.list.lsd( files, posit = 1, nnodes = 2 )
print( instTop$Sim1_1[ 11 : 20, ] ) # use the file name to retrieve list item
print( instTop$Sim1_2[ 11 : 20, ] )
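The returned list works naturally with the apply family of functions. A minimal sketch, assuming the tableList object created in the examples above:
# number of columns (variable instances) selected in each file
sapply( tableList, ncol )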
read.multi.lsd Read all instances of LSD variables (time series) from a LSD results file
Description
This function reads the data series associated to all instances of each selected variable from a LSD results file (.res).
Usage
read.multi.lsd( file, col.names = NULL, nrows = -1, skip = 0, check.names = TRUE, posit = NULL, posit.match = c( "fixed", "glob", "regex" ), posit.cols = FALSE )
Arguments
file the name of the LSD results file which the data are to be read from. If it does not contain an absolute path, the file name is relative to the current working directory, getwd(). This can be a compressed file (see file) and must include the appropriate extension (usually .res or .res.gz).
col.names a vector of optional names for the variables. The default is to read all variables.
nrows integer: the maximum number of time steps (rows) to read in. Negative and other invalid values are ignored. The default is to read all rows.
skip integer: the number of time steps (rows) of the results file to skip before beginning to read data. The default is to read from the first time step (t = 1).
check.names logical. If TRUE the names of the variables are checked to ensure that they are syntactically valid variable names. If necessary they are adjusted (by make.names) so that they are, and also to ensure that there are no duplicates.
posit a string, a vector of strings or an integer vector describing the LSD object position of the variable(s) to select. If an integer vector, it should define the position of a SINGLE LSD object. If a string or vector of strings, each element should define one or more different LSD objects, so the returned matrix will contain variables from more than one object. By setting posit.match, globbing (wildcard), and regular expressions can be used to select multiple objects at once; in this case, all matching objects are returned.
posit.match a string defining how the posit argument, if provided, should be matched against the LSD object positions. If equal to "fixed", the default, only exact matching is done. "glob" allows using simple wildcard characters ('*' and '?') in posit for matching. If posit.match = "regex", posit is interpreted as POSIX 1003.2 extended regular expression(s). See regular expressions for details of the different types of regular expressions. Options can be abbreviated.
posit.cols logical. If TRUE just the position information is used as the names of the columns in each variable list. If FALSE, the default, the column names include all the header information contained in the LSD results file (name, position and time span).
Details
Selection restriction arguments can be provided as needed; when not specified, all available cases are selected. When posit is supplied together with col.names, the selection process is done in two steps. Firstly, the column names set by col.names are selected. Secondly, the instances defined by posit are selected from the first selection set. See select.colnames.lsd and select.colattrs.lsd for examples on how to apply advanced selection options.
Value
Returns a named list of matrices, each containing one of the selected variables' time series from the results file. Variable names are converted to valid R ones when defining list names. Matrix column names are not "cleaned", even for single-instanced variables. The column names include all the header information contained in the LSD results file.
Note
For extracting data from multiple similar files (like sensitivity analysis results), see read.list.lsd.
Author(s)
<NAME>
See Also
list.files.lsd(), read.single.lsd(), read.list.lsd(), read.3d.lsd(), read.4d.lsd(), read.raw.lsd()
Examples
# get the list of file names of example LSD results
files <- list.files.lsd( system.file( "extdata", package = "LSDinterface" ) )
# load first .res file into a named list (one matrix per variable, all instances)
macroList <- read.multi.lsd( files[ 1 ] )
length( macroList ) # number of lists holding variables
names( macroList ) # name of each list
print( macroList[[ 1 ]][ 1 : 5, , drop = FALSE ] )
print( macroList$X_A1p[ 10 : 20, ] )
# read all instances of 2 variables, skipping the initial 20 time steps
# and keeping up to 30 time steps (from t = 21 up to t = 30), positions in cols
varsList21_30 <- read.multi.lsd( files[ 2 ], c( "_A1p", "_growth1" ), skip = 20, nrows = 30, posit.cols = TRUE )
print( varsList21_30[[ 1 ]] )
print( varsList21_30$X_growth1 )
read.raw.lsd Read LSD results file and clean variables names
Description
This function reads all the data series in a LSD results file (.res).
Usage
read.raw.lsd( file, nrows = -1, skip = 0, col.names = NULL, check.names = TRUE, clean.names = FALSE, instance = 0, posit = NULL, posit.match = c( "fixed", "glob", "regex" ) )
Arguments
file the name of the LSD results file which the data are to be read from. If it does not contain an absolute path, the file name is relative to the current working directory, getwd(). This can be a compressed file (see file) and must include the appropriate extension (usually .res or .res.gz).
nrows integer: the maximum number of time steps (rows) to read in. Negative and other invalid values are ignored. The default is to read all rows.
skip integer: the number of time steps (rows) of the results file to skip before beginning to read data. The default is to read from the first time step (t = 1).
col.names a vector of optional names for the variables. The default is to read all variables. The names must be in LSD/C++ format, without dots (".") in the name. Any dot (and trailing characters) will be automatically removed.
check.names logical. If TRUE the names of the variables are checked to ensure that they are syntactically valid variable names. If necessary they are adjusted to ensure that there are no duplicates.
clean.names logical. If TRUE the names of the variables in the columns are "cleaned" to remove extra information from the header in the LSD results file. This option is incompatible (and will be ignored) when multiple instances of a single variable are selected. If FALSE, the default, the extra information in the names is preserved.
instance integer: the instance of the variable to be read, for variables that exist in more than one object. This number is based on the relative position (column) of the variable in the results file. The default (0) is to read all instances.
posit a string, a vector of strings or an integer vector describing the LSD object position of the variable(s) to select. If an integer vector, it should define the position of a SINGLE LSD object. If a string or vector of strings, each element should define one or more different LSD objects, so the returned matrix will contain variables from more than one object. By setting posit.match, globbing (wildcard), and regular expressions can be used to select multiple objects at once; in this case, all matching objects are returned.
posit.match a string defining how the posit argument, if provided, should be matched against the LSD object positions. If equal to "fixed", the default, only exact matching is done. "glob" allows using simple wildcard characters ('*' and '?') in posit for matching. If posit.match = "regex", posit is interpreted as POSIX 1003.2 extended regular expression(s). See regular expressions for details of the different types of regular expressions. Options can be abbreviated.
Details
Selection restriction arguments can be provided as needed; when not specified, all available cases are selected. When posit is supplied together with col.names or instance, the selection process is done in two steps. Firstly, the column names and instance positions set by col.names and instance are selected. Secondly, the instances defined by posit are selected from the first selection set. See select.colnames.lsd and select.colattrs.lsd for examples on how to apply advanced selection options.
Value
Returns a single matrix containing all variables' time series contained in the results file.
Note
The returned matrix may be potentially very wide. See read.single.lsd for more polished column names. To use multiple results files simultaneously, see read.list.lsd and read.3d.lsd.
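A minimal sketch of a common pattern, assuming the files and bigTable objects created in the Examples section below: read the raw matrix once, then take several subsets with select.colnames.lsd( ) (documented further below), avoiding re-reading the file:
growth <- select.colnames.lsd( bigTable, "_growth1" ) # all instances of _growth1
a1p <- select.colnames.lsd( bigTable, "_A1p" ) # all instances of _A1p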
Author(s)
<NAME>
See Also
list.files.lsd(), read.single.lsd(), read.multi.lsd(), read.list.lsd(), read.3d.lsd(), read.4d.lsd(), select.colattrs.lsd(), select.colnames.lsd()
Examples
# get the list of file names of example LSD results
files <- list.files.lsd( system.file( "extdata", package = "LSDinterface" ) )
# read all instances of all variables of the first file
bigTable <- read.raw.lsd( files[ 1 ] )
print( bigTable[ 1 : 5, 1 : 7 ] )
# read all instances of all variables, skipping the initial 20 time steps
# and keeping up to 30 time steps (from t = 21 up to t = 30)
all21_30 <- read.raw.lsd( files[ 2 ], skip = 20, nrows = 30 )
print( all21_30[ , 1 : 7 ] )
# read the third instance of a set of variables named '_A1p' and '_growth1'
abTable <- read.raw.lsd( files[ 1 ], col.names = c( "_A1p", "_growth1" ), instance = 3 )
print( abTable[ 10 : 20, ] )
# read instances of variable '_A1p' for the second and fourth objects under
# any top-level object (use globbing)
a24 <- read.raw.lsd( files[ 1 ], col.names = "_A1p", posit = c( "*_2", "*_4" ), posit.match = "glob" )
print( a24[ 1 : 10, ] )
read.single.lsd Read LSD variables (time series) from a LSD results file (a single instance of each variable only)
Description
This function reads the data series associated to one instance of each selected variable from a LSD results file (.res). Just a single instance (the time series of a single LSD object) is read at each call.
Usage
read.single.lsd( file, col.names = NULL, nrows = -1, skip = 0, check.names = TRUE, instance = 1, posit = NULL, posit.match = c( "fixed", "glob", "regex" ) )
Arguments
file the name of the LSD results file which the data are to be read from. If it does not contain an absolute path, the file name is relative to the current working directory, getwd(). This can be a compressed file (see file) and must include the appropriate extension (usually .res or .res.gz).
col.names a vector of optional names for the variables. The default is to read all variables. The names must be in LSD/C++ format, without dots (".") in the name. Any dot (and trailing characters) will be automatically removed.
nrows integer: the maximum number of time steps (rows) to read in. Negative and other invalid values are ignored. The default is to read all rows.
skip integer: the number of time steps (rows) of the results file to skip before beginning to read data. The default is to read from the first time step (t = 1).
check.names logical. If TRUE the names of the variables are checked to ensure that they are syntactically valid variable names. If necessary they are adjusted to ensure that there are no duplicates.
instance integer: the instance of the variable to be read, for variables that exist in more than one object. This number is based on the relative position (column) of the variable in the results file. The default (1) is to read the first instance.
posit a string, a vector of strings or an integer vector describing the LSD object position of the variable(s) to select. If an integer vector, it should define the position of a SINGLE LSD object. If a string or vector of strings, each element should define one or more different LSD objects, so the returned matrix will contain variables from more than one object. By setting posit.match, globbing (wildcard), and regular expressions can be used to select multiple objects at once.
posit.match a string defining how the posit argument, if provided, should be matched against the LSD object positions. If equal to "fixed", the default, only exact matching is done. "glob" allows using simple wildcard characters ('*' and '?') in posit for matching. If posit.match = "regex", posit is interpreted as POSIX 1003.2 extended regular expression(s). See regular expressions for details of the different types of regular expressions. Options can be abbreviated.
Details
Selection restriction arguments can be provided as needed; when not specified, all available cases are considered, but just one instance is read. When posit is supplied together with col.names or instance, the selection process is done in two steps. Firstly, the column names and the instance position set by col.names and instance are selected. Secondly, the instances defined by posit are selected from the first selection set. See select.colnames.lsd and select.colattrs.lsd for examples on how to apply advanced selection options.
Value
Returns a matrix containing the selected variables' time series contained in the results file.
Note
This function is useful to extract time series for variables that are single-instanced, like summary statistics. For multi-instanced variables, see read.multi.lsd. For extracting data from multiple similar files (like sensitivity analysis results), see read.list.lsd (multi-instanced variables) and read.3d.lsd (single-instanced variables).
Author(s)
<NAME>
See Also
list.files.lsd(), read.multi.lsd(), read.list.lsd(), read.3d.lsd(), read.4d.lsd(), read.raw.lsd()
Examples
# get the list of file names of example LSD results
files <- list.files.lsd( system.file( "extdata", package = "LSDinterface" ) )
# load first .res file into a simple matrix (first instances only)
macroVar <- read.single.lsd( files[ 1 ] )
print( macroVar[ 10 : 20, 5 : 9 ] )
# read second instance of a set of variables named '_A1p' and '_growth1'
ag2Table <- read.single.lsd( files[ 2 ], col.names = c( "_A1p", "_growth1" ), instance = 2 )
print( ag2Table[ 10 : 15, ] )
# read first instance of all variables, skipping the initial 20 time steps
# and keeping up to 30 time steps (from t = 21 up to t = 30)
var21_30 <- read.single.lsd( files[ 3 ], skip = 20, nrows = 30 )
print( var21_30[ , 1 : 7 ] )
# read third instance of all variables at the second object level
var2_3_5 <- read.single.lsd( files[ 1 ], instance = 3, posit = "*_*", posit.match = "glob" )
print( var2_3_5[ 20 : 25, ] )
select.colattrs.lsd Select a subset of a LSD results matrix (by variable attributes)
Description
This function selects a subset of a LSD results matrix (as produced by read.raw.lsd) by the variable attributes, considering the LSD object position and the time span.
Usage
select.colattrs.lsd( dataSet, info, col.names = NULL, init.value = NA, init.time = NA, end.time = NA, posit = NULL, posit.match = c( "fixed", "glob", "regex" ) )
Arguments
dataSet matrix produced by the invocation of the read.raw.lsd, read.single.lsd, read.multi.lsd or read.list.lsd (a single matrix at a time) functions.
info data frame produced by info.details.lsd for the same results file from where dataSet was extracted.
col.names a vector of optional names for the variables to select from. The default is to select from all variables.
init.value initial value attributed to the variable(s) to select.
init.time initial time attributed to the variable(s) to select.
end.time end time attributed to the variable(s) to select.
posit a string, a vector of strings or an integer vector describing the LSD object position of the variable(s) to select. If an integer vector, it should define the position of a SINGLE LSD object.
If a string or vector of strings, each element should define one or more different LSD objects, so the returned matrix will contain variables from more than one object. By setting posit.match, globbing (wildcard), and regular expressions can be used to select multiple objects at once; in this case, all matching objects are returned.
posit.match a string defining how the posit argument, if provided, should be matched against the LSD object positions. If equal to "fixed", the default, only exact matching is done. "glob" allows using simple wildcard characters ('*' and '?') in posit for matching. If posit.match = "regex", posit is interpreted as POSIX 1003.2 extended regular expression(s). See regular expressions for details of the different types of regular expressions. Options can be abbreviated.
Details
Selection restriction arguments can be provided as needed; when not specified, all available cases are selected. When posit is supplied together with other attribute filters, the selection process is done in two steps. Firstly, the column names set by the other attribute filters are selected. Secondly, the instances defined by posit are selected from the first selection set.
See also the read.XXXX.lsd functions, which may select just specific posit object instances when loading LSD results. If only a single set of instances is required, this would be more efficient than using this function.
Value
Returns a single matrix containing the selected variables' time series contained in the original data set.
Note
If only variable names selection is needed, select.colnames.lsd is more efficient because information pre-processing (info.details.lsd) is not required.
Author(s)
<NAME>
See Also
list.files.lsd(), info.details.lsd(), select.colnames.lsd()
Examples
# get the list of file names of example LSD results
files <- list.files.lsd( system.file( "extdata", package = "LSDinterface" ) )
# read all instances of all variables of first file
bigTable <- read.raw.lsd( files[ 1 ] )
# build the info table
info <- info.details.lsd( files[ 1 ] )
# extract specific instances of a set of variables named '_A1p' and '_growth1'
abFirst2 <- select.colattrs.lsd( bigTable, info, c( "_A1p", "_growth1" ), posit = c( "1_2", "1_5" ) )
print( abFirst2[ 50 : 60, ] )
# extract instances of variable '_A1p' that start at time step t = 1
# for the second and fourth objects under any top-level object (use globbing)
a24 <- select.colattrs.lsd( bigTable, info, "_A1p", init.time = 1, posit = c( "*_2", "*_4" ), posit.match = "glob" )
print( a24[ 1 : 10, ] )
# extract all second-level object instances of all variables
aSec <- select.colattrs.lsd( bigTable, info, posit = "*_*", posit.match = "glob" )
print( aSec[ 1 : 10, ] )
# extract just top-level object instances of all variables
aTop <- select.colattrs.lsd( bigTable, info, posit = "^[0-9]+$", posit.match = "regex" )
print( aTop[ 1 : 10, ] )
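Attribute filters can also be combined. A minimal sketch, assuming the bigTable and info objects created in the examples above (the chosen filters are just an illustration): select only variables whose series start at time step t = 1 in top-level objects:
top1 <- select.colattrs.lsd( bigTable, info, init.time = 1, posit = "^[0-9]+$", posit.match = "regex" )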
select.colnames.lsd Select a subset of a LSD results matrix (by column/variable names)
Description
This function selects a subset of a LSD results matrix (as produced by read.raw.lsd) by the column (variable) names, considering only the name part of the column labels.
Usage
select.colnames.lsd( dataSet, col.names = NULL, instance = 0, check.names = TRUE, posit = NULL, posit.match = c( "fixed", "glob", "regex" ) )
Arguments
dataSet matrix produced by the invocation of the read.raw.lsd, read.single.lsd, read.multi.lsd or read.list.lsd (a single matrix at a time) functions.
col.names a vector of optional names for the variables. The default is to read all variables. The names must be in LSD/C++ format, without dots (".") in the name. Any dot (and trailing characters) will be automatically removed.
instance integer: the instance of the variable to be read, for variables that exist in more than one object. This number is based on the relative position (column) of the variable in the results file. The default (0) is to read all instances.
check.names logical. If TRUE the names of the variables are checked to ensure that they are syntactically valid variable names. If necessary they are adjusted to ensure that there are no duplicates.
posit a string, a vector of strings or an integer vector describing the LSD object position of the variable(s) to select. If an integer vector, it should define the position of a SINGLE LSD object. If a string or vector of strings, each element should define one or more different LSD objects, so the returned matrix will contain variables from more than one object. By setting posit.match, globbing (wildcard), and regular expressions can be used to select multiple objects at once; in this case, all matching objects are returned. This option only operates if dataSet was generated by read.raw.lsd WITHOUT argument clean.names = TRUE.
posit.match a string defining how the posit argument, if provided, should be matched against the LSD object positions. If equal to "fixed", the default, only exact matching is done. "glob" allows using simple wildcard characters ('*' and '?') in posit for matching. If posit.match = "regex", posit is interpreted as POSIX 1003.2 extended regular expression(s). See regular expressions for details of the different types of regular expressions. Options can be abbreviated.
Details
Selection restriction arguments can be provided as needed; when not specified, all available cases are selected.
The selection of specific posit object positions requires full detail on the dataSet column names, as produced by read.raw.lsd when clean.names = TRUE is NOT used. Other read.XXXX.lsd functions do NOT produce the required detail on the data matrices to do object position selection. If such datasets are used to feed this function and posit is set, the return value will be NULL. In this case, consider using select.colattrs.lsd, or specifying posit when calling the read.XXXX.lsd functions.
When posit is supplied together with other attribute filters, the selection process is done in two steps. Firstly, the column names set by the other attribute filters are selected. Secondly, the instances defined by posit are selected from the first selection set.
See also the read.XXXX.lsd functions, which may select just specific col.names columns, instance instances, or posit positions when loading LSD results. If only a single set of columns/instances/positions is required, this may be more efficient than using this function.
Value
Returns a single matrix containing the selected variables' time series contained in the original data set.
Note
The variable/column names must be valid R or LSD column names.
Author(s)
<NAME>
See Also
list.files.lsd(), select.colattrs.lsd(), read.raw.lsd()
Examples
# get the list of file names of example LSD results
files <- list.files.lsd( system.file( "extdata", package = "LSDinterface" ) )
# read all instances of all variables in first file
bigTable <- read.raw.lsd( files[ 1 ] )
print( bigTable[ 1 : 10, 1 : 7 ] )
# extract all instances of a set of variables named '_A1p' and '_growth1'
abTable <- select.colnames.lsd( bigTable, c( "_A1p", "_growth1" ) )
print( abTable[ 11 : 15, ] )
# extract specific instances of a set of variables named '_A1p' and '_growth1'
abFirst2 <- select.colnames.lsd( bigTable, c( "_A1p", "_growth1" ), posit = c( "1_2", "1_5" ) )
print( abFirst2[ 50 : 60, ] )
# extract all second-level object instances of all variables
aSec <- select.colnames.lsd( bigTable, posit = "*_*", posit.match = "glob" )
print( aSec[ 1 : 10, ] )
# extract just top-level object instances of all variables
aTop <- select.colnames.lsd( bigTable, posit = "^[0-9]+$", posit.match = "regex" )
print( aTop[ 1 : 10, ] )
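Name and instance filters can be combined in a single call. A minimal sketch, assuming the bigTable object created in the examples above: keep only the second instance of variable '_A1p':
a1p2 <- select.colnames.lsd( bigTable, "_A1p", instance = 2 )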
SpyKING CIRCUS 1.0.1 documentation

Welcome to the SpyKING CIRCUS's documentation!
===

The SpyKING CIRCUS is a massively parallel code to perform semi-automatic spike sorting on large extra-cellular recordings. Using a smart clustering and a greedy template matching approach, the code can solve the problem of overlapping spikes, and has been tested both for *in vitro* and *in vivo* data, from tens of channels up to 4225 channels. Results are very good, cross-validated on several datasets, and details of the algorithm can be found in the following publication: <https://elifesciences.org/articles/34518>. Note that the datasets used in the paper are freely available on Zenodo <https://zenodo.org/record/1205233/export/hx#.WrORP3XwaV4> if you want to try/benchmark your own spike sorting algorithms.

Introduction
---

In this section, you will find all basic information you need about the software. Why you should use it, or at least give it a try, how to get it, and how to install it. To know more about how to use it, see the following sections.

### Why use it?

SpyKING CIRCUS is a free, open-source, spike sorting software written entirely in Python. In a nutshell, this is a fast and efficient way to perform spike sorting using a template-matching based algorithm.

#### Because you have too many channels

Classical spike sorting algorithms do not scale properly when the number of channels increases. Most, if not all of them, would have a very hard time dealing with more than 100 channels. However, the new generation of electrodes, either *in vitro* (MEA with 4225 channels) or *in vivo* (IMEC probe with 128 channels), are providing more and more channels, such that there is a clear need for software that properly scales with the size of the electrodes.

Note: The SpyKING CIRCUS, based on the [MPI](https://www.mpich.org/) library, can be launched on several processors. Execution time scales linearly as a function of the number of computing nodes, and memory consumption scales only linearly as a function of the number of channels. So far, the code can handle 4225 channels in parallel.

#### Because of overlapping spikes

With classical spike sorting algorithms, overlapping spikes lead to outliers in your clusters, such that they are discarded. Therefore, each time two neurons have overlapping waveforms, their spikes are ignored. This can be problematic when you are addressing questions relying on fine temporal interactions between neurons. It is even more problematic with large and dense electrodes, with many recording sites close to each other, because those overlapping spikes start to be the rule instead of the exception. Therefore, you need a spike sorting algorithm that can disentangle those overlapping spikes.

Note: The SpyKING CIRCUS, using a template-matching based algorithm, reconstructs the signal as a linear sum of individual waveforms, such that it can resolve the fine cross-correlations between neurons.

#### Because you want to automatize

For a large number of channels, a lot of clusters (or equivalently templates, or cells) can be detected by spike sorting algorithms, and the time spent by a human to review those results should be reduced as much as possible.
Note: The SpyKING CIRCUS, in its current form, aims at automatizing the whole workflow of spike sorting as much as possible, reducing the human interaction. Not that it can be reduced to zero, but the software aims for a drastic reduction of the manual curation, and results show that performance as good as, or even better than, classical spike sorting approaches can be obtained.

### How to get the code

The code is currently hosted on [github](https://github.com), in a public repository, relying on [Git](https://git-scm.com/), at <https://github.com/spyking-circus/spyking-circus>. The following explanations are only for those that want to get a copy of the git folder, with a cutting-edge version of the software.

Note: The code can be installed automatically to its latest release using `pip` or `conda` (see [How to install](index.html#document-introduction/install)).

#### Cloning the source

Create a folder called `spyking-circus`, and simply do:

```
>> git clone https://github.com/spyking-circus/spyking-circus.git spyking-circus
```

The advantage of this is that you can simply update the code, if changes have been made, by doing:

```
>> git pull
```

##### Without git

If you do not have git installed, and want to get the source, then one way to proceed is:

> 1. Download and install [SourceTree](https://www.sourcetreeapp.com/)
> 2. Click on the `Clone in SourceTree` button, and use the following link with [SourceTree](https://www.sourcetreeapp.com/): <https://github.com/spyking-circus/spyking-circus>
> 3. In [SourceTree](https://www.sourcetreeapp.com/) you just need to click on the `Pull` button to get the latest version of the software.

#### Download the archive

All released versions of the code can now be downloaded in the `Download` section of the [github](https://github.com) project, as `.tar.gz` files (which can be installed with pip).

To know more about how to install the software, see [How to install](index.html#document-introduction/install).

### Installation

The SpyKING CIRCUS comes as a Python package; at this stage, note that mostly UNIX systems have been tested. However, users managed to get the software running on Mac OS X, and on Windows 7, 8, or 10. We are doing our best, using your feedback, to improve the packaging and make the whole process as smooth as possible on all platforms.

#### How to install

Note: We recommend using [Anaconda](https://www.anaconda.com/distribution/), with a simple install:

* [see here for detailed instructions on Windows](index.html#document-introduction/windows)
* [see here for detailed instructions on Mac OS X](index.html#document-introduction/mac)

##### Using with CONDA

Install [Anaconda](https://www.anaconda.com/distribution/) or [miniconda](https://docs.conda.io/en/latest/miniconda.html),
e.g. entirely from the terminal (there is also a .exe installer for Windows, etc.). As an example for Linux, just type:

```
>> wget https://repo.continuum.io/miniconda/Miniconda-latest-Linux-x86_64.sh
>> bash Miniconda-latest-Linux-x86_64.sh
```

First, it is best to create a dedicated environment:

```
>> conda create -n circus python=3.6
```

Then activate the environment:

```
>> conda activate circus
```

Then install the software itself:

```
(circus) >> conda install -c conda-forge -c spyking-circus spyking-circus
```

##### Using pip

To do so, use the `pip` utility:

```
>> pip install spyking-circus
```

Note that if you are using a Linux distribution, you must be sure that you have `mpich` instead of `openmpi` (the default on Ubuntu). To do that, please do:

```
>> sudo apt remove openmpi
>> sudo apt install mpich libmpich-dev
```

And to be sure that mpi4py is not installed with a precompiled binary that would link against openmpi, you need to do:

```
>> pip install spyking-circus --no-binary=mpi4py
```

You might want to add the `--user` flag, to install SpyKING CIRCUS for the local user only, which means that you don't need administrator privileges for the installation.

In principle, the above command also installs SpyKING CIRCUS's dependencies. Once the install is complete, you need to add the path where SpyKING CIRCUS has been installed to your local `PATH`, if this is not already the case. To do so, simply edit your `$HOME/.bashrc` and add the following line:

```
export PATH=$PATH:$HOME/.local/bin
```

Then you have to relaunch the shell, and you should now have the SpyKING CIRCUS installed!

##### Using sources

Alternatively, you can download the source package directly and uncompress it, or work directly with the git folder <https://github.com/spyking-circus/spyking-circus> to be in sync with bug fixes. You can then simply run:

```
>> pip install . --user
```

Or even better, you can install it in develop mode:

```
>> pip install -e . --user
```

such that if you do a git pull in the software directory, you do not need to reinstall it.

For those that are not pip users, it is equivalent to:

```
>> python setup.py install
```

Or, to keep the folder in sync with the install in develop mode:

```
>> python setup.py develop
```

A potential disadvantage of circumventing pip like this is the lack of an easy way to remove the installed files at a later point in time.

Note: If you want to install `scikit-learn`, needed to get the BEER estimates, you need to add `[beer]` to any pip install.
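For instance, following the note above, a pip-based install that pulls in the optional BEER dependency would look like this (a sketch; quote the argument if your shell expands brackets):

```
>> pip install spyking-circus[beer]
```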
Note: If you experience some issues with Qt or PyQt, you may need to install them manually on your system. For Linux users, simply use your software distribution system (apt for example). For Windows users, please see [here](http://doc.qt.io/qt-5/windows-support.html).

##### Installing phy 2.0

If you want to use the phy GUI to visualize your results, you may need to install [phy](https://github.com/cortex-lab/phy) 2.0 (only compatible with Python 3). If you have installed SpyKING CIRCUS within a conda environment, first activate it:

```
>> conda activate circus
```

Once this is done, install [phy](https://github.com/cortex-lab/phy) 2.0:

```
(circus) >> pip install colorcet pyopengl qtconsole requests traitlets tqdm joblib click mkdocs dask toolz mtscomp
(circus) >> pip install --upgrade https://github.com/cortex-lab/phy/archive/master.zip
(circus) >> pip install --upgrade https://github.com/cortex-lab/phylib/archive/master.zip
```

You can see more details on the [phy website](https://phy.readthedocs.io/en/latest/installation/).

#### Home Directory

During the install, the code creates a `spyking-circus` folder in `/home/user` where it will copy several probe designs, and a copy of the default parameter file. Note that if you are always using the code with a similar setup, you can edit this template, as this is the one that will be used by default.

#### Parallelism

##### Using MPI

If you are planning to use [MPI](https://www.mpich.org/), the best solution is to create a file `$HOME/spyking-circus/circus.hosts` with the list of available nodes (see [Configuration of MPI](index.html#document-introduction/mpi)). You should also make sure, for a large number of electrodes, that your MPI implementation is recent enough to allow shared memory within processes.

##### Using HDF5 with MPI

If you are planning to use a large number of electrodes (> 500), then you may use the fact that the code can use parallel [HDF5](https://www.hdfgroup.org). This will speed everything up and reduce disk usage. To know more about how to activate it, see [Parallel HDF5](index.html#document-introduction/hdf5).

#### Dependencies

For information, here is the list of all the dependencies required by the SpyKING CIRCUS:

1. `tqdm`
2. `mpi4py`
3. `numpy`
4. `cython`
5. `scipy`
6. `matplotlib`
7. `h5py`
8. `colorama`
9. `blosc`
10. `scikit-learn`
11. `statsmodels`

### Configuration of MPI

The code is able to use multiple CPUs to speed up the operations. It can even use GPUs during the fitting phase. However, you need to have a valid hostfile to inform MPI of the available nodes on your computer. By default, the code searches for the file `circus.hosts` in the spyking-circus folder created during the installation, `$HOME/spyking-circus/`. Otherwise, you can provide it to the main script with the `-H` argument (see the [documentation on the parameters](index.html#document-code/parameters)):

```
>> spyking-circus path/mydata.extension -H mpi.hosts
```

#### Structure of the hostfile

Such a hostfile may depend on the fork of MPI you are using. For [MPICH](https://www.mpich.org/), this will typically look like (if you want to use only 4 cores per machine):

```
192.168.0.1:4
192.168.0.2:4
192.168.0.3:4
192.168.0.4:4
192.168.0.5:4
```

For [OpenMPI](https://www.open-mpi.org/), this will typically look like (if you want to use only 4 cores per machine):

```
192.168.0.1 max-slots=4
192.168.0.2 max-slots=4
192.168.0.3 max-slots=4
192.168.0.4 max-slots=4
192.168.0.5 max-slots=4
```

If this is your hostfile, and if you launch the code with 20 CPUs:

```
>> spyking-circus path/mydata.extension -c 20
```

then the code will launch 4 instances of the program on each of the 5 nodes listed in the hostfile.

Note: If you are using multiple machines, all should read/write in a **shared** folder.
This can be done with [NFS](https://en.wikipedia.org/wiki/Network_File_System), or [SAMBA](https://support.microsoft.com/en-us/kb/224967) on Windows. Usually, most clusters will provide you with such a shared `/home/user` folder; be sure this is the case.

Warning: For now, the code is working with [MPICH](https://www.mpich.org/) versions higher than 3.0, and [OpenMPI](https://www.open-mpi.org/) versions below 3.0. We plan to make this more uniform in the near future, but the two implementations made different choices for the MPI library.

#### Shared Memory

With recent versions of MPI, you can share memory on a single machine, and this is used by the code to reduce the memory footprint. If you have a large number of channels and/or templates, be sure to use a recent version of [MPICH](https://www.mpich.org/) (>= 3.0) or [OpenMPI](https://www.open-mpi.org/) (> 1.8.5).

### Release notes

#### Spyking CIRCUS 0.8

This is the 0.8 release of the SpyKING CIRCUS, a new approach to the problem of spike sorting. The code is based on a smart clustering with sub-sampling, and a greedy template matching approach, such that it can resolve the problem of overlapping spikes. The publication about the software is available at <https://elifesciences.org/articles/34518>. The software can be used from the command line, or with a dedicated GUI.

Warning: The code may still evolve. Even if results are or should be correct, we can expect some more optimizations in the near future, based on feedback obtained on multiple datasets. If you spot some problems with the results, please be in touch with [<EMAIL>](mailto:pierre.yger%40inserm.fr)

##### Contributions

Code and documentation contributions (ordered by the number of commits):

* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>

###### Release 1.0.2

* possibility for the amplitudes [a_min, a_max] to depend on time
* median is now removed per shank
* common ground syntax slightly changed, to allow one ground per shank
* fix a bug when collect_all and dense templates
* fix if no templates are found
* improvements of the smart search
* add the option to collect normalized MSE during fitting (False by default)
* fix the rhd wrapper
* divide and conquer assignment now based on barycenters instead of simple extrema
* exit the clustering if too many centroids are found (a sign of bad channels)
* fixes in the meta merging GUI (RPV and dip)
* optimizations for the second component, less double counting
* fix to use at least 1 CPU
* better estimation of amplitudes for axonal spikes
* enhance the estimation of amplitudes by proper alignment

###### Release 1.0.0

* prevent the use of negative indices for channels
* fix if no templates are found
* fix if the dead file is empty
* fix for recent versions of MPI (if dead times used)
* fix if dead times are not sorted or overlapping
* add the auto_cluster param in [data] to force global_tmp if needed
* fix when no clusters are found on some electrodes
* fix in the MATLAB GUI if no spikes are found
* support for the maxwell file format (MaxOne and MaxTwo)
* optimizations for faster fitting
* templates are densified during fitting if not sparse enough (faster)

###### Release 0.9.9

* fix for shanks (because of optimization in 0.9.8)
* fix for clusters (if global tmp is not created)
* fix for recent versions of MPI (shared memory issues)
* still speeding up the fitting procedure, as a final bottleneck
* fix in the smart search and chunks exploration

###### Release 0.9.8

* fix a bug while filtering HDF5 files with overwrite set to False
* fix a bug for Windows and Intel MPI
* speeding up the fitting procedure
* reducing the memory footprint while optimizing amplitudes for large numbers of templates
* changing the way of saving overlaps, making use of internal symmetry. Lots of memory saved

###### Release 0.9.7

* fix a bug in the preview mode
* fix a bug while converting with export_all set to True
* fix a rare bug when both peaks are detected in clustering with smart search
* fix a bug if removing the reference channel after filtering has already been done
* fix a bug while converting with export_all
* fix a bug in the filtering introduced in 0.9.6 (last chunk not filtered)
* fix a possible bug in smart search with dynamic bins
* enhance the robustness of the whitening for very large arrays
* speeding up the fitting procedure
* enhancing the non-selection of noisy snippets, and thus clustering
* option to dynamically adapt cc_merge for large numbers of electrodes
* remove putative mixtures based on variance, drastically speeding up CC estimation

###### Release 0.9.6

* fixes in the smart search (not all rare cases were covered in 0.9.5)
* fix a bug if multi file is activated with very small chunks
* speeding up the estimation of the templates: fewer snippets, closer to centroids
* speeding up the estimation of the amplitudes: fewer noise snippets
* speeding up the isolation step during the smart search
* the number of bins is adapted during the smart search as a function of noise levels
* add the possibility to hide the status bars (for SpikeInterface logs)

###### Release 0.9.5

* speeding up the optimization of the amplitudes with MPI
* speeding up the processing of numpy datafiles (SpikeInterface)
* speeding up the smart search step (pre-generation of random numbers)
* speeding up the clustering step
* fix a bug while filtering in the preview mode introduced in 0.9.2
* speeding up the fitting step

###### Release 0.9.4

* speeding up the optimization of the amplitudes with MPI
* speeding up the processing of numpy datafiles (SpikeInterface)
* speeding up the smart search step (pre-generation of random numbers)

###### Release 0.9.2

* speeding up the algorithm
* fixing a bug in the clustering while assigning labels
* better detection of noise snippets discarded during clustering
* cosmetic changes in the sanity plots (clusters)
* better handling of overlapping chunks while filtering, removing filtering artefacts
* templates are restricted within shanks
* optimization of the amplitudes once all templates have been found
* export of a purity value, for phy, to assess how good a cluster is (between 0 and 1)
* display the purity value in MATLAB
* fix a (silent) bug in the supports introduced in 0.9.0, preventing mixture removal
* nb_chances is automatically adapted during the fitting procedure
* drifts are now automatically handled by the meta merging procedure
* enhancement in the automatic merging of drifts

###### Release 0.9.1

* Minor bug fixes in spyking-circus-launcher
* fix a bug in the amplitude display. Values were shuffled when several CPUs were used
* add the option to ignore the second component [clustering]->two_components

###### Release 0.9.0

* can now fit spikes below detection threshold (with spike_thresh_min)
* templates are now estimated without any spatial restrictions
* display a warning if N_t is not optimally chosen

###### Release 0.8.9

* fix a small bug in the smart search, introduced while refactoring in 0.8.7

###### Release 0.8.8

* fix a regression introduced in 0.8.7 for non-contiguous probe indices

###### Release 0.8.7

* new methods to detect the peaks, more robust when low thresholds are fixed
* more accurate fitting procedure, slightly slower
* minor bug fixes
* addition of a sparsity_limit parameter in the meta merging GUI, to remove noise more precisely
* new parameter file is properly copied
* enhancement of the smoothing/alignment procedure, more accurate estimation of noisy templates
* better estimation of the amplitude boundaries used during fitting
* optimization while removing mixtures and important bug fixes
* fix a bug in the thresholding method
* minor updates to get more refined spikes during whitening and clustering
* tests with SpikeInterface, showing a clear increase in performance
* some cleaning in the parameter file
* default value for cc_merge is now 0.95, since merging functions are more robust
* noisy templates are removed by default while meta merging, with a lower threshold (0.75)
* speeding up the whitening and clustering steps

###### Release 0.8.6

* Export from manual sorting with MATLAB to phy is now possible
* Modification to pass the SpikeSorters test suite

###### Release 0.8.5

* fix a bug while removing noisy templates in meta merging
* refactoring of the meta merging GUI, addition of bhatta distances
* meta merging more robust for non-stationary recordings
* enhance logging if parameters are missing and/or not defined
* can now display the electrode labels in the preview GUI
* detects if a wrong install of MPI is present (linking with mpi4py)
* conda install overwrites the old parameter file
* raw display of the MUA in the result GUI (to be improved)
* display an error if not all nodes on a cluster can read the datafiles
* fix a bug for the thresholding method using dead times

###### Release 0.8.4

* fix if no spikes are found on some electrodes
* fix as mean/median-pca methods were broken (albeit not used)
* fix to prevent a rare crash while loading too-sparse overlaps
* fix a bug with the new dip method in python 2.7
* add the thresholding method to extract only MUA activity (requested by users)
* channel lists in probe files can be non-sorted
* memory usage is dynamically adapted to reduce the memory footprint
* hdf5 and npy file formats can now work with 3D arrays (x, y, time) or (time, x, y)
* fix a bug if the bases for pos and neg spikes have different sizes
* add some docstrings (thanks to <NAME>)
* sparse export for phy is now the default
* comments can now be added in the trigger/dead times files
* 4096 channels can now run on a single machine, with low memory consumption
* basic support for 3D probes, without any visualization
* more robust to saturating channels with nan_to_num
* cc_merge set to 1 automatically if templates on few channels are detected
* fix a bug if only one artefact type is given
* fix a bug if only 2 spikes are found on a single electrode
* former parameters sim_same_elec and dip_threshold renamed into merge_method and merge_param
* sanity plots for local merges can now be produced during clustering (debug_plots in [clustering])

###### Release 0.8.3

* automatic suppression, during meta merging, of noisy templates (for SpikeToolKit/Forest)
* during the phy export, we can automatically pre-assign labels to neurons
* fix a bug when converting to phy with dead channels
* fix a bug when converting to phy with file formats without data_offset
* speedup the estimation of the amplitude distribution
* minor fixes for clusters
* smoothing of the templates thanks to Savitzky-Golay filtering
* fix a bug when launching GUIs for file formats without data offset
* can now work with scipy 1.3 and statsmodels 0.10
* isolation mode is improved, set as default and leading to better performance
* reducing overclustering with the Hartigan dip-test of unimodality
* can now set the number of dimensions for local PCA (10 by default)

###### Release 0.8.2

* add a docker file to build the software
* add support for shanks in phy 2.0
* add support for deconverting in the qt launcher
* do not create a Qt App if merging in auto mode
* waveforms are convolved with a Hanning window to boost PCA
* oversampling is now adapted as a function of the sampling rate
* reduction of I/O while oversampling
* speed improvement with undersampling while cleaning the dictionary
* automation of the software for SpikeForest/SpikeToolkit benchmarks
* merging is now included in the default pipeline
* normalization of the metrics in the meta merging GUI

###### Release 0.8.0
* major improvement in the clustering: no more max_clusters parameter
* much faster clustering (thanks to <NAME>)
* added the statsmodels library as a required dependency
* enhancement of the smart search mode
* enhancement of the bicubic spline interpolation
* fix a typo when using dead times and the collect mode
* fix a minor bug when a small number of spikes are found during smart search
* fix a bug in the wrapper for BRW files
* support for phy 2.0 and phylib
* remove the strongly time-shifted templates
* addition of a wrapper for the MDA file format
* amplitudes for unfitted spikes are now 1 when exporting to phy
* default install is now qt5, to work with phy 2.0

###### Release 0.7.6

* cosmetic changes in the GUI
* adding a deconverting method to switch back from phy to MATLAB
* support for the lags between templates in the MATLAB GUI
* warn user if data are corrupted because of interrupted filtering
* reduction of the size of saved clusters
* display the file name in the header
* fix a nasty bug allowing spikes at the border of chunks to be fitted even during dead periods

###### Release 0.7.5

* fix a bug for MPICH when using large dictionaries
* fix a bug for numpy files when used with new numpy versions
* add the possibility to subtract one channel as a reference channel from others
* native support for blackrock files (only .ns5 tested so far)
* simplifications in the parameter file
* fix for display of progress bars with tqdm
* addition of a multi-folders mode for openephys
* hide GPU support for now, as this is not actively maintained and optimized
* fix in the MATLAB GUI for float32 data
* fix the broken log files
* default cpu number is now half the available cores

###### Release 0.7.4

* fix a regression with spline interpolation, more investigation needed

###### Release 0.7.0

* fix a possible rounding bug if triggers are given in ms
* artefacts are computed as medians and not means over the signal
* can turn off shared memory if needed
* a particular pattern can be specified for neuralynx files
* fix bugs with output_dir, as everything was not saved in the folder
* add a circus-folders script to virtually process files within several folders as a single recording
* add a circus-artefacts script to concatenate artefact files before using stream mode
* multi-files mode is now enabled for Neuralynx data
* fixes for conversion of old datasets with the python GUI
* smooth exit if fitting with 0 templates (thanks to <NAME>)
* enhance the bicubic spline interpolation for oversampling
* spike times are now saved as uint32 for long recordings

###### Release 0.6.7

* optimizations for clusters (auto blosc and network bandwidth)
* addition of a dead_channels option in the [detection] section, as requested
* prevent the user from removing the median with only 1 channel
* fix for parallel writes in HDF5 files
* hide h5py FutureWarning

###### Release 0.6.6

* fix for matplotlib 2.2.2
* fix a bug when loading merged data with the phy GUI
* faster support for native MCD files with pyMCStream
* more robust whitening for large arrays with numerous overlaps
* add an experimental mode to refine the coreset (isolated spikes)
* put merging units in Hz^2 in the merging GUI
* add an HDF5 compression mode to greatly reduce disk usage for very large probes
* add a Blosc compression mode to save bandwidth for clusters
* fix a display bug in the merging GUI when performing multiple passes

###### Release 0.6.5
* reduce memory consumption for mixture removal with shared memory
* made an explicit parameter cc_mixtures for mixture removal in the [clustering] section
* Minor fixes in the MATLAB GUI
* fix in the exact times shown during preview if second is specified
* prevent errors if filter is False and overwrite is False

###### Release 0.6.4

* fix a bug in the BEER for Windows platforms, enhancing robustness to mpi data types
* speed up the software when using ignore_dead_times
* ensure backward compatibility with the hdf5 version for MATLAB
* fix a rare bug in clustering, when no spikes are found on electrodes
* fix a bug in the MATLAB GUI when reloading saved results, skipping overlap fixes

###### Release 0.6.3

* fix a bug if the parameter file has tabulation characters
* add a tab to edit parameters directly in the launcher GUI
* fix dtype offset for int32 and int64
* minor optimizations for computations of overlaps
* explicit message displayed on screen if filtering has already been performed
* can specify a distinct folder for output results with the output_dir parameter
* fix a bug when launching the phy GUI for datafiles without the data_offset parameter (HDF5)
* fix a memory leak when using dead_times
* fix a bug for BRW and python3
* fix a bug in the BEER
* pin HDF5 to 1.8.18 versions, as MATLAB is not working well with 1.10
* fix a bug when relaunching code and overwrite is False
* fix a bug when peak detection is set on both with only one channel

###### Release 0.6.2

* fix for openephys and new python syntax
* fix in the handling of parameters
* fix a bug on Windows with unclosed hdf5 files
* fix a bug during converting with multiple CPUs on Windows
* minor optimization in the fitting procedure
* support for qt5 (and backward compatibility with qt4 as long as phy is using Qt4)

###### Release 0.6.1

* fix for similarities and merged output from the GUIs
* fix for Python 3 and HDF5
* fix for Python 3 and the launcher GUI
* fix for maxlag in the merging GUI
* optimization in the merging GUI for pair suggestions
* addition of an auto_mode for meta merging, to suppress manual curation
* various fixes in the docs
* fix a bug when closing temporary files on Windows
* allow spaces in names of probe files
* collect_all should take dead times into account
* patch to read INTAN 2.0 files
* fix in the MATLAB GUI when splitting neurons
* fix in the MATLAB GUI when selecting individual amplitudes

###### Release 0.6.0

* fix an IMPORTANT BUG in the similarities exported for phy/MATLAB, affecting the suggestions in the GUI
* improvements in the neuralynx wrapper
* add the possibility to exclude some portions of the recordings from the analysis (see documentation)
* fix a small bug in MS-MPI (Windows only) when shared memory is activated and empty arrays are present

###### Release 0.5.9

* The validating step can now accept custom spikes as inputs
* Change the default frequency for filtering to 300Hz instead of 500Hz

###### Release 0.5.8

* fix a bug for int indices in some file wrappers (python 3.xx) (thanks to <NAME>)
* fix a bug in the preview gui to write the threshold
* fix a bug for some paths in Windows (thanks to <NAME>)
* add a wrapper for the NeuraLynx (.ncs) file format
* fix a bug in the installation of the MATLAB GUI
* fix a bug to see results in the MATLAB GUI with only 1 channel
* fix a bug to convert data to phy with only positive peaks
* add builds for python 3.6
* optimizations in file wrappers
* fix a bug for MCS headers in multifiles, if not all have the same sizes
* add the possibility (with a flag) to turn off parallel HDF5 if needed
* fix a bug with the latest version of HDF5, related to flush issues during clustering

###### Release 0.5.7

* Change the strsplit name in the MATLAB GUI
* Fix a bug in the numpy wrapper
* Fix a bug in the artefact removal (numpy 1.12), thanks to <NAME>
* Fixes in the MATLAB GUI to ease a refitting procedure, thanks to <NAME>
* Overlaps are recomputed if the size of templates has changed (for refitting)
* Addition of a "second" argument for a better control of the preview mode
* Fix when using the phy GUI and the multi-file mode
* Add a file wrapper for the INTAN (RHD) file format

###### Release 0.5.6

* Fix in the smart_search when only few spikes are found
* Fix a bug in density estimation when only few spikes are found

###### Release 0.5.5

* Improvement in the smart_select option given various datasets
* Fix a regression for the clustering introduced in 0.5.2

###### Release 0.5.2

* fix for the MATLAB GUI
* smart_select can now be used [experimental]
* fix for non 0: DISPLAY
* cosmetic changes in the clustering plots
* ordering of the channels in the openephys wrapper
* fix for rates in the MATLAB GUI
* artefacts can now be given in ms or in timesteps with the trig_unit parameter

###### Release 0.5rc

* fix a bug when exporting for phy in dense mode
* compatibility with numpy 1.12
* fix a regression with artefact removal
* fix a display bug in the thresholds while previewing with a non-unitary gain
* fix a bug when filtering in multi-files mode (overwrite False, various t_starts)
* fix a bug when filtering in multi-files mode (overwrite True)
* fix a bug in the MATLAB GUI (overwrite False)
* fix the gathering method, which was not working anymore
* smarter selection of the centroids, leading to more clusters with the smart_select option
* addition of a How to cite section, with listed publications

###### Release 0.5b9

* switch from progressbar2 to tqdm, for speed and practical issues
* optimization of the resources by preventing numpy from using multithreading with BLAS
* fix MPI issues appearing sometimes during the fitting procedure
* fix a bug in the preview mode for OpenEphys files
* slightly more robust handling of openephys files, thanks to <NAME>
* remove the dependency on the mpi4py channel on OS X, as it was crashing
* fix a bug in circus-multi when using extensions

###### Release 0.5b8

* fix a bug in the MATLAB GUI in the BestElec while saving
* more consistency with the "both" peak detection mode: twice as many waveforms are always collected during whitening/clustering
* sparse export for phy is now available
* addition of a dir_path parameter to be compatible with new phy
* fix a bug in the meta merging GUI when only one template is left

###### Release 0.5b7

* fix a bug while converting data to phy with a non-unitary gain
* fix a bug in the merging GUI with some versions of numpy, forcing ucast
* fix a bug if no spikes are detected while constructing the basis
* Optimization if both positive and negative peaks are detected
* fix a bug with the preview mode, while displaying non-float32 data

###### Release 0.5b6

* fix a bug while launching the MATLAB GUI

###### Release 0.5b3

* code is now hosted on GitHub
* various cosmetic changes in the terminal
* addition of a garbage collector mode, to also collect all unfitted spikes, per channel
* complete restructuring of the I/O such that the code can now handle multiple file formats
* internal refactoring to ease interaction with new file formats and improve readability
* because of the file formats, slight restructuring of the parameter files
* N_t and radius have been moved to the [detection] section, more consistent
* addition of an explicit file_format parameter in the [data] section
* every file format may have its own parameters, see documentation for details (or `--info`)
* can now work natively with open ephys data files (.openephys)
* can now work natively with MCD data files (.mcd) [using neuroshare]
* can now work natively with Kwik (KWD) data files (.kwd)
* can now work natively with NeuroDataWithoutBorders files (.nwb)
* can now work natively with NiX files (.nix)
* can now work natively with any HDF5-like structured data files (.h5)
* can now work natively with Arf data files (.arf)
* can now work natively with 3Brain data files (.brw)
* can now work natively with Numpy arrays (.npy)
* can now work natively with all file formats supported by NeuroShare (plexon, blackrock, mcd, …)
* can still work natively with raw binary files with/without headers :)
* faster IO for raw binary files
* refactoring of the exports during multi-file/preview/benchmark: everything is now handled in raw binary
* fix a bug with the size of the safety time parameter during whitening and clustering
* all the interactions with the parameters are now done in the circus/shared/parser.py file
* all the interactions with the probe are now done in the circus/shared/probes.py file
* all the messages are now handled in circus/shared/messages.py
* more robust and explicit logging system
* more robust checking of the parameters
* display the electrode number in the preview/result GUI
* setting up a continuous integration workflow to test all conda packages with appveyor and travis automatically
* cuda support is now turned off by default, for smoother install procedures (GPUs do not yet bring much)
* file formats can be streamed, over several files (former multi-file mode), but also within the same file
* several cosmetic changes in the default parameter file
* clustering:smart_search and merging:correct_lag are now True by default
* fix a minor bug in the smart search, biasing the estimation of densities
* fix a bug with the masks and the smart-search: improving results
* addition of an overwrite parameter. Note that any t_start/t_stop info is lost
* if using streams, or internal t_start, output times are on the same time axis as the datafile
* more robust parameter checking

###### Release 0.4.3

* cosmetic changes in the terminal
* suggest to reduce chunk sizes for high-density probes (N_e > 500) to save memory
* fix a once-in-a-while bug in the smart-search

###### Release 0.4.2

* fix a bug in the test suite
* fix a bug in the python GUI for non-integer thresholds
* fix a bug with output strings in python3
* fix a bug to kill processes in Windows from the launcher
* fix graphical issues in the launcher and python3
* colors are now present also in python3
* finer control of the amplitudes with the dispersion parameter
* finer control of the cut-off frequencies during the filtering
* the smart search mode is now back, with a simple True/False flag. Use it for long or noisy recordings
* optimizations in the smart search mode, now implementing a rejection method based on amplitudes
* show the mean amplitude over time in the MATLAB GUI
* MATLAB is automatically closed when closing the MATLAB GUI
* mean rate is now displayed in the MATLAB GUI, for new datasets only
* spike times are now saved as uint32, for new datasets only
* various fixes in the docs
* improvements when peak detection is set on "both"
* message about cc_merge for low-density probes
* message about smart search for long recordings
* various cosmetic changes
* add a conda app for anaconda navigator

###### Release 0.4.1

* fix a bug for converting millions of PCs to phy, getting rid of the MPI limitation to int32
* fix bugs with install on Windows 10, forcing int64 while the default is int32 even on 64-bit platforms
* improved error messages if wrong MCS headers are used
* various cosmetic changes

###### Release 0.4

First release of the software

### Future plans and contributions

Here is a non-exhaustive list of the features that we are currently working on, and that should make it into future releases of the software.

#### Real Time spike sorting

This is the most challenging task, and we are thinking about what is the best way to properly implement it. Such real-time spike sorting for dense arrays is within reach, but several challenges need to be addressed to make it possible. Data will be read from memory streams, and templates will be updated on-the-fly. The plan is to have spatio-temporal templates tracking cells over time, at the cost of a small temporal lag that cannot be avoided because of the template-matching step.

#### Better, faster, stronger

GPU kernels should be optimized to increase the speed of the algorithm, and we are always seeking optimizations along the road. For real-time spike sorting, if we want it to be accurate for thousands of channels, any optimization is welcome.

#### Contributions

If you have ideas, or if you want to contribute to the software, with the same idea that we should develop a proper and unified framework for semi-automated spike sorting, please do not hesitate to contact [<EMAIL>](mailto:pierre.yger%40inserm.fr). Currently, the code itself is not properly documented, as our main focus was to first get a stable working algorithm. Now that this goal is achieved, we can dive more into software development and enhance its modularity.
Launching the code
---

In this section, you will find all the information you need to be able to launch the code, and obtain results on any given dataset. To know more about how to visualize them, you will find everything in the following section.

### Quickstart

#### Running the algorithm

##### Copy your files

First, you will need to create a directory (we call it `path` – usually you put both the date of the experiment and the name of the person doing the sorting). Your data file should have a name like `path/mydata.extension`.

Warning: Your data should not be filtered, and by default the filtering will be done only once **onto** the data. So you need to keep a copy of your raw data elsewhere. If you really do not want to filter data on site, you can use the `overwrite` parameter (see the [documentation on the code](index.html#document-code/config) for more information).

##### Generate a parameter file

Before running the algorithm, you will always need to provide parameters, as a parameter file. Note that this parameter file has to be in the same folder as your data, and should be named `path/mydata.params`. If you already have yours, great, just copy it into the folder. Otherwise, just launch the algorithm, and it will ask you if you want to create a template one, which you then have to edit before launching the code:

```
>> spyking-circus.py path/mydata.extension

##################################################################
#####          Welcome to the SpyKING CIRCUS (0.7.6)         #####
#####                                                        #####
#####            Written by P.Yger and O.Marre               #####
##################################################################

The parameter file is not present!
You must have a file named path/mydata.params, properly configured,
in the same folder, with the data file.
Do you want SpyKING CIRCUS to create a template there? [y/n]
```

In the parameter file, you mostly have to change only the information in the `data` section (see the [documentation on the code](index.html#document-code/config) for more information).

##### Run the algorithm

Then you should run the algorithm by typing the following command(s):

```
>> spyking-circus path/mydata.extension
```

It should take around the time of the recording to run – maybe a bit more. The typical output of the program will be something like:

```
##################################################################
#####          Welcome to the SpyKING CIRCUS (0.7.6)         #####
#####                                                        #####
#####            Written by P.Yger and O.Marre               #####
##################################################################

File          : /home/test.dat
Steps         : filtering, whitening, clustering, fitting
Number of CPU : 1
Parallel HDF5 : True
Shared memory : True
Hostfile      : /home/pierre/spyking-circus/circus.hosts

##################################################################

--- Informations ---
| Number of recorded channels : 252
| Number of analyzed channels : 252
| File format                 : RAW_BINARY
| Data type                   : int16
| Sampling rate               : 20 kHz
| Duration of the recording   : 4 min 0 s 0 ms
| Width of the templates      : 3 ms
| Spatial radius considered   : 200 um
| Threshold crossing          : negative
---

--- Informations ---
| Filtering has already been done with cut off at 500Hz
---

Analyzing data to get whitening matrices and thresholds...
We found 20s without spikes for whitening matrices...
Because of whitening, we need to recompute the thresholds...
Searching spikes to construct the PCA basis...
100%|####################################################
```

Note that you can of course change the number of CPUs/GPUs used, and also launch only a subset of the steps. See the help of the code for more information.

#### Using Several CPUs

To use several CPUs, you should have a proper installation of MPI, and a valid hostfile given to the program. See the [documentation on MPI](index.html#document-introduction/mpi). And then, you simply need to do, if *N* is the number of processors:

```
>> spyking-circus path/mydata.extension -c N
```

#### Using the GUI

##### Get the data

Once the algorithm has run on the data `path/mydata.extension`, you should have the following files in the directory `path`:

* `path/mydata/mydata.result.hdf5`
* `path/mydata/mydata.cluster.hdf5`
* `path/mydata/mydata.overlap.hdf5`
* `path/mydata/mydata.templates.hdf5`
* `path/mydata/mydata.basis.hdf5`

See [file formats](index.html#document-advanced/files) to know more about how those files are structured.

Since 0.8.2, you should also have the same files, but with the `-merged` extension for some of them. This is because the merging step has been included in the default pipeline of the algorithm. Both results (with or without this extra merging) can be visualized, and/or exported for [MATLAB](http://fr.mathworks.com/products/matlab/) and [phy](https://github.com/cortex-lab/phy).

##### Matlab GUI

To launch the [MATLAB](http://fr.mathworks.com/products/matlab/) GUI provided with the software, you need of course to have a valid installation of [MATLAB](http://fr.mathworks.com/products/matlab/), and you should be able to simply do:

```
>> circus-gui-matlab path/mydata.extension
```

##### Python GUI

An experimental GUI derived from [phy](https://github.com/cortex-lab/phy) and made especially for template-matching based algorithms can be launched by doing:

```
>> spyking-circus path/mydata.extension -m converting
>> circus-gui-python path/mydata.extension
```

To enable it, you must have a valid installation of [phy](https://github.com/cortex-lab/phy) and [phylib](https://github.com/cortex-lab/phylib).

To know more about the GUI section, see the [documentation on the GUI](index.html#document-GUI/index).

### Parameters

#### Display the helpers

To see all the parameters of the software, just do:

```
>> spyking-circus -h
```

To see all the file formats supported by the software, just do:

```
>> spyking-circus help -i
```

To know more about the parameters of a given file format *X*, just do:

```
>> spyking-circus X -i
```

#### Command line Parameters

The parameters to launch the program are:

* `-m` or `--method` What are the steps of the algorithm you would like to perform. Default steps are:

> 1. filtering
> 2. whitening
> 3. clustering
> 4. fitting
> 5. merging

Note that filtering is performed only once, and if the code is relaunched on the same data, a flag in the parameter file will prevent the code from filtering twice. You can specify only a subset of steps by doing:

```
>> spyking-circus path/mydata.extension -m clustering,fitting
```

Note: The results of the `merging` step are still saved with a different extension compared to the full results of the algorithm.
This is because we don't claim that a full automation of the software can work out of the box for all datasets, areas, species, … So if you want to work from merged results, use the `-e merged` extension while converting/displaying results. Otherwise, just look at the raw results, without the merging step (see the devoted section in the [documentation on Meta Merging](index.html#document-code/merging)), or even more in the [documentation on extra steps](index.html#document-advanced/extras).

* `-c` or `--cpu` The number of CPUs that will be used by the code. For example, just do:

```
>> spyking-circus path/mydata.extension -m clustering,fitting -c 10
```

* `-H` or `--hostfile` The CPUs used depend on your MPI configuration. If you want to configure them, you must provide a specific hostfile and do:

```
>> spyking-circus path/mydata.extension -c 10 -H nodes.hosts
```

To know more about the host file, see the MPI section in the [documentation on MPI](index.html#document-introduction/mpi).

* `-b` or `--batch` The code can accept a text file with several commands that will be executed one after the other, in a batch mode. This is interesting for processing several datasets in a row. An example of such a text file `commands.txt` would simply be:

```
path/mydata1.extension -c 10
path/mydata2.extension -c 10 -m fitting
path/mydata3.extension -c 10 -m clustering,fitting,converting
```

Then simply launch the code by doing:

```
>> spyking-circus commands.txt -b
```

Warning: When processing files in batch mode, be sure that the parameter files have been pre-generated. Otherwise, the code will hang, asking you to generate them.

* `-p` or `--preview` To be sure that data are properly loaded before filtering everything on site, the code will load only the first second of the data, compute thresholds, and show you an interactive GUI to visualize everything. Please see the [documentation on the Python GUI](index.html#document-GUI/python).

Note: The preview mode does not modify the data file!

* `-r` or `--result` Launch an interactive GUI to show you, superimposed, the activity on your electrodes and the reconstruction provided by the software. This has to be used as a sanity check. Please see the [documentation on the Python GUI](index.html#document-GUI/python).

* `-s` or `--second` If the preview mode is activated, by default, it will show the first 2 seconds of the data. But you can specify an offset, in seconds, with this extra parameter, such that the preview mode will display the signal in [second, second+2] (see the example after this list).

* `-o` or `--output` If you want to generate synthetic benchmarks from a dataset that you have already sorted, this allows you, using the `benchmarking` mode, to produce a new file `output` based on what type of benchmarks you want to do (see `type`).

* `-t` or `--type` While generating synthetic datasets, you have to choose from one of those three possibilities: `fitting`, `clustering`, `synchrony`. To know more about what those benchmarks are, see the [documentation on extra steps](index.html#document-advanced/extras).

Note: Benchmarks will be better integrated soon into an automatic test suite, use them at your own risk for now.

To know more about the additional extra steps, see the [documentation on extra steps](index.html#document-advanced/extras).
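For instance, combining the `-p` and `-s` flags documented above, the following call would preview the signal in the window [10 s, 12 s] (a sketch; the offset value of 10 seconds is purely illustrative):

```
>> spyking-circus path/mydata.extension -p -s 10
```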
#### Configuration File

The code, when launched for the first time, generates a parameter file. The default template used for the parameter files is the one located in `/home/user/spyking-circus/config.params`. You can edit it in advance if you are always using the same setup.

To know more about what is in the configuration file, see the [documentation on the configuration](index.html#document-code/config).

### Designing your probe file

#### What is the probe file?

In order to launch the code, you must specify a mapping for your electrode, i.e. you must tell the code how your recorded data can be mapped onto the physical space, and what is the spatial position of all your channels. Examples of such probe files (with the extension `.prb`) can be seen in the `probes` folder of the code. They will all look like the following one:

```
total_nb_channels = 32
radius            = 100

channel_groups = {
    1: {
        'channels': list(range(32)),
        'graph'   : [],
        'geometry': {
            0:  [  0.0 ,   0.0],
            1:  [  0.0 ,  50.0],
            2:  [+21.65, 262.5],
            3:  [+21.65, 237.5],
            4:  [+21.65, 187.5],
            5:  [+21.65, 137.5],
            6:  [+21.65,  87.5],
            7:  [+21.65,  37.5],
            8:  [  0.0 , 200.0],
            9:  [  0.0 , 250.0],
            10: [+21.65,  62.5],
            11: [+21.65, 112.5],
            12: [+21.65, 162.5],
            13: [+21.65, 212.5],
            14: [  0.0 , 150.0],
            15: [  0.0 , 100.0],
            16: [  0.0 , 125.0],
            17: [  0.0 , 175.0],
            18: [-21.65, 212.5],
            19: [-21.65, 162.5],
            20: [-21.65, 112.5],
            21: [-21.65,  62.5],
            22: [  0.0 , 275.0],
            23: [  0.0 , 225.0],
            24: [-21.65,  37.5],
            25: [-21.65,  87.5],
            26: [-21.65, 137.5],
            27: [-21.65, 187.5],
            28: [-21.65, 237.5],
            29: [-21.65, 262.5],
            30: [  0.0 ,  75.0],
            31: [  0.0 ,  25.0],
        }
    }
}
```

An example of a probe mapping, taken from [<NAME>](http://www.kampff-lab.org/). This `prb` format is inherited from the [phy](https://github.com/cortex-lab/phy) documentation, in order to ensure compatibility.

#### Key parameters

As you can see, an extra requirement of the SpyKING CIRCUS is that you specify, at the top of the probe file, two parameters:

* `total_nb_channels`: The total number of channels currently recorded. This has to be the number of rows in your data file.
* `radius`: The default spatial extent [in um] of the templates that will be considered for that given probe. Note that for *in vitro* recordings, such as the MEA with 252 channels, a spike can usually be seen in a physical radius of 250 um. For *in vivo* data, 100 um seems like a more reasonable value. You can change this value in the parameter file generated by the algorithm (see the [documentation on the configuration file](index.html#document-code/config)).

#### Channel groups

The channel_groups is a python dictionary where you'll specify, for every electrode (you can have several of them), the exact geometry of all the recording sites on that probe, and what are the channels that should be processed by the algorithm. To be more explicit, in the previous example, there is one entry in the dictionary (with key 1), and this entry is itself a dictionary with three entries:

* `channels`: The list of the channels that will be considered by the algorithm. Note that even if your electrode has *N* channels, some can be discarded if they are not listed in this `channels` list.
* `graph`: Not used by the SpyKING CIRCUS, only here to ensure compatibility with [phy](https://github.com/cortex-lab/phy).
* `geometry`: This is where you have to specify all the physical positions of your channels. This is itself a dictionary, whose entries are the numbers of the channels, and whose values are the positions [in um] of the recording sites on your probe.

Note: You only need, in the `geometry` dictionary, to have entries for the channels you are listing in the `channels` list. The code only needs positions for the analyzed channels.
#### Examples

By default, during the install process, the code should copy some default probe files into `/home/user/spyking-circus/probes`. You can have a look at them.

#### How to deal with several shanks?

There are two ways to simply handle several shanks:

* in the `.prb` file, you can create a single large channel group, where all the shanks are far enough apart (for example in the x direction), such that templates will not interact (based on the physical `radius`). If your radius is 200 um, for example, set x to 0 for the first shank, 300 for the second one, and so on; templates will then be confined per shank.
* in the `.prb` file, you can also have several channel groups (see for example adrien.prb in the probes folder), as in the sketch below. What is done by the code, then, is that during internal computations templates are confined to each channel group. However, for graphical purposes, when you use the GUI, the global x/y coordinates across all shanks are used. Therefore, if you do not want to have them plotted on top of each other, you still need to add an x/y padding for all of them.
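As an illustration of the second option, here is a minimal sketch of a two-shank probe file with one channel group per shank, following the conventions of the example probe file above; the channel counts, site spacings, and the 300 um x-padding are purely illustrative values:

```
total_nb_channels = 8
radius            = 100

channel_groups = {
    # shank 1: channels 0-3, a single column of sites at x = 0
    1: {
        'channels': [0, 1, 2, 3],
        'graph'   : [],
        'geometry': {
            0: [0.0,  0.0],
            1: [0.0, 25.0],
            2: [0.0, 50.0],
            3: [0.0, 75.0],
        }
    },
    # shank 2: channels 4-7, same layout shifted by 300 um in x,
    # so the shanks are not plotted on top of each other in the GUI
    2: {
        'channels': [4, 5, 6, 7],
        'graph'   : [],
        'geometry': {
            4: [300.0,  0.0],
            5: [300.0, 25.0],
            6: [300.0, 50.0],
            7: [300.0, 75.0],
        }
    }
}
```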
Parameters that are most likely to be changed:

* `file_format` You must select a supported file format (see [What are the supported formats](index.html#document-code/fileformat)) or write your own wrapper (see [Write your own data format](index.html#document-advanced/datafile)).
* `mapping` This is the path to your probe mapping (see [How to design a probe file](index.html#document-code/probe)).
* `stream_mode` If streams in your data (multiple files, or even chunks within the same file) should be processed together (see [Using multi files](index.html#document-code/multifiles)).
* `overwrite` If True, data are overwritten during filtering, assuming the file format has write access. Otherwise, an external raw_binary file will be created during the filtering step, if any.
* `output_dir` If you want all the files generated by SpyKING CIRCUS to be in a particular directory, instead of next to the raw data.
* `parallel_hdf5` Try to use parallel write for HDF5. Needs to be configured (see [how to install hdf5](index.html#document-introduction/hdf5)).

#### Detection

The detection section is:

```
radius = auto # Radius [in um] (if auto, read from the prb file)
N_t = 5 # Width of the templates [in ms]
spike_thresh = 6 # Threshold for spike detection
peaks = negative # Can be negative (default), positive or both
dead_channels = # If not empty or specified in the probe, a dictionary {channel_group : [list_of_valid_ids]}
weird_thresh = # If not empty, threshold [in MAD] for artefact detection
```

Parameters that are most likely to be changed:

* `N_t` The temporal width of the templates. For *in vitro* data, 5ms seems a good value. For *in vivo* data, you should rather use 3 or even 2ms.
* `radius` The spatial width of the templates. By default, this value is read from the probe file. However, if you want to specify a larger or a smaller value [in um], you can do it here.
* `spike_thresh` The threshold for spike detection. 6-7 are good values.
* `peaks` By default, the code detects only negative peaks, but you can search for positive peaks, or both.
* `dead_channels` You can exclude dead channels either directly in the probe file, with the `channels` list, or with this `dead_channels` parameter. To do so, you must enter a dictionary of the form `{channel_group : [list_of_valid_ids]}`.
* `weird_thresh` Use this to explicitly tell the code to ignore abnormally large peaks: all peaks (in absolute value) higher than `weird_thresh` MAD will be discarded.

#### Filtering

The filtering section is:

```
cut_off = 300, auto # Min and Max (auto=nyquist) cut off frequencies for the band pass butterworth filter [Hz]
filter = True # If True, then a band-pass filtering is performed
remove_median = False # If True, the median over all channels is subtracted from each channel (movement artefacts)
common_ground = # If you want to use a particular channel as a reference ground: should be a valid channel number
sat_value = # Values higher than sat_value are set to 0 during filtering (in % of max dtype) [0,1]
```

Warning: the code performs the filtering of your data by writing to the file itself. Therefore, you *must* have a copy of your raw data elsewhere. Note that as long as you keep the parameter file, you can relaunch the code safely: the program will not filter the data multiple times, thanks to the `filter_done` flag at the end of the configuration file.
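Conceptually, the filtering and threshold steps can be pictured in a few lines of NumPy/SciPy. This is only an illustrative sketch, not the actual implementation, assuming a single already-loaded channel sampled at `fs`:

```
import numpy as np
from scipy.signal import butter, filtfilt

def illustrate_detection(x, fs, cut_off=300.0, spike_thresh=6.0):
    # Band-pass Butterworth filter between cut_off and (nearly) Nyquist,
    # as in the [filtering] section (1.0 is excluded, hence 0.95).
    b, a = butter(3, [cut_off / (fs / 2), 0.95], btype='bandpass')
    y = filtfilt(b, a, x)
    # Detection threshold expressed in MADs of the filtered signal,
    # as in the [detection] section (spike_thresh).
    mad = np.median(np.abs(y - np.median(y)))
    return y, spike_thresh * mad
```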
Parameters that are most likely to be changed:

* `cut_off` The default low cut-off value has been used in various recordings, but you can change it if needed. You can also specify the upper bound of the Butterworth filter.
* `filter` If your data are already filtered by another program, turn this flag to False.
* `remove_median` If you have movement artefacts in your *in vivo* recording, and want to subtract the median activity over all analysed channels from each channel individually.
* `common_ground` If you want to use a particular channel as a reference, and subtract its activity from all others. Note that the activity on this particular channel will thus be null.
* `sat_value` If your recording has saturation problems, these might lead to artefacts while filtering. This option prevents the problem by tagging all values in the raw recording (before filtering) that are higher than `sat_value` times the maximal value allowed by the data type. These values are set to 0 and logged to disk in a file.

#### Triggers

The triggers section is:

```
trig_file = # External stimuli to be considered as putative artefacts [in trig units] (see documentation)
trig_windows = # The time windows of those external stimuli [in trig units]
trig_unit = ms # The unit in which times are expressed: can be ms or timestep
clean_artefact = False # If True, external artefacts induced by triggers will be suppressed from data
dead_file = # Portion of the signals that should be excluded from the analysis [in dead units]
dead_unit = ms # The unit in which times for dead regions are expressed: can be ms or timestep
ignore_times = False # If True, any spike in the dead regions will be ignored by the analysis
make_plots = # Generate sanity plots of the averaged artefacts [Nothing or None if no plots]
```

Parameters that are most likely to be changed:

* `trig_file` The path to the file containing your artefact times and labels. See [how to deal with stimulation artefacts](index.html#document-code/artefacts).
* `trig_windows` The path to the file containing your artefact temporal windows. See [how to deal with stimulation artefacts](index.html#document-code/artefacts).
* `clean_artefact` If you want to remove any stimulation artefacts, defined in the previous files. See [how to deal with stimulation artefacts](index.html#document-code/artefacts).
* `make_plots` The default format used to save the plots of the artefacts, one per artefact, showing all channels. You can set it to None if you do not want any.
* `trig_unit` Whether times/durations in `trig_file` and `trig_windows` are given in timesteps or ms.
* `dead_file` The path to the file specifying the dead portions of the recording that should be excluded from the analysis. See [how to deal with stimulation artefacts](index.html#document-code/artefacts).
* `dead_unit` Whether times/durations in `dead_file` are given in timesteps or ms.
* `ignore_times` If you want to remove any dead portions of the recording, defined in `dead_file`. See [how to deal with stimulation artefacts](index.html#document-code/artefacts).
#### Whitening

The whitening section is:

```
spatial = True # Perform spatial whitening
max_elts = 10000 # Max number of events per electrode (should be compatible with nb_elts)
nb_elts = 0.8 # Fraction of max_elts that should be obtained per electrode [0-1]
output_dim = 5 # Can be in percent of variance explained, or num of dimensions for PCA on waveforms
```

Parameters that are most likely to be changed:

* `output_dim` To save memory, you can reduce the number of features kept to describe a waveform.

#### Clustering

The clustering section is:

```
extraction = median-raw # Can be either median-raw (default), median-pca, mean-pca, mean-raw, or quadratic
sub_dim = 10 # Number of dimensions to keep for local PCA per electrode
max_elts = 10000 # Max number of events per electrode (should be compatible with nb_elts)
nb_elts = 0.8 # Fraction of max_elts that should be obtained per electrode [0-1]
nb_repeats = 3 # Number of passes used for the clustering
make_plots = # Generate sanity plots of the clustering
merging_method = nd-bhatta # Method to perform local merges (distance, dip, folding, nd-folding, bhatta)
merging_param = default # Merging parameter (see docs) (3 if distance, 0.5 if dip, 1e-9 if folding, 2 if bhatta)
sensitivity = 3 # The only parameter to control the clustering. The lower, the more sensitive
cc_merge = 0.95 # If CC between two templates is higher, they are merged
dispersion = (5, 5) # Min and Max dispersion allowed for amplitudes [in MAD]
smart_search = True # Parameter to activate the smart search mode
```

Note: this is a key section, as bad clustering implies bad results. However, the code is very robust to parameter changes.

Parameters that are most likely to be changed:

* `extraction` The method used to estimate the templates. `Raw` methods are slower, but more accurate, as data are read from the files. `PCA` methods are faster, but less accurate, and may lead to some distorted templates. `Quadratic` is slower, and should not be used.
* `max_elts` The number of elements that every electrode will try to collect in order to perform the clustering.
* `nb_repeats` The number of passes performed by the algorithm to refine the density landscape.
* `smart_search` By default, the code collects only a random subset of spikes on all electrodes. However, for long recordings, or if you have low thresholds, you may want to select them in a smarter manner, to avoid missing the large but under-represented ones. If the smart search is activated, the code first samples the distribution of amplitudes on all channels, and then uses a rejection algorithm to select spikes such that the distribution of collected amplitudes is more uniform.
* `cc_merge` After local merging per electrode, this step makes sure that you do not have duplicated templates that may have been spread over several electrodes. All templates with a correlation coefficient higher than this parameter are merged. Remember that the more you merge, the faster the fitting.
* `merging_method` Several methods can be used to perform greedy local merges on each electrode. Each method has a parameter, defined by `merging_param`. This replaces the former parameters `sim_same_elec` and `dip_threshold`.
* `dispersion` The spread of the amplitudes allowed, for every template, around the centroid.
* `make_plots` By default, the code generates sanity plots of the clustering, one per electrode (as illustrated after this list).
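To picture what `cc_merge` compares, here is a hedged sketch of a normalized correlation between two templates, flattened over channels and time (the actual code is more involved, e.g. it also considers temporal lags):

```
import numpy as np

def template_cc(t1, t2):
    # Normalized correlation coefficient between two templates,
    # each of shape (n_channels, n_timesteps).
    a, b = t1.ravel(), t2.ravel()
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

# Pairs with template_cc(...) > cc_merge (e.g. 0.95) would be merged.
```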
#### Fitting

The fitting section is:

```
amp_limits = (0.3, 30) # Amplitudes for the templates during spike detection
amp_auto = True # True if amplitudes are adjusted automatically for every templates
collect_all = False # If True, one garbage template per electrode is created, to store unfitted spikes
ratio_thresh = 0.9 # Ratio of the spike_threshold used while fitting [0-1]. The lower the slower
mse_error = False # If True, RMS is collected over time, to assess quality of reconstruction
```

Parameters that are most likely to be changed:

* `collect_all` If you want to also collect all the spike times at which no templates were fitted. This is particularly useful to debug the algorithm and understand if something is wrong on a given channel.
* `ratio_thresh` If you want to get more spikes for the low-amplitude templates, you can decrease this value. It will slow down the fitting procedure, but collect more spikes for the templates with an amplitude close to threshold.

#### Merging

The merging section is:

```
erase_all = True # If False, a prompt will ask you to remerge if merged has already been done
cc_overlap = 0.85 # Only templates with CC higher than cc_overlap may be merged
cc_bin = 2 # Bin size for computing CC [in ms]
default_lag = 5 # Default length of the period to compute dip in the CC [ms]
auto_mode = 0.75 # Between 0 (aggressive) and 1 (no merging). If empty, GUI is launched
remove_noise = False # If True, meta merging will remove obvious noise templates (weak amplitudes)
noise_limit = 0.75 # Amplitude at which templates are classified as noise
sparsity_limit = 0.75 # Sparsity level (in percentage) for selecting templates as putative noise (in [0, 1])
time_rpv = 5 # Time [in ms] to consider for Refraction Period Violations (RPV) (0 to disable)
rpv_threshold = 0.02 # Percentage of RPV allowed while merging
merge_drifts = True # Try to automatically merge drifts, i.e. non overlapping spiking neurons
drift_limit = 0.1 # Distance for drifts. The higher, the more non-overlapping the activities should be
```

To know more about how those merges are performed and how to use this option, see [Automatic Merging](index.html#document-code/merging).

Parameters that are most likely to be changed:

* `erase_all` If you want to always erase former merging, and skip the prompt.
* `auto_mode` If your recording is stationary, you can try to perform a fully automated merging. By setting a positive value, you control the level of merging performed by the software. Values such as 0.75 should be a good start, but see [Automatic Merging](index.html#document-code/merging) for more details. The lower the value, the more aggressive the merging.
* `remove_noise` If you want to automatically get rid of noise templates (very weak ones), just set this value to True.
* `noise_limit` The normalized amplitude (with respect to the detection threshold) below which templates are considered as noise.
* `sparsity_limit` The sparsity level that a template must reach to be considered as putative noise. Internally, the code sets channels without any useful information to 0, so the sparsity is the ratio between the number of channels with non-zero values and the number of channels that should have carried a signal. Usually, noise templates tend to be defined on only a few channels (if not a single one).
* `time_rpv` When performing merges, the code will check whether the merged unit has a valid ISI distribution without too many RPVs. If yes, the merge is performed; otherwise it is avoided. This is the default time window used to compute RPVs. If you want to disable this feature, set this value to 0 (a sketch of the criterion follows this list).
* `rpv_threshold` The percentage of RPVs allowed while merging; you can increase it if you want to be less stringent.
* `drift_limit` To assess whether a unit is drifting or not, we compute distances between the histograms of the spike times for a given pair of cells, and assess how much they overlap. For drifting units, they should not overlap much, and this value sets the threshold. The higher the value, the more distinct the histograms must be for a merge.
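As an illustration of the `time_rpv`/`rpv_threshold` criterion, a hedged sketch of a refractory-period-violation check on a merged spike train (spike times in ms; illustrative only):

```
import numpy as np

def rpv_fraction(spike_times_ms, time_rpv=5.0):
    # Fraction of inter-spike intervals shorter than the refractory window.
    isi = np.diff(np.sort(spike_times_ms))
    return np.mean(isi < time_rpv) if isi.size else 0.0

# A merge would be rejected if rpv_fraction(merged) > rpv_threshold (e.g. 0.02).
```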
#### Converting

The converting section is:

```
erase_all = True # If False, a prompt will ask you to export if export has already been done
sparse_export = True # If True, data for phy are exported in a sparse format. Need recent version of phy
export_pcs = prompt # Can be prompt [default] or none, all, some
export_all = False # If True, unfitted spikes will be exported as the last Ne templates
```

Parameters that are most likely to be changed:

* `erase_all` If you want to always erase former exports, and skip the prompt.
* `sparse_export` If you have a large number of templates or a very high density probe, you should use the sparse format for phy.
* `export_pcs` If you already know whether you want to have all, some, or no PCs, and want to skip the prompt.
* `export_all` If you used the `collect_all` mode in the `[fitting]` section, you can export the unfitted spike times to phy. In this case, the last *N* templates, where *N* is the number of electrodes, are the garbage collectors.

#### Extracting

The extracting section is:

```
safety_time = 1 # Temporal zone around which spikes are isolated [in ms]
max_elts = 10000 # Max number of events per templates (should be compatible with nb_elts)
nb_elts = 0.8 # Fraction of max_elts that should be obtained per electrode [0-1]
output_dim = 5 # Percentage of variance explained while performing PCA
cc_merge = 0.975 # If CC between two templates is higher, they are merged
noise_thr = 0.8 # Minimal amplitudes are such that amp*min(templates) < noise_thr*threshold
```

This is an experimental section, not used by default in the algorithm, so nothing needs to be changed here.

#### Validating

The validating section is:

```
nearest_elec = auto # Validation channel (e.g. electrode closest to the ground truth cell)
max_iter = 200 # Maximum number of iterations of the stochastic gradient descent (SGD)
learning_rate = 1.0e-3 # Initial learning rate which controls the step-size of the SGD
roc_sampling = 10 # Number of points to estimate the ROC curve of the BEER estimate
test_size = 0.3 # Portion of the dataset to include in the test split
radius_factor = 0.5 # Radius factor to modulate physical radius during validation
juxta_dtype = uint16 # Type of the juxtacellular data
juxta_thresh = 6 # Threshold for juxtacellular detection
juxta_valley = False # True if juxta-cellular spikes are negative peaks
juxta_spikes = # If none, spikes are automatically detected based on juxta_thresh
filter = True # If the juxta channel need to be filtered or not
make_plots = png # Generate sanity plots of the validation [Nothing or None if no plots]
```

This section is only for validation purposes; please get in touch with us if you want to use it. It is an implementation of the [BEER metric](index.html#document-advanced/beer).
### Supported File Formats

To get the list of supported file formats, you need to do:

```
>> spyking-circus help -i
--- Informations ---
| The file formats that are supported are:
|
| -- RAW_BINARY (read/parallel write)
|       Extensions       :
|       Supported streams: multi-files
| -- MCS_RAW_BINARY (read/parallel write)
|       Extensions       : .raw, .dat
|       Supported streams: multi-files
| -- HDF5 (read/write)
|       Extensions       : .h5, .hdf5
|       Supported streams: multi-files
| -- OPENEPHYS (read/parallel write)
|       Extensions       : .openephys
|       Supported streams: multi-folders
| -- KWD (read/write)
|       Extensions       : .kwd
|       Supported streams: multi-files, single-file
| -- NWB (read/write)
|       Extensions       : .nwb, .h5, .hdf5
|       Supported streams: multi-files
| -- NIX (read/write)
|       Extensions       : .nix, .h5, .hdf5
|       Supported streams: multi-files
| -- ARF (read/write)
|       Extensions       : .arf, .hdf5, .h5
|       Supported streams: multi-files, single-file
| -- BRW (read/write)
|       Extensions       : .brw
|       Supported streams: multi-files
| -- NUMPY (read/parallel write)
|       Extensions       : .npy
|       Supported streams: multi-files
| -- RHD (read/parallel write)
|       Extensions       : .rhd
|       Supported streams: multi-files
| -- NEURALYNX (read/parallel write)
|       Extensions       : .ncs
|       Supported streams: multi-files, multi-folders
| -- BLACKROCK (read only)
|       Extensions       : .ns1, .ns2, .ns3, .ns4, .ns5, .ns6
|       Supported streams: multi-files
| -- MDA (read/parallel write)
|       Extensions       : .mda
|       Supported streams: multi-files
---
```

This list tells you which wrappers are available, and you need to specify one in your configuration file with the `file_format` parameter in the `[data]` section. To know more about the mandatory/optional parameters for a given file format, you should do:

```
>> spyking-circus raw_binary -i
--- Informations ---
| The parameters for RAW_BINARY file format are:
|
| -- sampling_rate -- <type 'float'> [** mandatory **]
| -- data_dtype -- <type 'str'> [** mandatory **]
| -- nb_channels -- <type 'int'> [** mandatory **]
|
| -- data_offset -- <type 'int'> [default is 0]
| -- dtype_offset -- <type 'str'> [default is auto]
| -- gain -- <type 'int'> [default is 1]
---
```

Note: depending on the file format, the parameters needed in the `[data]` section of the parameter file can vary. Some file formats are self-contained, while others need extra parameters to reconstruct the data. For every needed parameter, you need to add a line with `parameter = value` in the `[data]` section of the parameter file.

Warning: as explained below, only file formats derived from `raw_binary`, and without streams, are currently supported by the phy and MATLAB GUIs if you want to see the raw data. All other views, which do not depend on the raw data, stay the same, so you can still sort your data.

#### Neuroshare support

Some of the file formats (plexon, …) can be accessed only if you have the [neuroshare](https://pythonhosted.org/neuroshare/) library installed. Note that despite its great simplicity of use, this library provides only very slow read access and no write access to the file formats. Therefore, this is not an efficient wrapper, and it may considerably slow down the code. Feel free to contribute if you have better ideas about what to do!
#### Multi-Channel support

To read native mcd files efficiently, you must have the [pymcstream](https://bitbucket.org/galenea/pymcstream/src) Python package installed. This is a cross-platform package (Windows/Mac/Linux) and the installation procedure can be found on the website.

#### HDF5-like file

It should be easy to implement any HDF5-like file format. Some are already available; feel free to add yours. Note that to allow parallel write with HDF5, you must have a version of HDF5 compiled with the MPI option activated. This means that you need to do a [manual install](index.html#document-introduction/hdf5).

#### Raw binary File

The simplest file format is the raw_binary one. Suppose you have *N* channels

\[c_0, c_1, \ldots, c_{N-1}\]

If you assume that \(c_i(t)\) is the value of channel \(c_i\) at time *t*, then your datafile should be a raw file with values

\[c_0(0), c_1(0), \ldots, c_{N-1}(0), c_0(1), \ldots, c_{N-1}(1), \ldots, c_{N-1}(T)\]

This is simply the flattened version of your recording matrix, of size *N* x *T*.

Note: the values can be saved in your own format (`int16`, `uint16`, `int8`, `float32`). You simply need to specify that to the code.

As you can see by typing `spyking-circus raw_binary -i` (see the output above), there are some extra and required parameters for the raw_binary file format. For example, you must specify the sampling rate `sampling_rate`, the data type `data_dtype` (`int16`, `float32`, …) and also the number of channels `nb_channels`. The remaining parameters are optional, i.e. if not provided, the default values shown there will be used. So the `mydata.params` file for a `mydata.dat` raw binary file will have the following params in the `[data]` section:

```
file_format = raw_binary
sampling_rate = XXXX
data_dtype = XXXX # should be int16,uint16,float32,...
nb_channels = XXXX # as it can not be guessed from the file, it has to be specified
data_offset = XXXX # Optional, if a header with a fixed size is present
gain = XXXX # Optional, if you want a non unitary gain for the channels
```

Warning: the `raw_binary` file format is the default one used internally by SpyKING CIRCUS when the flag `overwrite` is set to `False`. This means several things:

* data are saved as `float32`, so storage can be large;
* `t_start` parameters can not be handled properly if there are streams in the original data; times will be continuous;
* this is currently the **only** file format properly supported by the phy and MATLAB GUIs, if you want to see the raw data.
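For illustration, such a file can be read back with a NumPy memmap. A minimal sketch, assuming a hypothetical headerless 32-channel `int16` recording (adjust the names and values to your data):

```
import numpy as np

# Hypothetical values: adjust to your recording.
nb_channels = 32
data = np.memmap('mydata.dat', dtype=np.int16, mode='r')
data = data.reshape(-1, nb_channels)  # shape: (n_samples, nb_channels)
print(data[:10, 0])  # first 10 samples of channel 0
```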
### Sanity plots

In order to have better feedback on what the algorithm is doing, especially during the clustering phase, the code can produce sanity plots that may be helpful for troubleshooting. This is the flag `make_plots` in the `clustering` section of the parameter file (see the [documentation on the configuration](index.html#document-code/config)). All plots will be stored in the folder `path/mydata/plots`.

Note: if you do not care about those plots, you can set the `make_plots` entries in the configuration file to `None`, and this will speed up the algorithm.

#### View of the activity

The best way to visualize the activity on your electrodes, and to see whether data are properly loaded or whether results make any sense, is to use the dedicated Python GUI and the preview mode (see the visualization section [on Python GUI](index.html#document-GUI/python)).

#### Views of the Clusters

During the clustering phase, the algorithm will save files named `cluster_i`, where *i* is the number of the electrode. A typical plot will look like this:

*A view of the clusters detected by the algorithm, on a given electrode.*

On the two plots in the left column, you can see the rho vs delta plots (see [[<NAME>, 2014]](http://www.sciencemag.org/content/344/6191/1492.short)). The top plot shows the centroids that have been selected, and the bottom plot shows in red all the putative centers that were considered by the algorithm. The four plots on the right show 3D projections of all the spikes collected by that electrode, projected along different axes: x vs y, y vs z and x vs z.

Note: if, in those plots, you see clusters that you would rather have split, and that do not have different colors, then it is likely that the clustering algorithm had wrong parameters. Remember that in the configuration file, `max_clusters` controls the maximal number of clusters per electrode that will be searched (so you may want to increase it if clustering is not accurate enough), and `sim_same_elec` controls how much similar clusters are merged. So again, decrease it if you think you are losing some obvious clusters.

#### Views of the Local Merges

During the clustering, if you set the parameter `debug_plots` to `True` in the `clustering` section, the code will produce (since 0.8.4) sanity plots for the local merges, to show you groups of clusters that were merged together. The method used to compute the distances between clusters can be distance (normalized distance between clusters, assuming they are Gaussian), the [dip-test of unimodality](http://www.nicprice.net/diptest/Hartigan_1985_AnnalStat.pdf), the [folding test](https://hal.archives-ouvertes.fr/hal-01951676/document), or the [Bhattacharyya distance](https://en.wikipedia.org/wiki/Bhattacharyya_distance).

*A view of the local merges that have been made given the selected distance, on a given electrode.*

#### Views of the waveforms

At the end of the clustering phase, the algorithm will save files named `waveform_i`, where *i* is the number of the electrode. A typical plot will look like this:

*A view of the templates, on a given electrode.*

On this plot, you should get an insight into the templates that have been computed out of the clustering phase. For all the clusters detected on that given electrode, you should see all the waveforms peaking on that particular electrode, and the template, in red (in blue, these are the min and max amplitudes allowed during the fitting procedure). Note that if a template is not aligned with the waveforms, this is normal: templates are aligned on the electrode where they have their absolute minimum, and here you are just looking at them on a particular electrode. The key point is that, as you can see, templates should all go below threshold on that particular electrode (dash-dotted line). When a template is flat, it means that it has been removed from the dictionary, because of time shifting and duplication elsewhere.
### Processing streams of data

It is often the case that, during the same recording session, the experimentalist records only some temporal chunks and not the whole experiment. However, because the neurons are the same all over the recording, it is better to process them as a single datafile. The code can handle such streams of data, either from multiple sources (several data files), or within the same source if supported by the file format (chunks in a single file).

#### Chunks spread over several files

You can use the `multi-files` stream mode in the `[data]` section.

Note: if you just want to process several *independent* files, coming from different recording sessions, you need to use the batch mode (see [the documentation on the parameters](index.html#document-code/parameters)).

For the sake of clarity, we assume that all your files are labelled

* `mydata_0.extension`
* `mydata_1.extension`
* …
* `mydata_N.extension`

Launch the code on the first file:

```
>> spyking-circus mydata_0.extension
```

The code will create a parameter file, `mydata_0.params`. Edit the file and, in the `[data]` section, set `stream_mode` to `multi-files`. Relaunch the code on the first file only:

```
>> spyking-circus mydata_0.extension
```

The code will now display something like:

```
##################################################################
#####           Welcome to the SpyKING CIRCUS               #####
#####                                                       #####
#####           Written by P.Yger and O.Marre               #####
##################################################################

Steps        : fitting
GPU detected : True
Number of CPU : 12
Number of GPU : 6
Shared Memory : True
Parallel HDF5 : False
Hostfile      : /home/spiky/spyking-circus/circus.hosts

##################################################################

--- Informations ---
| Number of recorded channels : 256
| Number of analyzed channels : 256
| Data type                   : uint16
| Sampling rate               : 20 kHz
| Header offset for the data  : 1881
| Duration of the recording   : 184 min 37 s
| Width of the templates      : 5 ms
| Spatial radius considered   : 250 um
| Stationarity                : True
| Waveform alignment          : True
| Skip strong artefacts       : True
| Template Extraction         : median-raw
| Streams                     : multi-files (19 found)
---
```

The key line here is the one stating that the code has detected 19 files, and will process them as a single one.

Note: the multi-files mode assumes that all files have the same properties: mapping, data type, data offset, … This has to be the case if they all come from the same recording session.

While running, in its first phase (filtering), two options are possible:

* if your file format allows write access, and `overwrite` is set to `True` in the `data` section, then every individual data file will be overwritten and filtered on site;
* if your file format does not allow write access, or `overwrite` is `False`, the code will filter and concatenate all files into a new file, saved as a `float32` binary file called `mydata_all_sc.extension`. Templates are then detected on this single file, and fitting is also applied to it.

#### Chunks contained in several folders

For some particular file formats (e.g. openephys), all the data are stored within a single folder, and your experiment may be split over several folders.
In order to deal with that, the code can virtually concatenate files found in several folders, using the mode `multi-folders`. When activating such a mode for the `stream_mode`, the code searches all folders at the root of the file currently used, and checks inside each of them whether compatible recordings can be found. If yes, they are all concatenated virtually, such that all the folders are processed as a whole.

#### Chunks contained in the same datafile

For more complex data structures, several recording sessions can be saved within the same datafile. Assuming the file format allows it (see [the documentation on the file formats](index.html#document-code/fileformat)), the code can still stream all those chunks of data in order to process them as a whole. To do so, use exactly the same procedure as above, except that the `stream_mode` may be different, for example `single-file`.

#### Visualizing results from several streams

##### Multi-files

As said, results are obtained on a single file `mydata_all.extension`, resulting from the concatenation of all the individual files. So when you are launching the GUI:

```
>> circus-gui-matlab mydata_0.extension
```

what you are seeing are *all* the spikes on *all* files. Here you can delete/merge templates; see the devoted GUI section for that ([GUI](index.html#document-GUI/index)). Note that you need to process data in such a manner because otherwise, looking at all results individually, you would have a very hard time keeping track of the templates over several files. Plus, you would not get all the information contained in the whole recording (the same neuron could be silent during some temporal chunks, but spiking during others).

#### Getting individual results from streams

Once your manual sorting session is done, you can simply split the results in order to get one result file per data file. To do so, simply launch:

```
>> circus-multi mydata_0.extension
```

This will create several files:

* `mydata_0.results.hdf5`
* `mydata_1.results.hdf5`
* …
* `mydata_N.results.hdf5`

In each of them, you'll find the spike times of the given stream, between *0* and *T_i*, where *T_i* is the length of file *i*.

### Dealing with stimulation artifacts

Sometimes, because of external stimulation, you may end up having some artifacts on top of your recordings. For example, in the case of optogenetic stimulation, shining light next to your recording electrode is likely to contaminate the recording. Or it could be that those artifacts simply affect some portions of your recordings that you would like to easily ignore. The code has several built-in mechanisms to deal with those artifacts, in the `triggers` section of the parameter file.

#### Ignore some saturation artifacts

Your recording device might sometimes, because of external stimulation, saturate. This can be problematic, especially because after filtering, the saturation times will give rise to ringing artifacts that might affect the quality of the sorting. You can prevent such a situation with the `sat_value` parameter in the `[filtering]` section. This value, expressed as a percentage of the maximal range allowed by your data dtype, specifies when the software should consider values as saturated. The times and values of saturation, per channel, will be logged in a file, and during the filtering procedure all these values will be set to 0 in the filtered data.
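A hedged sketch of the tagging rule, assuming `int16` raw data and a hypothetical `sat_value` of 0.99 (the actual code also logs the times and values per channel to disk):

```
import numpy as np

def tag_saturation(raw, sat_value=0.99):
    # raw: (n_samples, n_channels) int16 array.
    limit = sat_value * np.iinfo(np.int16).max
    # Cast before abs() to avoid overflow on the most negative int16 value.
    saturated = np.abs(raw.astype(np.int32)) >= limit
    return saturated  # these samples would be zeroed in the filtered data
```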
#### Ignore some portions of the recording

You can decide to ignore some portions of the recordings because they are corrupted by artifacts.

##### Setting dead periods

In a text file, you must specify all the portions [t_start, t_stop] that you want to exclude from the analysis. The times can be given in ms or in timesteps, and this can be changed with the `dead_unit` parameter. By default, they are assumed to be in ms. Assuming we want to exclude the first 500ms of every second, such a text file will look like:

```
// myartifacts.dead
// Exclude 500 ms every 1000 ms from t=0 until t=10 s
// times are given in 'ms' (set dead_unit = 'ms' in [triggers]
// columns: t_start t_stop
0 500
1000 1500 # this is a comment
2000 2500
...
10000 10500
```

All t_start/t_stop times in the text file are in ms, and you must use one line per portion to exclude. Use `dead_unit` if you want to give times in timesteps.

##### How to use it

Once this file has been created, you should provide it in the `[triggers]` section of the code (see [here](index.html#document-code/config)) with the `dead_file` parameter. You should then activate the option `ignore_times` by setting it to `True`. Once the code is launched, all steps (whitening/clustering/fitting) will only work on spikes that are not in the time periods defined by the `dead_file`.

#### Discard the unphysiological spikes

If you want to prevent unphysiological spikes, and/or residual artifacts that might exist in your data, from being considered by the software during its internal steps (clustering, fitting), you can use the `weird_thresh` parameter in the `[detection]` section. By default, all spikes above a certain threshold (in MAD) are considered, but if a `weird_thresh` value is specified, all threshold crossings higher than this value will be discarded. Note that you should use this only if you have a lot of such events; if there are only a few, the code should be rather robust.

#### Subtract regularly occurring artifacts

In a nutshell, the code is able, from a list of stimulation times, to compute a median-based average artifact, and subtract it automatically from the signal during the filtering procedure.

##### Setting stimulation times

In a first text file, you must specify all the times of your artifacts, each identified by a given label. The times can be given in ms or in timesteps, and this can be changed with the `trig_unit` parameter. By default, they are assumed to be in ms. For example, imagine you have 2 different stimulation protocols, each one inducing a different artifact. The text file will look like:

```
// mytimes.triggers
// Two interleaved stim (0 and 1) are
// played at various times, roughly every
// 500 ms
0 500.2
1 1000.2
0 1500.3
1 2000.1 # this is a comment
...
0 27364.1
1 80402.4
```

This means that stim 0 is displayed at 500.2ms, then stim 1 at 1000.2ms, and so on. All times in the text file are in ms, and you must use one line per time. Use `trig_unit` if you want to give times in timesteps.
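Conceptually, the subtraction performed during filtering resembles the following sketch. This is an illustrative simplification, assuming a float signal, trigger times already converted to sample indices, and the per-stimulus window length defined in the second file described next:

```
import numpy as np

def subtract_artefact(signal, trigger_samples, window):
    # signal: (n_samples, n_channels) float array; window: artefact length in samples.
    snippets = np.stack([signal[t:t + window] for t in trigger_samples])
    template = np.median(snippets, axis=0)  # median-based average artefact
    for t in trigger_samples:
        signal[t:t + window] -= template
    return signal
```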
##### Setting time windows

In a second text file, you must tell the algorithm the time window to consider for each artifact. Using the same example, and assuming that stim 0 produces an artifact of 100ms, while stim 1 produces a longer artifact of 510ms, the file should look like:

```
// Estimated duration of the artifacts
// Stim 0 lasts 100 ms
// Stim 1 lasts 510 ms
0 100 # short opto flash
1 510 # long opto flash
```

Here again, use `trig_unit` if you want to provide times in timesteps.

##### How to use it

Once those two files have been created, you should provide them in the `[triggers]` section of the code (see [here](index.html#document-code/config)) with the `trig_file` and `trig_windows` parameters. You should then activate the option `clean_artefact` by setting it to `True` before launching the filtering step. Note that by default, the code will produce one plot per artifact, showing its time course on all channels during the imposed time window. This is what is subtracted, at all the given times, for this particular stimulation artifact.

*Example of a stimulation artifact on a 252-channel MEA, subtracted during the filtering part of the algorithm.*

Note: if, for some reason, you want to relaunch this step (time windows too small, not enough artifacts, …), you will need to copy the raw data again before relaunching the filtering, because the raw data are *always* filtered on-site.

### Automatic Merging

#### Need for a meta-merging step

Because for high numbers of channels the chance that a cell is split among several templates is high, one needs to merge putative templates belonging to the same cells. This is a classical step in most spike sorting techniques, and traditionally it was performed by a human operator reviewing all templates one by one. The problem is that, with the new generation of dense probes that the code can handle (4225 channels), the output of the algorithm can contain more than 1000 templates, and one cannot expect a human to go through all pairs iteratively.

To automate the procedure, we developed a so-called meta-merging step that quickly identifies pairs of templates that have to be merged. To do so, we first consider only pairs whose template similarity is higher than `cc_overlap`. This avoids considering all possible pairs, and keeps only those that are likely to be the same cell, because their templates are similar.

Note: since 0.8.2, the merging step is included in the default pipeline of the algorithm, in order to simplify evaluation with automatic procedures. However, since we don't want to claim that such a meta-merging is optimal for all datasets, all species, and also for long and non-stationary recordings, we encourage users to look at the full results if the meta-merging is suspicious. You can also automatically remove the noisy templates with the `remove_noise` option in the `merging` section.

#### Comparison of CrossCorrelograms

Then, for all those pairs of cells, we compute the cross-correlation function in a time window of [-100, 100] ms, with a given time bin `cc_bin`. The rationale is that a pair of templates that should be merged should show a dip in the center of its cross-correlogram. To quantify this in an automated manner, we compute the theoretical amount of correlation we should observe if the two cells were independent. This allows us to compare the normal cross-correlogram between the two cells to a "control" one with the same amount of correlation (see figure).

*Difference between a normal cross-correlogram for a given pair of cells, and a `control` version. The central area in between the red dash-dotted lines is the one of interest.*

To quantify the dip, we measure the difference between the cross-correlogram and its shuffled version in a window of interest [`-cc_average`, `cc_average`].
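A hedged sketch of the idea, not the actual implementation: build a binned cross-correlogram between two spike trains and compare its central bins to a control in which one train is jittered (which preserves firing rates but destroys fine temporal structure):

```
import numpy as np

def ccg(t1, t2, lag_ms=100.0, bin_ms=2.0):
    # Binned cross-correlogram of two spike trains (times in ms).
    edges = np.arange(-lag_ms, lag_ms + bin_ms, bin_ms)
    diffs = (t2[None, :] - t1[:, None]).ravel()
    return np.histogram(diffs[np.abs(diffs) <= lag_ms], bins=edges)[0]

def dip_score(t1, t2, rng=np.random.default_rng(0)):
    # Positive score: fewer coincidences in the central bins than in the control,
    # i.e. a dip suggestive of a common refractory period.
    raw = ccg(t1, t2)
    ctrl = ccg(t1, t2 + rng.uniform(-20.0, 20.0, t2.size))
    c = len(raw) // 2
    return ctrl[c - 2:c + 3].mean() - raw[c - 2:c + 3].mean()
```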
#### An iterative procedure with a dedicated GUI

We designed a Python GUI to quickly visualize all those values and allow a human to quickly perform all the merges that need to be done. To launch it with *N* processors, you need to do:

```
>> spyking-circus mydata.extension -m merging -c N
```

The GUI is still work in progress, so any feedback is welcome. The idea is to show, in a single plot, all the putative pairs of cells that have to be merged. As can be seen in the top left panel, every point is a pair of neurons; the x-axis in the upper left panel shows the template similarity (between `cc_merge` and 1), while the y-axis shows the normalized difference between the control CC and the normal CC (see above). In the bottom left plot, the same measure is on the y-axis, while the x-axis shows the CC of the reverse cross-correlogram. **Any pairs along the diagonal are likely to be merged.**

*Meta-merging GUI.*

##### Selecting pairs

Each time you click on a given pair (or select a group of them with the rectangle or lasso selector), the corresponding cross-correlograms are shown in the top right panel (the dash-dotted line is the control). As you can see, there is a clear group of pairs that have a high template similarity (> 0.9) and a high value for the CC metric (> 0.5). So we can select some of them.

*Meta-merging GUI with several pairs selected.*

If you think that all those pairs should be merged, you just need to click on the `Select` button, and then on `Merge`. Once the merge is done, the GUI will recompute the values and you can iterate the process.

Note: the `Suggest Pairs` button suggests pairs of neurons that have a template similarity higher than 0.9 and a high value for the CC metric.

##### Changing the lag

By default, the CC metric is computed within a temporal window of [-5, 5] ms, but this value can be changed if you click on the `Set Window` button. In the bottom right panel, you see all the CCs for all pairs. You can change the way you want them to be sorted, and you can also click there to select some particular pairs.

##### Correcting for temporal lags while merging templates

By default, in the GUI, when a merge between two templates is performed, the spikes of the destroyed template are just assigned to the one that is kept. This is a valid assumption in most cases. However, if you want to be more accurate, you need to take into account a possible time shift between the two templates. This is especially true if you are detecting both positive and negative peaks. If a template is large enough to cross both the positive and negative thresholds, two time-shifted versions of the same template could exist: one centered on the positive peak, and one centered on the negative peak. So when you merge them, you need to apply this time shift between the templates to the spikes. This can be done by setting the `correct_lag` flag in the `[merging]` section of the parameter file to `True`.
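To illustrate the time-shift issue, here is a hedged sketch estimating the lag between two template waveforms on a common channel via the argmax of their cross-correlation (illustrative only; names and approach are assumptions):

```
import numpy as np

def estimate_lag(w1, w2):
    # w1, w2: 1D template waveforms on the same channel, same length.
    xc = np.correlate(w1, w2, mode='full')
    return np.argmax(xc) - (len(w2) - 1)  # lag in samples (0 = aligned)

# Spikes of the discarded template would be shifted by this lag when merging.
```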
##### Exploring Templates

In the middle top plot, you can see on the x-axis the ratio between the peak of the template and the detection threshold on its preferred electrode, and on the y-axis the number of spikes for that given template. If you click on those points, you'll see in the middle bottom plot the template waveform on its preferred electrode.

Note: the `Suggest Templates` button suggests templates that have a peak below or just at the detection threshold. Such templates can exist because of noise during the clustering phase. They are likely to be false templates, because the detection thresholds may have been set too low.

*Meta-merging GUI with several templates selected.*

You can then delete those templates, and the GUI will recompute the scores for all pairs.

##### Saving the results

When you think all merges have been done, you just need to press the `Finalize` button. This will save everything to file, without overwriting your original results. In fact, it will create new files with the suffix `-merged`, such that you need to use that suffix afterwards if you want to view the results in the GUI. Thus, to convert/view those results later, you need to do:

```
>> circus-gui-matlab mydata.extension -e merged
```

Using the GUI
---

### A graphical launcher

For those who do not like using a command line, the program integrates a standalone GUI that can be launched by simply doing:

```
>> spyking-circus-launcher
```

*The GUI of the software. All operations described in the documentation can be performed here.*

### Quick preview GUIs

#### Preview GUI

In order to be sure that the parameters in the configuration file are correct, and before launching the algorithm that will filter the data on-site (and thus mess with them if parameters are wrong), one can use the preview GUI. To do so, simply do:

```
>> spyking-circus path/mydata.extension -p
```

The GUI displays the electrode mapping and the first second of the data, filtered, with the detection thresholds as dash-dotted lines. You can then check that the value of `spike_thresh` used in the parameter file is correct for your own data.

*A snapshot of the preview GUI. You can click/select one or multiple electrodes, and see 1s of the activity, filtered, with the detection threshold.*

Once you are happy with the way data are loaded, you can launch the algorithm.

Note: you can write the value of the threshold to the configuration file by pressing the button `Write thresh to file`.

#### Result GUI

In order to quickly visualize the results of the algorithm, and get a qualitative feeling of the reconstruction, you can use a Python GUI, similar to the previous one, showing the filtered traces superimposed with the reconstruction provided by the algorithm. To do so, simply do:

```
>> spyking-circus path/mydata.extension -r
```

*A snapshot of the result GUI. You can click/select one or multiple electrodes, and see the activity, filtered, with the reconstruction provided by the template matching algorithm (in black).*

Warning: if results are not there yet, the GUI will only show you the filtered traces.

Note: you can show the residuals, i.e. the differences between the raw data and the reconstruction, by ticking the button `Show residuals`.
#### Meta-Merging GUI

See the devoted section on Meta-Merging (see [Automatic Merging](index.html#document-code/merging)).

### Launching the visualization GUIs

You have several options and GUIs to visualize your results; just pick the one you are most comfortable with!

#### Matlab GUI

##### Installing MATLAB

SpyKING CIRCUS will assume that you have a valid installation of MATLAB, and that the `matlab` command can be found in the system `$PATH`. For Windows users, please have a look at this [howto](https://helpdeskgeek.com/windows-10/add-windows-path-environment-variable/). For Unix users (Mac or Linux), simply add the following line to your `.bash_profile` or `.bashrc` file in your `$HOME` directory:

```
export PATH=$PATH:/PATH_TO_YOUR_MATLAB/bin
```

Then relaunch the terminal.

##### Launching the MATLAB GUI

To launch the [MATLAB](http://fr.mathworks.com/products/matlab/) GUI provided with the software, you need of course a valid installation of [MATLAB](http://fr.mathworks.com/products/matlab/), and you should be able to simply do:

```
>> circus-gui-matlab path/mydata.extension
```

Note that in the near future, we plan to integrate all the views of the [MATLAB](http://fr.mathworks.com/products/matlab/) GUI into [phy](https://github.com/cortex-lab/phy).

To reload a particular dataset that has been saved with a special `suffix`, you just need to do:

```
>> circus-gui-matlab path/mydata.extension -e suffix
```

This allows you to load a sorting session that has been saved but not finished. Also, if you want to load the results obtained by the [Meta Merging GUI](index.html#document-code/merging), you need to do:

```
>> circus-gui-matlab path/mydata.extension -e merged
```

#### Phy GUI

To launch the [phy](https://github.com/cortex-lab/phy) GUI (pure Python, using OpenGL), you need a valid installation of [phy](https://github.com/cortex-lab/phy) 2.0 and [phylib](https://github.com/cortex-lab/phylib).

##### Installing phy 2.0

If you want to use the phy GUI to visualize your results, you may need to install [phy](https://github.com/cortex-lab/phy) 2.0. If you have installed SpyKING CIRCUS within a conda environment, first activate it:

```
>> conda activate circus
```

Then, once you are in the environment, install [phy](https://github.com/cortex-lab/phy) 2.0:

```
(circus) >> pip install colorcet pyopengl qtconsole requests traitlets tqdm joblib click mkdocs dask toolz mtscomp
(circus) >> pip install --upgrade https://github.com/cortex-lab/phy/archive/master.zip
(circus) >> pip install --upgrade https://github.com/cortex-lab/phylib/archive/master.zip
```

##### Launching the phy 2.0 GUI

If [phy](https://github.com/cortex-lab/phy) 2.0 is installed, you should be able to simply do:

```
>> spyking-circus path/mydata.extension -m converting -c N
```

Followed by:

```
>> circus-gui-python path/mydata.extension
```

As you see, you first need to export the data to the [phy](https://github.com/cortex-lab/phy) format using the `converting` option (you can use several CPUs with the `-c` flag if you want to export a lot of Principal Components). This is because, as long as [phy](https://github.com/cortex-lab/phy) is still under development, this is not the default output of the algorithm.
Depending on your parameters, a prompt will ask you whether you want to compute all/some/no Principal Components for the GUI. While this may be interesting if you are familiar with classical clustering and PCs, you should not consider exploring PCs for large datasets.

Note: if you want to export the results that you have processed after the [Meta Merging GUI](index.html#document-code/merging), you just need to specify the extension to choose for the export:

```
>> spyking-circus path/mydata.extension -m converting -e merged
>> circus-gui-python path/mydata.extension -e merged
```

### Panels of the GUIs

In the following, we will mostly talk about the MATLAB GUI, because it is still the default one for the algorithm, but all the concepts are similar across all GUIs.

Warning: the phy GUI is way nicer, but is currently still under active development. We are not responsible for the possible bugs that may be encountered while using it.

#### Matlab GUI

*A view of the [MATLAB](http://fr.mathworks.com/products/matlab/) GUI.*

As you can see, the GUI is divided into several panels:

* **A** A view of the templates
* **B** A view of the features that gave rise to this template
* **C** A view of the amplitudes over time
* **D** A view for putative repeats, depending on your stimulation
* **E** A view of the Inter Spike Interval Distribution, for that given template
* **F** A view of the Auto/Cross Correlation (press Show Correlation)

To know more about what to look at in those views, see [Basis of Spike Sorting](index.html#document-GUI/sorting).

Note: at any time, you can save the status of your sorting session by pressing the `Save` button. The suffix next to that box will be automatically added to the data, such that you do not erase anything. To reload a particular dataset that has been saved with a special `suffix`, you just need to do:

```
>> circus-gui-matlab path/mydata.extension -e suffix
```

#### Python GUI

*A view of the Python GUI, derived from [phy](https://github.com/cortex-lab/phy), and oriented toward template matching algorithms. To use it, you need a valid version of [phy](https://github.com/cortex-lab/phy) and [phylib](https://github.com/cortex-lab/phylib).*

To know more about how to use [phy](https://github.com/cortex-lab/phy) and [phylib](https://github.com/cortex-lab/phylib), see the devoted websites. If you want an exhaustive description of the sorting workflow performed with [phy](https://github.com/cortex-lab/phy), please see the [phy documentation](https://phy.readthedocs.io/en/latest/).

### Basis of spike sorting

In this section, we will review the basics of spike sorting, and the key operations that are performed by a human operator in order to review and assess the quality of the data. The goal here is not to cover all the operations that one needs to do when doing spike sorting, but rather to show you how key operations can be performed within the [MATLAB](http://fr.mathworks.com/products/matlab/) GUI. If you want a similar description of those steps with [phy](https://github.com/cortex-lab/phy), please see the [phy documentation](http://phy-contrib.readthedocs.io/en/latest/template-gui/).

Note: all operations are similar across GUIs, so the key concepts here can be transposed to the python/phy GUIs.

#### Viewing a single template

The algorithm outputs different templates. Each corresponds to the average waveform that a putative cell evokes on the electrodes.
The index of the template displayed is in the top right corner. The index can be changed by typing a number in the box or clicking on the plus/minus buttons below it.

*A view of the templates.*

The large panel A shows the template on every electrode. You can click on the `Zoom in` and `Zoom out` buttons to get a closer look or step back. To adjust the view, you can change the scaling factor for the *X* and *Y* axes by changing the values in the `X scale` and `Y scale` boxes just next to the template view. `Reset` will restore the default view. `Normalize` will automatically adapt the scale to see most of your template.

*A view of the features.*

Panel B shows the cluster from which this template has been extracted. Unless you want to redefine the clusters, you don't have to worry about them. You just need to check that the clustering did effectively split clusters. If you see here what you think are two clusters that should have been split, then maybe the parameters of the clustering need to be adjusted (see the [documentation on parameters](index.html#document-code/config)).

*A view of the Inter-Spike Intervals and the AutoCorrelation.*

Panel E shows the ISI (inter-spike interval). You can look at it from 0 to 25 ms, or from 0 to 200 ms if the button `Big ISI` is clicked. Above this panel, the % of refractory period violations is indicated, and a ratio indicates the number of violations / the total number of spikes. Panel F shows the auto-correlation, and you can freely change the time bin.

Note: if you are viewing two templates (see below), then Panel E shows the combined ISI for the two templates, and Panel F shows the cross-correlogram between the two templates.

#### Cleaning a template

*A view of the amplitudes over time.*

The template is matched all over the data, with a different amplitude each time. Each point of panel C represents a match: the *y*-axis is the amplitude, and the *x*-axis the time. When there is a refractory period violation (two spikes too close), the bigger spike appears as a yellow point, and the smaller one in green. The 3 grey lines correspond to the average amplitude, the minimal amplitude and the maximal one.

Many templates should have a large number of amplitudes around 1, as a sanity check that the template matching algorithm is working. However, sometimes, some others can have amplitudes that are abnormally small or large. These latter points are usually "wrong matches": they don't correspond to real occurrences of the template. Rather, the algorithm just fitted noise here, or the residual that remains after subtracting templates. Of course, you don't want to consider them as real spikes. So these amplitudes need to be separated from the other ones and removed.

Note: the minimal amplitude is now automatically handled during the fitting procedure, so there should be no need to adjust the lower amplitude.

For this purpose, you need to define the limits of the area of good spikes. To define the minimal amplitude, click on the button `Set Min`, and then click on panel D. The grey line corresponding to the minimal amplitude will be adjusted to pass through the point on which you clicked. The same process holds for `Set Max`.

In some cases, for long recordings where you have a drift, you would like to have an amplitude threshold varying over time. To do so, you need to first define an average amplitude over time. Click on `Define Trend` and see if the grey line follows the average amplitude over time.
If not, you can try to modify the number right next to the button: if its value is 10, the whole duration will be divided into 10 intervals, and the median amplitude will be computed over each of these intervals. Alternatively, you can define this average over time manually by clicking on the `Define Trend Manually` button, then clicking on all the places through which this trend should pass in panel C, and then pressing enter.

Once you have set the amplitude min and max correctly, you can split your template in two by clicking on the `Split from Lims` button. The template will be duplicated: one template will only keep the points inside these limits, the other one will keep the points outside.

#### Viewing two templates[¶](#viewing-two-templates)

All these panels can also be used to compare two templates. For this, define the second template in the `Template 2` box (top right), and click on the button `View 2`. This button switches between viewing a single template and viewing two at the same time, in blue and red. In E, you will get the ISI of the merged spike trains, and in F the cross-correlogram between the two cells.

##### Suggestion of matches[¶](#suggestion-of-matches)

At any time, you can ask the GUI to suggest the template closest to the one you are currently looking at, by clicking on `Suggest Similar`. By default, the GUI will select the best match among all templates. If the box `Same Elec` is ticked, then the GUI will only give you the best matches on that electrode. You should then be able to see, in the feature space (Panel B), the two distinct clusters. Otherwise, because templates are built from points gathered on different electrodes, this comparison does not make sense. If you want to see the *N*-th best match, just enter *N* in the input box next to the `Suggest Similar` button.

##### Merging two templates[¶](#merging-two-templates)

Very often a single cell is split by the algorithm into different templates. These templates thus need to be merged. When you are looking at one cell, click on the `Suggest Similar` button to compare it to templates of similar shape. If the number next to this button is 1, you will compare it to the most similar one; if it is 2, to the second most similar one, and so on. You will automatically be switched to the `View 2` mode (see above). In the middle left, a number between 0 and 1 indicates a coefficient of similarity between the two templates (1 = perfect similarity). By ticking the `Normalize` box, the two templates will be normalized to the same maximum.

There are many ways to decide if two templates should be merged or not, but most frequently people look at the cross-correlogram: if this is the same cell, there should be a clear dip in the middle of the cross-correlogram, indicating that two spikes of the two templates cannot be emitted too close to each other, thus respecting the refractory period.

A view of the MATLAB GUI

To merge the two templates together, click on the `Merge` button. The spikes from the two cells will be merged, and only the template of the first one will be kept. Note that the algorithm errs on the side of over-dividing cells into several templates rather than the opposite, because it is much easier to merge cells than to cluster them further. So you will probably need to do that many times.
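If you want to inspect this criterion outside the GUI, here is a minimal sketch of such a cross-correlogram computation with numpy; the spike-time arrays (in time steps) and the bin settings are hypothetical, and would typically be read from the result files described later in this document:

```
import numpy as np

def crosscorrelogram(spikes_1, spikes_2, bin_size=20, n_bins=50):
    """Histogram of the time differences (in time steps) between two
    spike trains, restricted to +/- n_bins * bin_size around zero."""
    window = bin_size * n_bins
    counts = np.zeros(2 * n_bins, dtype=np.int64)
    for t in spikes_1:
        # only the spikes of the second train falling close to t matter
        nearby = spikes_2[(spikes_2 > t - window) & (spikes_2 < t + window)]
        counts += np.histogram(nearby - t, bins=2 * n_bins,
                               range=(-window, window))[0]
    return counts
```

A clear dip in the central bins, of the order of the refractory period, is what supports merging the two templates.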
Note Have a look at the Meta Merging GUI, made to perform all the obvious merges in your recordings more quickly (see [Automatic Merging](index.html#document-code/merging))

#### Destroying a template[¶](#destroying-a-template)

At any time, if you want to throw away a template because it is too noisy, you just need to click on the `Kill` button. The template will be destroyed

Warning There is currently no `Undo` button in the [MATLAB](http://fr.mathworks.com/products/matlab/) GUI. So please consider saving your sorting session regularly, or consider using [phy](https://github.com/cortex-lab/phy)

#### Repeats in the stimulation[¶](#repeats-in-the-stimulation)

To display a raster, you need a file containing the beginning and end time of each repeat for each type of stimulus. This file should be a [MATLAB](http://fr.mathworks.com/products/matlab/) file containing two variables, which should be [MATLAB](http://fr.mathworks.com/products/matlab/) cell arrays:

* `rep_begin_time{i}(j)` should contain the start time of the j-th repeat for the i-th type of stimulus.
* `rep_end_time{i}(j)` should contain the end time of the j-th repeat for the i-th type of stimulus.

The times should be specified in sample numbers. These two variables should be stored as a `mat` file called `path/mydata/mydata.stim`, placed in the same directory as the output files of the algorithm. If available, it will be loaded by the GUI and will help you to visualize trial-to-trial responses of a given template.
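As a sketch of how such a file could be generated from Python (assuming `scipy` is available; numpy object arrays are saved as MATLAB cell arrays, and the repeat times below are purely hypothetical):

```
import numpy as np
from scipy.io import savemat

# hypothetical repeat windows, in sample numbers, for two stimulus types
rep_begin_time = np.empty(2, dtype=object)
rep_end_time = np.empty(2, dtype=object)
rep_begin_time[0] = np.array([1000, 21000, 41000])  # starts, stimulus 1
rep_end_time[0] = np.array([11000, 31000, 51000])   # ends, stimulus 1
rep_begin_time[1] = np.array([61000, 81000])        # starts, stimulus 2
rep_end_time[1] = np.array([71000, 91000])          # ends, stimulus 2

# appendmat=False keeps the .stim name instead of appending .mat
savemat('path/mydata/mydata.stim',
        {'rep_begin_time': rep_begin_time, 'rep_end_time': rep_end_time},
        appendmat=False)
```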
#### Give a grade to a cell[¶](#give-a-grade-to-a-cell)

Once you have merged a cell and are happy about it, you can give it a grade by clicking on the `O` button. Clicking several times on it will go through different letters from A to E. This extra information can be helpful depending on the analysis you want to perform with your data.

#### Saving your results[¶](#saving-your-results)

To save the results of your post-processing, click on the `Save` button. A number of files will be saved, with the suffix written in the box right next to the save button. To reload a given spike sorting session, just enter this suffix after the file name when using the `circus-gui-matlab` command (see [documentation on configuration file](index.html#document-GUI/launching)):

```
>> circus-gui-matlab mydata.extension -e suffix
```

Advanced Information[¶](#advanced-informations)
---

### Choosing the parameters[¶](#choosing-the-parameters)

Only a few parameters are likely to be modified by the user in the parameter file, depending on the type of data considered. If parameters are not optimal, the code may suggest that you change them. If you want more precise feedback for a given dataset, do not hesitate to ask questions on our Google group <https://groups.google.com/forum/#!forum/spyking-circus-users>, or contact us directly by email.

Note The longer the recording, the better the code will work. If you have several chunks of recordings, you should concatenate everything into a single large data file and provide it to the algorithm. This can be done automatically with the `multi-file` mode (see [here](index.html#document-code/multifiles)). However, for long recordings, you should turn on the `smart_search` mode (see below).

#### In vitro[¶](#in-vitro)

##### Retina[¶](#retina)

1. Templates observed are rather large, so `N_t = 5ms` is a decent value. If your final templates are smaller, you should reduce this value, as it reduces the memory usage.
2. A spike can be seen up to 250um away from its initiation site, so this is the default `radius` you should have either in your probe file or in the parameters.
3. Depending on the density of your array, we found that `max_cluster=10` is a decent value. Few electrodes have more than 10 distinct templates.

#### In vivo[¶](#in-vivo)

##### Cortex/Hippocampus/Superior Colliculus[¶](#cortex-hippocampus-superior-colliculus)

1. Templates observed are rather small, so `N_t = 2/3ms` is a decent value. Note that if your templates end up being smaller, you should reduce this value, as it reduces the memory usage.
2. A spike can be seen up to 100um away from its initiation site, so this is the default `radius` you should have either in your probe file or in the parameters.
3. Depending on the density of your electrodes, we found that `max_cluster=10/15` is a decent value.

Note If you see too many templates that seem to be mixtures of two templates, this is likely because the automatic merges performed internally are too aggressive. You can change that by playing with the `cc_merge` and `sim_same_elec` parameters (see the [FAQ](index.html#document-issues/faq))

#### Low thresholds or long recordings[¶](#low-thresholds-or-long-recordings)

For long recordings, or if you have low thresholds and a lot of Multi-Unit Activity (MUA), you should consider turning the `smart_search` mode in the `clustering` section to `True`. Such a mode may become the default in a future release. Instead of randomly selecting a subset of spikes on all channels, the smart search implements a rejection method algorithm that will try to sample all the amplitudes more uniformly, in order to be sure that all spikes are collected.

#### Not so dense probes[¶](#not-so-dense-probes)

If you have single channel recordings, or electrodes that are spaced apart by more than 50um, then you should set the `cc_merge` parameter in the `[clustering]` section to 1. Why? Because this parameter ensures that templates that are scaled copies of each other are not merged automatically. When templates span only a few channels, amplitude is a valuable piece of information that you do not want to discard in order to separate them.

### Writing your custom file wrapper[¶](#writing-your-custom-file-wrapper)

Since 0.5, SpyKING CIRCUS can natively read/write several file formats, in order to ease your sorting workflow. By default, some generic file formats are already implemented (see [the documentation on the file formats](index.html#document-code/fileformat)), but you can also write your own wrapper in order to read/write your own custom datafile. Note that we did not use [neo](https://github.com/NeuralEnsemble/python-neo), and we recommend not to do so, because your wrapper should have some functionalities not yet allowed by [neo](https://github.com/NeuralEnsemble/python-neo):

* it should allow memory mapping, i.e. reading only chunks of your data at a time, slicing either by time or by channels.
* it should read data in their native format, as they will internally be turned into `float32`
* it could allow streaming, if data are internally stored in several chunks

To do so, you simply need to create an object that will inherit from the `DataFile` object described in `circus/files/datafile.py`. The easiest way to understand the structure is to have a look at `circus/files/raw_binary.py` as an example of such a datafile object. If you have questions while writing your wrapper, do not hesitate to get in touch with us.
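To fix ideas before the detailed walk-through below, here is a minimal, hedged skeleton of such a wrapper (the import path is an assumption based on the layout above; check `circus/files/datafile.py` and `circus/files/raw_binary.py` for the authoritative structure):

```
from circus.files.datafile import DataFile  # assumed import path


class MyDataFile(DataFile):

    description = "mydatafile"
    extension = [".myextension"]
    parallel_write = False
    is_writable = False

    # parameters that cannot be inferred from the header of the file
    _required_fields = {'sampling_rate': float, 'nb_channels': int}
    _default_values = {'gain': 1.}

    def _read_from_header(self):
        # open the file, set self._shape = (duration, nb_channels),
        # and return a dictionary of extra parameters
        raise NotImplementedError()

    def read_chunk(self, idx, chunk_size, padding=(0, 0), nodes=None):
        # return the idx-th temporal block of the data, as float32
        raise NotImplementedError()
```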
The speed of the algorithm may slow down a little, depending on your wrapper. For example, we currently provide an example of a wrapper based on [neuroshare](http://neuroshare.sourceforge.net/index.shtml) (mcd files). This wrapper works, but is slow and inefficient, because the [neuroshare](http://neuroshare.sourceforge.net/index.shtml) API is slow on its own.

#### Mandatory attributes[¶](#mandatory-attributes)

Here are the class attributes that you must define:

```
description = "mydatafile"      # Description of the file format
extension = [".myextension"]    # Extensions allowed
parallel_write = False          # Can be written in parallel (using the comm object)
is_writable = False             # Can be written
is_streamable = ['multi-files'] # If the file format can support streams of data ('multi-files' is a default, but can be something else)
_shape = None                   # The total shape of the data (nb time steps, nb channels) across streams, if any
_t_start = None                 # The global t_start of the data (0 by default)
_t_stop = None                  # The final t_stop of the data, across all streams if any
_params = {}                    # The dictionary where all attributes will be saved
```

Note that the datafile object has an internal dictionary `_params` that contains all the values provided by the Configuration Parser, i.e. read from the parameter file in the data section. For a given file format, you can specify:

```
# This is a dictionary of values that need to be provided to the constructor, with the corresponding type
_required_fields = {}
```

This is the list of mandatory parameters, along with their types, that have to be specified in the parameter file, because they cannot be inferred from the header of your data file. For example:

```
_required_fields = {'sampling_rate' : float, 'nb_channels' : int}
```

Then you can also specify some additional parameters that may have a default value. If they are not provided in the parameter file, this default value is used. For example:

```
# This is a dictionary of values that may have a default value, if not provided to the constructor
_default_values = {'gain' : 1.}
```

In the end, there are 5 mandatory attributes that the code will require for any given file format. Those should be stored in the `_params` dictionary:

> * `nb_channels`
> * `sampling_rate`
> * `data_dtype`
> * `dtype_offset`
> * `gain`

#### Custom methods[¶](#custom-methods)

Here is the list of the functions that you should implement in order to have a valid wrapper.

##### Basics IO[¶](#basics-io)

You must provide functions to open/close the datafile:

```
def _open(self, mode=''):
    '''
    This function should open the file; mode can be 'r' to read only, or 'w' to write
    '''
    raise NotImplementedError('The open method needs to be implemented for file format %s' % self.description)

def _close(self):
    '''
    This function closes the file
    '''
    raise NotImplementedError('The close method needs to be implemented for file format %s' % self.description)
```

##### Reading values from the header[¶](#reading-values-from-the-header)

You need to provide a function that will read data from the header of your datafile:

```
def _read_from_header(self):
    '''
    This function is called only if the file is not empty, and should fill the values in the constructor such as _shape.
    It returns a dictionary that will be added to self._params, based on the constraints given by _required_fields and _default_values.
    '''
    raise NotImplementedError('The _read_from_header method needs to be implemented for file format %s' % self.description)
```

Such a function must:

* set `_shape` to (duration, nb_channels)
* set `_t_start` if not 0
* return a dictionary of parameters that will be used, given the constraints obtained from the values in `_required_fields` and `_default_values`, to create the DataFile

##### Reading chunks of data[¶](#reading-chunks-of-data)

Then you need to provide a function to load a block of data, with a given size:

```
def read_chunk(self, idx, chunk_size, padding=(0, 0), nodes=None):
    '''
    Assuming the analyze function has been called before, this is the main function used by the code, in all steps, to get data chunks. More precisely, assuming your dataset can be divided in nb_chunks (see analyze) of temporal size chunk_size:
    - idx is the index of the chunk you want to load
    - chunk_size is the temporal size of those chunks, in time steps
    - if the data loaded are data[idx:idx+1], padding should add some offsets, in time steps, such that we can load data[idx+padding[0]:idx+padding[1]]
    - nodes is a list of nodes, between 0 and nb_channels
    '''
    raise NotImplementedError('The get_data method needs to be implemented for file format %s' % self.description)
```

Note that for convenience, in such a function, you can obtain the local t_start and t_stop by using the method `t_start, t_stop = _get_t_start_t_stop(idx, chunk_size, padding)` (see `circus/files/raw_binary.py` for an example). This may make it easier to slice your datafile. In the end, data must be returned as `float32`, and to do so, you can also use the internal method `_scale_data_to_float32(local_chunk)`.

##### Writing chunks of data[¶](#writing-chunks-of-data)

This method is required only if your file format allows write access:

```
def write_chunk(self, time, data):
    '''
    This function writes data at a given time.
    - time is expressed in time steps
    - data must be a 2D matrix of size time_length x nb_channels
    '''
    raise NotImplementedError('The set_data method needs to be implemented for file format %s' % self.description)
```

#### Streams[¶](#streams)

Depending on the complexity of your file format, you can allow several ways of streaming into your data. The way to define streams is rather simple, and by default, all file formats can be streamed with a mode called `multi-files`. This is the former `multi-files` mode that we used to have in 0.4 versions (see [multi files](index.html#document-code/multifiles)):

```
def set_streams(self, stream_mode):
    '''
    This function is only used for file formats supporting streams, and needs to return a list of datafiles, with an appropriate t_start for each of them. Note that the results will be using the times defined by the streams. You can do anything regarding the keyword used for the stream mode, but multi-files is implemented by default. This will allow every file format to be streamed from multiple sources, and processed as a single file.
    '''
    if stream_mode == 'multi-files':
        dirname = os.path.abspath(os.path.dirname(self.file_name))
        all_files = os.listdir(dirname)
        fname = os.path.basename(self.file_name)
        fn, ext = os.path.splitext(fname)
        head, sep, tail = fn.rpartition('_')
        mindigits = len(tail)
        basefn, fnum = head, int(tail)
        fmtstring = '_%%0%dd%%s' % mindigits
        sources = []
        to_write = []
        global_time = 0
        params = self.get_description()
        while fname in all_files:
            new_data = type(self)(os.path.join(os.path.abspath(dirname), fname), params)
            new_data._t_start = global_time
            global_time += new_data.duration
            sources += [new_data]
            fnum += 1
            fname = basefn + fmtstring % (fnum, ext)
            to_write += ['We found the datafile %s with t_start %s and duration %s' % (new_data.file_name, new_data.t_start, new_data.duration)]
        print_and_log(to_write, 'debug', logger)
        return sources
```

Note When working with streams, you must always define attributes (such as `t_start`, `duration`, …) that are local, and defined only for each stream.

As you can see, `set_streams` is a function that, given a `stream_mode`, will read the parameters and return a list of DataFiles, created by slightly changing those parameters. In the case of `multi-files`, this is just a change in the file names, but for some file formats, streams are embedded within the same data structure, and not spread over several files. For example, if you have a look at the file `circus/files/kwd.py`, you can see that there is also a stream mode called `single-file`. If this mode is enabled, the code will process all chunks of data in the HDF5 file, sorted by their keys, as a single giant data file. This is a common situation in experiments: chunks of data are recorded at several times, but in the same data file. Because they originate from the same experiment, it is better to process them as a whole.

Once those functions are implemented, you simply need to add your wrapper to the list defined in `circus/files/__init__.py`. Or get in touch with us to make it available in the default trunk.

#### Parallelism[¶](#parallelism)

In all your wrappers, if you want to deal with parallelism and do read/write accesses that depend on MPI, you have access to an object `comm` which is the MPI communicator. Simply add at the top of your python wrapper:

```
from circus.shared.mpi import comm
```

And then have a look for example at `circus/files/hdf5.py` to understand how this is used.
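As a sketch of what such MPI-aware code could look like (a hypothetical guard, not taken from the actual wrappers; `comm` behaves like a standard mpi4py communicator):

```
from circus.shared.mpi import comm

if comm.rank == 0:
    # hypothetical example: only the first MPI process writes the header,
    # while the other processes wait for it to finish
    write_header()  # assumed helper, for illustration only
comm.Barrier()
```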
#### Logs[¶](#logs)

In all your wrappers, if you want to log some information to the log files (in addition to what is logged by default in the DataFile class), you can use the `print_and_log` function. Simply add at the top of your wrapper:

```
from circus.shared.messages import print_and_log
import logging
logger = logging.getLogger(__name__)
```

Then, if you want to log something, the syntax of this function is:

```
>> print_and_log(list_of_lines, 'debug', logger)
```

### Extra steps[¶](#extra-steps)

The code comes with some additional methods that are not executed by default, but that could still be useful. You can view them by simply doing:

```
>> spyking-circus -h
```

#### Merging[¶](#merging)

This option will launch the Meta Merging GUI, allowing a fast merging of obvious pairs, based on some automatic computations performed on the cross-correlograms. To launch it, simply use:

```
>> spyking-circus path/mydata.extension -m merging -c N
```

Note This merging step will not affect your main results, and will generate additional files with the suffix `merged`. You can launch it safely at the end of the fitting procedure, and try various parameters. To know more about how those merges are performed, see [Automatic Merging](index.html#document-code/merging).

Note that afterwards, if you want to visualize this `merged` result with the GUIs, you need to use the `-e` parameter, as for example:

```
>> circus-gui-matlab path/mydata.extension -e merged
```

#### Thresholding[¶](#thresholding)

In some cases, you may not want to spike sort the data, but may only be interested in all the times at which you have threshold crossings, i.e. putative events or Multi-Unit Activity (MUA). Note that the denser the probe, the more you will overestimate the real MUA, because of spikes being counted multiple times. To launch it, simply use:

```
>> spyking-circus path/mydata.extension -m thresholding -c N
```

Note This thresholding step will produce a file `mydata/mydata.mua.hdf5` in which you will have one entry per electrode, with all the times (and amplitudes) at which a threshold crossing has been detected. [More on the MUA extraction](index.html#document-advanced/mua)

#### Gathering[¶](#gathering)

The most important one is the `gathering` option. This option allows you, while the fitting procedure is still running, to collect the data that have already been generated and save them as a temporary result. This method uses the fact that temporal chunks are processed sequentially, so you can, at any time, review what has already been fitted. To do so, simply do:

```
>> spyking-circus path/mydata.extension -m gathering -c N
```

Warning *N* must be equal to the number of nodes that are currently fitting the data, because you will collect the results from all of them

Note that the data will be saved as if they were the final results, so you can launch the GUI and review the results. If nodes have different speeds, you may see gaps in the fitted chunks, because some may be slower than others. The point of this `gathering` function is not to provide you with an *exhaustive* view of the data, but simply to make sure that everything is working fine.

#### Converting[¶](#converting)

As already said in the GUI section, this function allows you to export your results into the [phy](https://github.com/cortex-lab/phy) format. To do so, simply do:

```
>> spyking-circus path/mydata.extension -m converting -c N
```

During the process, you have the option to export or not the Principal Components for all the spikes that have been found, and [phy](https://github.com/cortex-lab/phy) will display them. Note that while it is safe to export all of them for small datasets, this will not scale for very large datasets with millions of spikes.

Warning For millions of spikes, we do not recommend exporting *all* Principal Components. You can export only *some*, but then keep in mind that you cannot manually redefine your clusters in [phy](https://github.com/cortex-lab/phy)

#### Deconverting[¶](#deconverting)

This option will allow you to convert your results back from phy to the MATLAB GUI. This can be useful if you want to compare results between the GUIs, or if you need to switch because of missing functionalities.
To convert the data, simply use:

```
>> spyking-circus path/mydata.extension -m deconverting
```

Note If you worked with data and a particular extension, then you will need to specify the extension:

```
>> spyking-circus path/mydata.extension -m deconverting -e extension
```

#### Extracting[¶](#extracting)

This option allows the user to get, given a list of spike times and cluster ids, their own templates. For example, one could perform the clustering with one's own method and, given the results of that algorithm, extract templates and simply launch the template matching part in order to resolve overlapping spikes. To perform such a workflow, you just need to do:

```
>> spyking-circus path/mydata.extension -m extracting,fitting
```

Warning This option has not yet been tested during the integration in this 0.4 release, so please contact us if you are interested.

#### Benchmarking[¶](#benchmarking)

This option allows the user to generate synthetic ground truth, and assess the performance of the algorithm. We are planning to move it into a proper test suite, and make its usage more user friendly. Currently, this is a bit undocumented and for internal use only. In a nutshell, five types of benchmarks can be performed from an already processed file:

* `fitting` The code will select a given template, and inject multiple shuffled copies of it at various rates, at random places
* `clustering` The code will select a given template, and inject multiple shuffled copies of it at various rates and various amplitudes, at random places
* `synchrony` The code will select a given template, and inject multiple shuffled copies of it on the same electrode, with a controlled pairwise correlation coefficient between those cells
* `smart-search` To test the effect of the smart search. 10 cells are injected with various rates, and one has a low rate compared to the others.
* `drifts` Similar to the clustering benchmark, but the amplitudes of the cells drift in time, with random slopes

#### Validating[¶](#validating)

This method allows you to compare the performance of the algorithm to that of an optimized classifier. This is an implementation of the BEER (Best Ellipsoidal Error Rate) estimate, as described in [[Harris et al, 2000]](http://robotics.caltech.edu/~zoran/Reading/buzsaki00.pdf). Note that the implementation is slightly more generic, and requires the installation of `sklearn`. To use it, you need to have, if your datafile is `mydata.extension`, a file named `mydata/mydata.npy` which is simply an array of all the ground truth spike times. To know more about the BEER estimate, see the devoted documentation (see [More on the BEER estimate](index.html#document-advanced/beer))
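As a sketch, such a file can be written with numpy (the spike times below are hypothetical, and presumably expressed in time steps, as elsewhere in the code):

```
import numpy as np

# hypothetical ground-truth spike times of the recorded neuron
ground_truth = np.array([1200, 5300, 9800, 14200])
np.save('mydata/mydata.npy', ground_truth)
```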
### Multi Unit Activity[¶](#multi-unit-activity)

In some cases, performing the spike sorting may be overkill, for example to quickly check if (and where) you have activity in your recordings, and/or if you are just interested in the macroscopic activity of your tissue. Keep in mind that for some scientific questions, spike sorting may not be necessary. However, when data are large, it can still be complicated to simply get the times at which you have putative spikes (i.e. threshold crossings). This is why, with SpyKING CIRCUS, you can quickly get what we call Multi-Unit Activity (MUA), i.e. the times at which you have threshold crossings on every channel. Note however that the denser the probe, the more you will overestimate the real MUA, because of spikes being counted multiple times.

You can use the `thresholding` method of the software; to launch it, a typical workflow (assuming you want to filter and whiten the data first) will be:

```
>> spyking-circus path/mydata.extension -m filtering,whitening,thresholding -c N
```

Note This thresholding step will produce a file `mydata/mydata.mua.hdf5` in which you will have one entry per electrode, with all the times at which a threshold crossing has been detected on the channels. You can also retrieve the values of the signal at the corresponding times, so that you can visualize the histogram of the amplitudes. This can be used to quickly observe if and where you have activity.

### Details of the algorithm[¶](#details-of-the-algorithm)

The full details of the algorithm have not been published yet, so we will only draft here the key principles and describe the ideas behind the four key steps of the algorithm. If you cannot wait and really would like to know more about all its parameters, please get in touch with <EMAIL>

Note A full publication showing details/results of the algorithm is available at <http://biorxiv.org/content/early/2016/08/04/067843>

#### Filtering[¶](#filtering)

In this first step, nothing incredibly fancy is happening. All the channels are high-pass filtered in order to remove fluctuations, and to do so, we use a classical third order Butterworth filter. This step is required for the algorithm to work.

Raw vs. Filtered data

#### Whitening[¶](#whitening)

In this step, we are removing the spurious spatio-temporal correlations that may exist between all the channels. By detecting temporal periods in the data without any spikes, we compute a spatial matrix and a temporal filter that whiten the data. This is a key step in most signal processing algorithms.

Warning Because of this transformation, all the templates and data that are seen afterwards in the [MATLAB](http://fr.mathworks.com/products/matlab/) GUI are in fact seen in this whitened space.

Spatial matrix to perform the whitening of the data for 24 electrodes

#### Clustering[¶](#clustering)

This is the main step of the algorithm, the one that allows it to perform a good clustering in a high dimensional space, with a smart sub-sampling.

##### A divide and conquer approach[¶](#a-divide-and-conquer-approach)

First, we split the problem by pooling spikes per electrode, such that we can perform *N* independent clusterings (one per electrode), instead of a giant one. By doing so, the problem becomes intrinsically parallel, and one could easily use MPI to split the load over several nodes.

Every spike is assigned to only one given electrode, such that we can split the clustering problem into *N* independent clusterings.

##### A smart and robust clustering[¶](#a-smart-and-robust-clustering)

We expanded on a recent clustering technique [[<NAME>, 2014]](http://www.sciencemag.org/content/344/6191/1492.short) and designed a fully automated method for clustering the data without being biased by density peaks. In fact, the good point about the template matching approach that we are using is that we just need the *averaged* waveforms, so we don't need to perform a clustering on all the spikes. Therefore, we can cluster only on a subset of all the spikes. The key point is to get a correct subset. Imagine that you have two cells next to the same electrode, but one firing way more than the other.
If you are just subsampling by picking random spikes next to that electrode, you are likely to miss the under-represented neuron. The code is able to solve this issue, and performs what we call a *smart* search of spikes in order to subsample. Details should be published soon.

Clustering with smart subsampling in a high dimensional space, leading to spatio-temporal templates for spiking activity triggered on the recording electrodes

#### Fitting[¶](#fitting)

The fitting procedure is a greedy template matching algorithm, inspired by the following publication [[Marre et al, 2012]](http://www.jneurosci.org/content/32/43/14859.abstract). The signal is reconstructed as a linear sum of the templates, and therefore it can solve the problem of overlapping spikes. The good point of such an algorithm is that small temporal chunks can be processed individually (allowing the load to be split among several computing units), and that most of the operations performed are matrix operations, which can thus gain a lot from the computing power of modern GPUs.

Raw trace on a given electrode and superimposed templates in red. Each time the detection threshold (dash-dotted line) is crossed, the code looks up the dictionary of templates to see if a match can be found.
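Schematically (this is a sketch of the idea, not the published formulation), denoting by \(T_k\) the \(k\)-th template, the fit explains the signal \(s(t)\) as \(s(t) \approx \sum_{k} \sum_{j} a_{k,j}\, T_k(t - t_{k,j})\), where the \(t_{k,j}\) are the fitted spike times of template \(k\), and the \(a_{k,j}\) the corresponding amplitudes (the points displayed in the amplitude panel of the GUI, and stored in the result files described below).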
### Generated Files[¶](#generated-files)

In this section, we will review the different files that are generated by the algorithm, and at the end of which step. In all the following, we will assume that the data are `path/mydata.extension`. All data are generated in the path `path/mydata/`. To know more about what is performed during the different steps of the algorithm, please see [details on the algorithm](index.html#document-advanced/algorithm), or wait for the publication.

#### Whitening[¶](#whitening)

At the end of that step, a single [HDF5](https://www.hdfgroup.org/HDF5/) file `mydata.basis.hdf5` is produced, containing several objects:

> * `/thresholds` the *N* thresholds, for all *N* electrodes. Note that values are positive, and should be multiplied by the threshold parameter in the configuration file (see [documentation on parameters](index.html#document-code/config))
> * `/spatial` The spatial matrix used for whitening the data (size *N* x *N*)
> * `/temporal` The temporal filter used for whitening the data (size *Nt*, if *Nt* is the temporal width of the template)
> * `/proj` and `/rec` The projection matrix obtained by PCA, and also its inverse, to represent a single waveform (size *Nt* x *F*, if *F* is the number of features kept (5 by default))
> * `/waveforms` 1000 randomly chosen waveforms over all channels

#### Clustering[¶](#clustering)

At the end of that step, several files are produced:

* `mydata.clusters.hdf5` A [HDF5](https://www.hdfgroup.org/HDF5/) file that encapsulates a lot of information about the clusters, for every electrode: which points were selected, the spike times of those points, the labels assigned by the clustering, and also the rho and delta values resulting from the clustering algorithm used [[<NAME>, 2014]](http://www.sciencemag.org/content/344/6191/1492.short). To be more precise, the file has the following fields:

> + `/data_i`: the data points collected on electrode *i*, after PCA
> + `/clusters_i`: the labels of those points after clustering
> + `/times_i`: the spike times of those points
> + `/debug_i`: a 2D array with the rhos and deltas for those points (see clustering algorithm)
> + `/electrodes`: an array with the preferred electrodes of all *K* templates

* `mydata.templates.hdf5` A [HDF5](https://www.hdfgroup.org/HDF5/) file storing all the templates, and also their orthogonal projections. So this matrix has a size that is twice the number of templates, *2K*; only the first *K* elements are the real templates. Note also that every template has a given range of allowed amplitudes `limits`, and we also save the norms `norms` for internal purposes. To be more precise, the file has the following fields:

> + `/temp_shape`: the dimension of the template matrix *N* x *Nt* x *2K*, if *N* is the number of electrodes, *Nt* the temporal width of the templates, and *K* the number of templates. Only the first *K* components are real templates
> + `/temp_x`: the x values to reconstruct the sparse matrix
> + `/temp_y`: the y values to reconstruct the sparse matrix
> + `/temp_data`: the values to reconstruct the sparse matrix
> + `/norms`: the *2K* norms of all templates
> + `/limits`: the *K* limits [amin, amax] of the real templates
> + `/maxoverlap`: a *K* x *K* matrix with only the maximum value of the overlaps across the temporal dimension
> + `/maxlag`: a *K* x *K* matrix with the indices leading to the `maxoverlap` values obtained. In a nutshell, for all pairs of templates, those are the temporal shifts leading to the maximum of the cross-correlation between templates

* `mydata.overlap.hdf5` A [HDF5](https://www.hdfgroup.org/HDF5/) file used internally during the fitting procedure. This file can be pretty big, and is also saved using a sparse structure. To be more precise, the file has the following fields:

> + `/over_shape`: the dimension of the overlap matrix *2K* x *2K* x *2Nt - 1*, if *K* is the number of templates, and *Nt* the temporal width of the templates
> + `/over_x`: the x values to reconstruct the sparse matrix
> + `/over_y`: the y values to reconstruct the sparse matrix
> + `/over_data`: the values to reconstruct the sparse matrix

#### Fitting[¶](#fitting)

At the end of that step, a single [HDF5](https://www.hdfgroup.org/HDF5/) file `mydata.result.hdf5` is produced, containing several objects:

> * `/spiketimes/temp_i` for a template *i*, the times at which this particular template has been fitted.
> * `/amplitudes/temp_i` for a template *i*, the amplitudes used at the given spike times. Note that those amplitudes have two components, but only the first one is relevant. The second one is the one used for the orthogonal template, and does not need to be analyzed.
> * `/gspikes/elec_i` if the `collect_all` mode was activated, then for electrode *i*, the times at which spikes peaking there have not been fitted.
> * `/mse` if the `mse_error` mode was activated, a 2D array with time in the first column, and the normalized mean squared error between the raw signal and the reconstruction in the second

Note Spike times are saved in time steps

#### Thresholding[¶](#thresholding)

At the end of the thresholding step, a single [HDF5](https://www.hdfgroup.org/HDF5/) file `mydata.mua.hdf5` is produced, containing several objects:

> * `/spiketimes/elec_i` for the electrode *i*, the times at which there was a threshold crossing, thus MUA activity.
> * `/amplitudes/elec_i` for the electrode *i*, the amplitudes of the signal at the given times.
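As a sketch, these entries can be read back with `h5py` (replace `i` by an actual electrode number; the plotting is just a suggestion, in the pylab style used by the example scripts later in this document):

```
import h5py
from pylab import *

with h5py.File('mydata/mydata.mua.hdf5', 'r') as f:
    spikes = f['spiketimes/elec_i'][:]   # threshold crossings, in time steps
    amps = f['amplitudes/elec_i'][:]     # signal values at those times

hist(amps, 100)  # histogram of the amplitudes, to see if/where there is activity
```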
#### Converting[¶](#converting)

At the end of that step, several [numpy](http://www.numpy.org/) files are produced in a path `path/mydata.GUI`. They are all related to [phy](https://github.com/cortex-lab/phy), so see the devoted documentation.

### GUI without SpyKING CIRCUS[¶](#gui-without-spyking-circus)

#### MATLAB[¶](#matlab)

You may need to launch the MATLAB GUI on a personal laptop, where the data were not processed by the software itself, where you only have [MATLAB](http://fr.mathworks.com/products/matlab/) and SpyKING CIRCUS is not installed. This is feasible with the following procedure:

> * Copy the result folder `mydata` onto your computer
> * Create a MATLAB mapping for the probe you used, i.e. `mapping.hdf5` (see the procedure below to create it)
> * Open [MATLAB](http://fr.mathworks.com/products/matlab/)
> * Set the folder `circus/matlab_GUI` as the default path
> * Launch the following command `SortingGUI(sampling, 'mydata/mydata', '.mat', 'mapping.hdf5', 2)`

You just need to copy the following code snippet into a file `generate_mapping.py`:

```
import sys, os, numpy, h5py

probe_file = os.path.abspath(sys.argv[1])

def generate_matlab_mapping(probe):
    p = {}
    positions = []
    nodes = []
    for key in probe['channel_groups'].keys():
        p.update(probe['channel_groups'][key]['geometry'])
        nodes += probe['channel_groups'][key]['channels']
        positions += [p[channel] for channel in probe['channel_groups'][key]['channels']]
    idx = numpy.argsort(nodes)
    positions = numpy.array(positions)[idx]

    t = "mapping.hdf5"
    cfile = h5py.File(t, 'w')
    to_write = {'positions': positions/10.,
                'permutation': numpy.sort(nodes),
                'nb_total': numpy.array([probe['total_nb_channels']])}
    for key in ['positions', 'permutation', 'nb_total']:
        cfile.create_dataset(key, data=to_write[key])
    cfile.close()
    return t

probe = {}
with open(probe_file, 'r') as f:
    probetext = f.read()
    exec(probetext, probe)

mapping = generate_matlab_mapping(probe)
```

And then simply launch:

```
>> python generate_mapping.py yourprobe.prb
```

Once this is done, you should see a file `mapping.hdf5` in the directory where you launched the command. This is the [MATLAB](http://fr.mathworks.com/products/matlab/) mapping.

Note If you do not have `h5py` installed on your machine, launch this script on the machine where SpyKING CIRCUS has been launched

#### phy[¶](#phy)

After the `converting` step, you must have a folder `mydata/mydata.GUI`. You simply need to copy this folder onto a computer without SpyKING CIRCUS, but with [phy](https://github.com/cortex-lab/phy) and [phylib](https://github.com/cortex-lab/phylib) installed. In this folder, you should see a file `params.py`, generated during the `converting` step. So you simply need to go to this folder, and launch from a terminal:

```
>> phy template-gui params.py
```

If the raw data are not found, the Traceview will not be displayed. If you really want to see that view, remember that you need the raw data **filtered**, so you must also copy them back from your sorting machine.

### Example scripts[¶](#example-scripts)

On this page, you will find very simple examples of scripts to load/play a bit with the raw results, either in Python or in MATLAB. This is not exhaustive; it is simply an example to show you how you can integrate the results into your own workflow.

Warning Note that in Python, template (i.e. cell) indices start at 0, while they start at 1 in MATLAB.
#### Display a template[¶](#display-a-template)

If you want to display a particular template *i*, as a 2D matrix of size \(N_e\) x \(N_t\) (respectively the number of channels and the temporal width of your template):

##### Python[¶](#python)

```
from circus.shared.parser import CircusParser
from circus.shared.files import load_data
from pylab import *

params = CircusParser('silico_0.dat')
params.get_data_file()
N_e = params.getint('data', 'N_e')          # The number of channels
N_t = params.getint('detection', 'N_t')     # The temporal width of the template
templates = load_data(params, 'templates')  # To load the templates
temp_i = templates[:, i].toarray().reshape(N_e, N_t)  # To read the template i as a 2D matrix
imshow(temp_i, aspect='auto')
```

##### Matlab[¶](#matlab)

```
tmpfile = 'yourdata/yourdata.templates.hdf5';
templates_size = double(h5read(tmpfile, '/temp_shape'));
N_e = templates_size(2);
N_t = templates_size(1);
temp_x = double(h5read(tmpfile, '/temp_x') + 1);
temp_y = double(h5read(tmpfile, '/temp_y') + 1);
temp_z = double(h5read(tmpfile, '/temp_data'));
templates = sparse(temp_x, temp_y, temp_z, N_e*N_t, templates_size(3));
templates_size = [templates_size(1) templates_size(2) templates_size(3)/2];
% tmpnum is the index (1-based) of the template to display
temp_i = full(reshape(templates(:, tmpnum), N_t, N_e));
imshow(temp_i)
```

#### Compute ISI[¶](#compute-isi)

If you want to compute the inter-spike intervals of cell *i*:

##### Python[¶](#id1)

```
import numpy
from circus.shared.parser import CircusParser
from circus.shared.files import load_data
from pylab import *

params = CircusParser('yourdatafile.dat')
results = load_data(params, 'results')
spikes = results['spiketimes']['temp_i']
isis = numpy.diff(spikes)
hist(isis)
```

##### Matlab[¶](#id2)

```
tmpfile = 'yourdata/yourdata.results.hdf5';
spikes = double(h5read(tmpfile, '/spiketimes/temp_i'));
isis = diff(spikes);
hist(isis)
```

#### Display the amplitude over time for a given template[¶](#display-the-amplitude-over-time-for-a-given-template)

If you want to show a plot of cell *i* spike times vs. amplitudes:

##### Python[¶](#id3)

```
from circus.shared.parser import CircusParser
from circus.shared.files import load_data
from pylab import *

params = CircusParser('yourdatafile.dat')
results = load_data(params, 'results')
spikes = results['spiketimes']['temp_i']
amps = results['amplitudes']['temp_i'][:, 0]  # The second column contains the amplitudes for the orthogonal templates, not needed
plot(spikes, amps, '.')
```

##### Matlab[¶](#id4)

```
tmpfile = 'yourdata/yourdata.results.hdf5';
spikes = double(h5read(tmpfile, '/spiketimes/temp_i'));
amps = double(h5read(tmpfile, '/amplitudes/temp_i'));
amps = amps(:, 1);
plot(spikes, amps, '.')
```

### Launching the tests[¶](#launching-the-tests)

The code now has a dedicated test suite, which will not only test that the code can be launched, but will also perform some stress tests that should convince you that the code is doing things right. In order to launch the tests, you simply need to do:

```
>> nosetests tests/
```

If you have `nose` installed. You can also launch only some particular tests:

```
>> nosetests tests/test_complete_workflow.py
```

Note The test suite takes some time, because various datasets are generated and processed, so you should not be in a hurry.

#### What is performed[¶](#what-is-performed)

When you launch the tests, the code will generate a completely artificial dataset of 5min at 20kHz, composed of some templates with Gaussian noise, on 30 channels. This source dataset is saved in `tests/data/data.dat`.
Note If you copy your own dataset into `tests/data`, then the tests will use it!

#### What to see[¶](#what-to-see)

At the end of every test, some particular datasets generated using the `benchmarking` mode are stored in `tests/synthetic/`, and plots are generated in `tests/plots/`.

Plots of the tests for the complete workflow. 25 templates at various rates/amplitudes are injected into the source dataset, and the performance is shown here.

### BEER estimate[¶](#beer-estimate)

#### Validating[¶](#validating)

The code comes with an integrated way to measure the optimal performance of any spike sorting algorithm, given the spike times of a ground truth neuron present in the recording. This can be used by doing:

```
>> spyking-circus mydata.extension -m validating
```

To use it, you need to have, if your datafile is `mydata.extension`, a file named `mydata/mydata.juxta.dat` which is the juxta-cellular signal recorded next to your extracellular channels. Note that if you simply have the spike times, there is a way to bypass this.

#### BEER estimate[¶](#id1)

In a nutshell, to quantify the performance of the software with real ground-truth recordings, the code can compute the Best Ellipsoidal Error Rate (BEER), as described in [[Harris et al, 2000]](http://robotics.caltech.edu/~zoran/Reading/buzsaki00.pdf). This BEER estimate gives an upper bound on the performance of any clustering-based spike sorting method using elliptical cluster boundaries, such as the one described in our paper. After thresholding and feature extraction, the windowed segments of the trace are labelled according to whether or not they contain a true spike. Half of this labelled data set is then used to train a perceptron whose decision rule is a linear combination of all pairwise products of the features of each segment, and which is thus capable of achieving any elliptical decision boundary. This decision boundary is then used to predict the occurrence of spikes in the segments of the remaining half of the labelled data, and the success or failure of these predictions then provides an estimate of the miss and false positive rates.

The code will generate a file `mydata/mydata.beer.dat` storing all the needed information, and will produce several plots.

Distribution of the number of juxta-cellular spikes as a function of the detection thresholds (to know where it has to be defined)

ISI and mean waveforms triggered by the juxta-cellular spikes

Decision boundaries of the BEER classifier before and after learning.

The complete ROC curve for the classifier, and all the templates found by the algorithm, superimposed.

If you are interested in using such a feature, please contact us!

Known issues[¶](#known-issues)
---

In this section, you will find all the information you need about possible bugs/comments we got from users. The most common questions are listed in the FAQ, or you may have a look at more specialized sections.

### Frequently Asked Questions[¶](#frequently-asked-questions)

Here are some questions that pop up regularly. You can ask some or get answers on our Google Group <https://groups.google.com/forum/#!forum/spyking-circus-users>

* **I cannot install the software**

Note Be sure to have the latest version from the git folder.
We are doing our best to improve the packaging and to be sure that the code is working on all platforms, but please be in touch with us if you encounter any issues

* **Is it working with Python 3?**

Note Yes, the code is compatible with Python 3

* **The data displayed do not make any sense**

Note Are you sure that the data are properly loaded? (see the `data` section of the parameter file, especially `data_dtype`, `data_header`). Test everything with the preview mode by doing:

```
>> spyking-circus mydata.extension -p
```

* **Can I process single channel datasets, or data coming from not-so-dense electrodes?**

Note Yes, the code can handle spikes that occur only on a single channel, and not on a large subset. However, you may want to set the `cc_merge` parameter in the `[clustering]` section to 1, to prevent any global merges. Those global merges are indeed performed automatically by the algorithm, before the fitting phase. It assumes that templates that are similar, up to a scaling factor, can be merged because they are likely to reflect bursting neurons. But for few channels, where spatial information cannot really be used to disentangle templates, the amplitude is a key factor that you want to keep. Also, you may need to turn on the `smart_search` mode in the `clustering` section, because as you have few channels, you want to collect spikes efficiently.

* **Something is wrong with the filtering**

Note Be sure to check that you are not messing around with the `filter_done` flag, which should be changed automatically when you perform the filtering. You can read the troubleshooting section on the filtering [here](index.html#document-issues/filtering)

* **I see too many clusters, at the end, that should have been split**

Note The main parameters that you can change are `cc_merge` and `sim_same_elec` in the `[clustering]` section. They control the number of *local* (i.e. per electrode) and *global* (i.e. across the whole probe layout) merges of templates that are performed before the fitting procedure is launched. By reducing `sim_same_elec` (cannot be less than 0), you reduce the *local* merges, and by increasing `cc_merge` (cannot be more than 1), you reduce the *global* merges. A first recommendation would be to set `cc_merge` to 1. You might also want to turn on the `smart_search` parameter in the `clustering` section. This will force a smarter collection of the spikes, based on rejection methods, and thus should improve the quality of the clustering.

* **Memory usage is saturating for thousands of channels**

Note If you have a very large number of channels (>1000), then the default size of 60s for all the data blocks loaded into memory during the different steps of the algorithm may be too big. In the `whitening` section, you can at least change it by setting `chunk_size` to a smaller value (for example 10s), but this may not be enough. If you want the code to always load smaller blocks during all steps of the algorithm (`clustering`, `filtering`), then you need to add this `chunk_size` parameter to the `data` section.

* **How do I read the templates in Python?**

Note Templates are saved as a sparse matrix, but you can easily get access to them.
For example, if you want to read the template *i*, you have to do:

```
from circus.shared.parser import CircusParser
from circus.shared.files import load_data

params = CircusParser('yourdatafile.dat')
N_e = params.getint('data', 'N_e')          # The number of channels
N_t = params.getint('detection', 'N_t')     # The temporal width of the template
templates = load_data(params, 'templates')  # To load the templates
temp_i = templates[:, i].toarray().reshape(N_e, N_t)  # To read the template i as a 2D matrix
```

To know more about how to play with the data, and build your own analysis, either in Python or [MATLAB](http://fr.mathworks.com/products/matlab/), you can go to our [dedicated section on analysis](index.html#document-advanced/analysis)

* **After merging templates with the Meta Merging GUI, waveforms are not aligned**

Note By default, the merges do not correct for the temporal lag that may exist between two templates. For example, if you are detecting both positive and negative peaks in your recordings, you may end up with time-shifted copies of the same template. This is because, if the template is large enough to cross both the positive and negative thresholds at the same time, the code will collect both positive and negative spikes, leading to two misaligned copies of the same template. We are doing our best, at the end of the clustering step, to automatically merge those duplicates based on the cross-correlation (see parameter `cc_merge`). However, if the lag between the two extrema is too large, or if they are slightly different, the templates may not be fused. This situation will bring a graphical issue in the [phy](https://github.com/cortex-lab/phy) GUI while reviewing the result: if the user decides in the Meta Merging GUI to merge the templates, the waveforms will not be properly aligned. To deal with that, you simply need to set the `correct_lag` parameter in the `[merging]` section to `True`. Note that such a correction cannot be done for merges performed in [phy](https://github.com/cortex-lab/phy).

### Filtering[¶](#filtering)

The filtering is performed once, on the data, without any copy. This has pros and cons. The pro is that this allows the code to be faster, avoiding filtering the data on the fly each time temporal chunks are loaded. The con is that the user has to be careful about how this filtering is done.

#### Wrong parameters[¶](#wrong-parameters)

If you filled the parameter file with incorrect values, either for the data type, the header, or even the number of channels (i.e. with a wrong probe file), then the filtering is likely to output wrong data in the file itself. If you are facing issues with the code, always be sure that the information displayed by the algorithm before any operations is correct, and that the data are correctly read. To be sure, use the preview GUI before launching the whole algorithm (see [Python GUI](index.html#document-GUI/python)):

```
>> spyking-circus mydata.extension -p
```

#### Interruption of the filtering[¶](#interruption-of-the-filtering)

The filtering is performed in parallel by several nodes, each of them in charge of a subset of all the temporal chunks. This means that if any of them fails because of a crash, or if the filtering is interrupted by any means, then you have to copy the entire raw file again and start over.
Otherwise, you are likely to filter some subparts of the data twice, leading to wrong results.

#### Flag filter_done[¶](#flag-filter-done)

To let the code know that the filtering has been performed, you will notice at the bottom of the configuration file a flag `filter_done` that is `False` by default, but becomes `True` only after the filtering has been performed. As long as this parameter file is kept along with your data, the algorithm, if relaunched, will not refilter the file.

Warning If you delete the configuration file, but want to keep the same filtered data, then think about setting this flag manually to `True`

### Whitening[¶](#whitening)

#### No silences are detected[¶](#no-silences-are-detected)

This section should be pretty robust, and the only error that you could get is a message saying that no silences were detected. If this is the case, it is likely that the parameters are wrong, and that the data are not properly understood. Be sure that your data are properly loaded by using the preview mode:

```
>> spyking-circus mydata.extension -p
```

If the data are properly loaded, please try to reduce the `safety_time` value. If still no silences are detected, then your data may not be properly loaded.

#### Whitening is disabled because of NaNs[¶](#whitening-is-disabled-because-of-nans)

Again, this should be rare, and if this warning happens, you may try to get rid of it by changing the parameters of the `whitening` section. Try for example to increase `safety_time` to `3`, or try to change the value of `chunk_size`. We may enhance the robustness of the whitening in future releases.

Citations[¶](#citations)
---

### How to cite SpyKING CIRCUS[¶](#how-to-cite-spyking-circus)

Note If you are using SpyKING CIRCUS for your project, please cite us

* <NAME>., <NAME>, <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., *A spike sorting toolbox for up to thousands of electrodes validated with ground truth recordings in vitro and in vivo*, eLife 2018;7:e34518

### Publications referring to SpyKING CIRCUS[¶](#publications-refering-to-spyking-circus)

Here is a non-exhaustive list of papers using SpyKING CIRCUS. Do not hesitate to send us a mail in order to update this list; the more the merrier.

#### 2021[¶](#id1)

* Kajiwara, M. et al, *Inhibitory neurons exhibit high controlling ability in the cortical microconnectome*, PLOS Computational Biology, 17(4), e1008846
* Sans-Dublanc, A. et al, *Optogenetic fUSI for brain-wide mapping of neural activity mediating collicular-dependent behaviors*, Neuron
* <NAME>. and <NAME>., *Electrophysiological analysis of brain organoids: current approaches and advancements*, Frontiers in Neuroscience, 14, 1405.
* <NAME>. et al, *A facile and comprehensive algorithm for electrical response identification in mouse retinal ganglion cells*, Plos one, 16(3), e0246547.
* <NAME> et al, *Retinal Ganglion Cells Functional Changes in a Mouse Model of Alzheimer’s Disease Are Linked with Neurotransmitter Alterations*, Journal of Alzheimer’s Disease, (Preprint), 1-14.
* <NAME>. et al, *ELVISort: encoding latent variables for instant sorting, an artificial intelligence-based end-to-end solution*, Journal of Neural Engineering, 18(4), 046033.
* Provansal, M. et al, *Functional ultrasound imaging of the spreading activity following optogenetic stimulation of the rat visual cortex*, bioRxiv.
* Saif-ur-Rehman, M. et al, *SpikeDeep-Classifier: A deep-learning based fully automatic offline spike sorting algorithm*, Journal of Neural Engineering, 18(1), 016009.
Citations[¶](#citations) --- ### How to cite SpyKING CIRCUS[¶](#how-to-cite-spyking-circus) Note If you are using SpyKING CIRCUS for your project, please cite us * <NAME>., <NAME>, <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., *A spike sorting toolbox for up to thousands of electrodes validated with ground truth recordings in vitro and in vivo*, eLife 2018;7:e34518 ### Publications referring to SpyKING CIRCUS[¶](#publications-refering-to-spyking-circus) Here is a non-exhaustive list of papers using SpyKING CIRCUS. Do not hesitate to send us a mail in order to update this list; the more the merrier. #### 2021[¶](#id1) * Kajiwara, M. et al, *Inhibitory neurons exhibit high controlling ability in the cortical microconnectome*, PLOS Computational Biology, 17(4), e1008846 * Sans-Dublanc, A. et al, *Optogenetic fUSI for brain-wide mapping of neural activity mediating collicular-dependent behaviors*, Neuron * <NAME>. et <NAME>., *Electrophysiological analysis of brain organoids: current approaches and advancements*, Frontiers in Neuroscience, 14, 1405. * <NAME>. et al, *A facile and comprehensive algorithm for electrical response identification in mouse retinal ganglion cells*, Plos one, 16(3), e0246547. * <NAME> et al, *Retinal Ganglion Cells Functional Changes in a Mouse Model of Alzheimer’s Disease Are Linked with Neurotransmitter Alterations*, Journal of Alzheimer’s Disease, (Preprint), 1-14. * <NAME>. et al, *ELVISort: encoding latent variables for instant sorting, an artificial intelligence-based end-to-end solution*, Journal of Neural Engineering, 18(4), 046033. * Provansal, M. et al, *Functional ultrasound imaging of the spreading activity following optogenetic stimulation of the rat visual cortex*, bioRxiv. * Saif-ur-Rehman, M. et al, *SpikeDeep-Classifier: A deep-learning based fully automatic offline spike sorting algorithm*, Journal of Neural Engineering, 18(1), 016009. * Sedaghat-Nejad, E. et al, *P-sort: an open-source software for cerebellar neurophysiology*, bioRxiv. * <NAME>. et al, *Parallel processing of natural images by overlapping retinal neuronal ensembles*, bioRxiv. * <NAME>. et al, *Evaluation and resolution of many challenges of neural spike-sorting: a new sorter*, bioRxiv. * <NAME>. et al, *Predicting synchronous firing of large neural populations from sequential recordings*, PLoS computational biology, 17(1), e1008501. * Malfatti, T. et al, *Activity of CaMKIIa+ dorsal cochlear nucleus neurons are crucial for tinnitus perception but not for tinnitus induction*, bioRxiv. * Waschke, L. et al, *Behavior needs neural variability*, Neuron. * <NAME>. et <NAME>., *Improving scalability in systems neuroscience*, Neuron. #### 2020[¶](#id2) * <NAME>. et al, *Desflurane Anesthesia Alters Cortical Layer–specific Hierarchical Interactions in Rat Cerebral Cortex*, Anesthesiology: The Journal of the American Society of Anesthesiologists, 132(5), 1080-1090. * <NAME>. et al, *Ethanol Alters Variability, But Not Rate, of Firing in Medial Prefrontal Cortex Neurons of Awake‐Behaving Rats*, Alcoholism: Clinical and Experimental Research, 44(11), 2225-2238. * Z<NAME>. et al, *Network Dynamics in the Developing Piriform Cortex of Unanesthetized Rats*, Cerebral Cortex. * Lee H. et al, *Differential effect of anesthesia on visual cortex neurons with diverse population coupling*. Neuroscience. * <NAME>., et al, *Fronto-Temporal Coupling Dynamics During Spontaneous Activity and Auditory Processing in the Bat Carollia perspicillata*, Frontiers in systems neuroscience, 14, 14. * <NAME> et al, *Neural oscillations in the fronto-striatal network predict vocal output in bats*, PLoS biology, 18(3), e3000658. * Lee H. et al, *State-dependent cortical unit activity reflects dynamic brain state transitions in anesthesia*, Journal of Neuroscience, 40(49), 9440-9454. * <NAME>. et al, *Recurrent circuitry is required to stabilize piriform cortex odor representations across brain states*, Elife, 9, e53125. * Jin M. et al, *Mouse higher visual areas provide both distributed and discrete contributions to visually guided behaviors*, bioRxiv, 001446 * <NAME>, et al., *CellExplorer: a graphical user interface and standardized pipeline for visualizing and characterizing single neuron features*, bioRxiv, 083436 * Kajiwara M. et al., *Inhibitory neurons are a Central Controlling regulator in the effective cortical microconnectome*, bioRxiv, 954016 * <NAME>. et al., *SpikeForest: reproducible web-facing ground-truth validation of automated neural spike sorters*, bioRxiv, 900688 * <NAME>., et al. *Deep Learning-Based Template Matching Spike Classification for Extracellular Recordings*, Applied Sciences 10.1 (2020): 301 * <NAME>., et al. *EZcalcium: Open Source Toolbox for Analysis of Calcium Imaging Data*, bioRxiv, 893198 * Estaban<NAME>. et al., *Sensorimotor neuronal learning requires cortical topography*, bioRxiv, 873794 * <NAME>. et al., *Robust odor coding across states in piriform cortex requires recurrent circuitry: evidence for pattern completion in an associative network*, bioRxiv, 694331 * Yuan et al., *Versatile live-cell activity analysis platform for characterization of neuronal dynamics at single-cell and network level*, bioRxiv, 071787 * <NAME>.
et al., *SpikeInterface, a unified framework for spike sorting*, bioRxiv, 796599 * García-<NAME>. et al., *Fronto-temporal coupling dynamics during spontaneous activity and auditory processing*, bioRxiv, 886770 #### 2019[¶](#id3) * Frazzini V. et al., *In vivo interictal signatures of human periventricular heterotopia*, bioRxiv, 816173 * Abbasi A et al., *Sensorimotor neuronal learning requires cortical topography*, bioRxiv 873794 * <NAME>. et al., *Enhanced representation of natural sound sequences in the ventral auditory midbrain*, bioRxiv 846485 * Chong E. et al., *Manipulating synthetic optogenetic odors reveals the coding logic of olfactory perception*, bioRxiv 841916 * <NAME>. et al., *Robust odor coding across states in piriform cortex requires recurrent circuitry: evidence for pattern completion in an associative network*, bioRxiv 694331 * Szőnyi A. et al., *Median raphe controls acquisition of negative experience in the mouse*, Science Vol 366, Issue 6469 * <NAME>. and <NAME>., *MEArec: a fast and customizable testbench simulator for ground-truth extracellular spiking activity*, bioRxiv, 691642 * <NAME>., <NAME>., et Bertrand, A., *SHYBRID: A graphical tool for generating hybrid ground-truth spiking data for evaluating spike sorting performance*, bioRxiv, 734061 * <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>. et <NAME>., *Multi-shanks SiNAPS Active Pixel Sensor CMOS probe: 1024 simultaneously recording channels for high-density intracortical brain mapping*, bioRxiv, 749911 * <NAME>., <NAME>. & <NAME>., *Fronto-striatal oscillations predict vocal output in bats*, bioRxiv, 724112 * <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., *Pattern recovery by recurrent circuits in piriform cortex*, bioRxiv 694331; doi: <https://doi.org/10.1101/694331> * <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., *A projection specific logic to sampling visual inputs in mouse superior colliculus*, bioRxiv 272914; doi: <https://doi.org/10.1101/272914> * <NAME>., et al., *Fine-scale mapping of cortical laminar activity during sleep slow oscillations using high-density linear silicon probes*, Journal of neuroscience methods 316: 58-70 * <NAME>., et al. *µSpikeHunter: An advanced computational tool for the analysis of neuronal communication and action potential propagation in microfluidic platforms*, Scientific reports 9.1: 5777 * Angotzi, <NAME>, et al. *SiNAPS: An implantable active pixel sensor CMOS-probe for simultaneous large-scale neural recordings*, Biosensors and Bioelectronics 126: 355-364. * <NAME>., et al. *Discovering precise temporal patterns in large-scale neural recordings through robust and interpretable time warping*, bioRxiv: 661165 * <NAME>., <NAME>., <NAME>., *Scaling Spike Detection and Sorting for Next-Generation Electrophysiology*, In Vitro Neuronal Networks. Springer, Cham 171-184. * <NAME>., and <NAME>., *Continuing progress of spike sorting in the era of big data*, Current opinion in neurobiology 55: 90-96 * <NAME>., <NAME>., <NAME>., <NAME>., *Spike sorting with Gaussian mixture models*, Scientific reports, 9(1), 3627 * <NAME>., <NAME>., <NAME>., *Modeling the correlated activity of neural populations: A review*, Neural computation, 31(2), 233-269. * <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., *Reliability of motor and sensory neural decoding by threshold crossings for intracortical brain–machine interface*, Journal of neural engineering.
* <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., *Neuronal spiking activity highlights a gradient of epileptogenicity in human tuberous sclerosis lesions*, Clinical Neurophysiology, 130(4), 537-547. * <NAME>., <NAME>., <NAME>., *A data-driven regularization approach for template matching in spike sorting with high-density neural probes*, In Proceedings of IEEE EMBC. IEEE. * <NAME>., <NAME>., <NAME>., <NAME>., *Robust Online Spike Recovery for High-Density Electrode Recordings using Convolutional Compressed Sensing*. In 2019 9th International IEEE/EMBS Conference on Neural Engineering (NER) (pp. 1015-1020). IEEE. * <NAME>., <NAME>., <NAME>., <NAME>., *From serial to parallel: predicting synchronous firing of large neural populations from sequential recordings*, bioRxiv, 560656. * <NAME>., <NAME>., *Open-Source Tools for Processing and Analysis of In Vitro Extracellular Neuronal Signals. In In Vitro Neuronal Networks* (pp. 233-250). Springer, Cham. * <NAME>., <NAME>., <NAME>., *Signal-to-peak-interference ratio maximization with automatic interference weighting for threshold-based spike sorting of high-density neural probe data*, In International IEEE/EMBS Conference on Neural Engineering: [proceedings]. International IEEE EMBS Conference on Neural Engineering. IEEE. #### 2018[¶](#id4) * <NAME>., *Large-scale neuron cell classification of single-channel and multi-channel extracellular recordings in the anterior lateral motor cortex*, bioRxiv 445700; doi: <https://doi.org/10.1101/445700> * <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., *Whole-Brain Functional Ultrasound Imaging Reveals Brain Modules for Visuomotor Integration*, Neuron, 5:1241-1251 * <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., *Locomotion modulates specific functional cell types in the mouse visual thalamus*, Nature Communications, 4882 (2018) * <NAME>., <NAME>., *D.sort: template based automatic spike sorting tool*, BioRxiv, 10.1101/423913 * <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., *A fully automated spike sorting algorithm using t-distributed neighbor embedding and density based clustering*, BioRxiv, 10.1101/418913 * <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>, *Separating intrinsic interactions from extrinsic correlations in a network of sensory neurons*, BioRxiv, 10.1101/243816 * <NAME>., <NAME>, <NAME>., *Neuronal adaptation reveals a suboptimal decoding of orientation tuned populations in the mouse visual cortex*, BioRxiv, 10.1101/433722 * <NAME>., <NAME>., *Contribution of sensory encoding to measured bias*, BioRxiv, 10.1101/444430 * <NAME>., <NAME>., <NAME>., *Neural activity classification with machine learning models trained on interspike interval series data*, arXiv, 1810.03855 * <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>. *Decoupling of timescales reveals sparse convergent CPG network in the adult spinal cord*, bioRxiv, 402917 * <NAME>, <NAME>, <NAME>, *A novel and fully automatic spike sorting implementation with variable number of features*, J Neurophysiol.
10.1152/jn.00339.2018 * <NAME>., <NAME>, <NAME>., <NAME>, *Speed-Selectivity in Retinal Ganglion Cells is Modulated by the Complexity of the Visual Stimulus*, BioRxiv, 350330 * <NAME>, <NAME>., <NAME>, *Towards online spike sorting for high-density neural probes using discriminative template matching with suppression of interfering spikes*, Journal of Neural Engineering, 1741-2552 * <NAME>., <NAME>., <NAME>., <NAME>., *Supra-barrel Distribution of Directional Tuning for Global Motion in the Mouse Somatosensory Cortex*, Cell Reports 22, 3534–3547 * <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., *Hippocampal Network Dynamics during Rearing Episodes*, Cell Reports, 23(6):1706-1715 * <NAME>., <NAME>., <NAME>., <NAME>., *Challenges and opportunities for large-scale electrophysiology with Neuropixels probes*, Current Opinion in Neurobiology, Volume 50, 92-100 * <NAME>., <NAME>., <NAME>., <NAME>, *A transformation from temporal to ensemble coding in a model of piriform cortex*, eLife, 10.7554/eLife.34831 * <NAME>., <NAME>., *Recurrent cortical circuits implement concentration-invariant odor coding*, Science, 361(6407) * <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., *Functional Asymmetries between Central and Peripheral Retinal Ganglion Cells in a Diurnal Rodent*, BioRxiv, 277814 * <NAME>., <NAME>., <NAME>., *Data-driven multi-channel filter design with peak-interference suppression for threshold-based spike sorting in high-density neural probes*, IEEE International Conference on Acoustics, Speech and Signal processing (ICASSP) #### 2017[¶](#id5) * <NAME>., <NAME>., *Neural data science: accelerating the experiment-analysis-theory cycle in large-scale neuroscience*, BioRxiv, 196949 * <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., *YASS: Yet Another Spike Sorter*, BioRxiv, 151928 * <NAME>., <NAME>., <NAME>., *Model-based spike sorting with a mixture of drifting t-distributions*, Journal of Neuroscience Methods, 288, 82-98 * <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., *Multiplexed computations in retinal ganglion cells of a single type*, Nature Communications 10.1038/s41467-017-02159-y * <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., … & <NAME>. *A Fully Automated Approach to Spike Sorting*, Neuron, 95(6), 1381-1394 * <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., … & <NAME>. *Electrical stimulus artifact cancellation and neural spike detection on large multi-electrode arrays*, PLOS Computational Biology, 13(11), e1005842. * <NAME>., <NAME>, <NAME>., <NAME>., <NAME>. and <NAME>., *Sorting Overlapping Spike Waveforms from Electrode and Tetrode Recordings*, Front. Neuroinform. * <NAME>., <NAME>., <NAME>., <NAME>., *A primacy code for odor identity*, Nature Communication, 1477 * <NAME>., <NAME>., <NAME>., <NAME>., *Closed-loop estimation of retinal network sensitivity reveals signature of efficient coding*, eNeuro, ENEURO.0166-17.2017 * <NAME>., <NAME>., <NAME>., <NAME>., & <NAME>. *Spatial organization of chromatic pathways in the mouse dorsal lateral geniculate nucleus*, Journal of Neuroscience, 37(5), 1102-1116. #### 2016[¶](#id6) * <NAME>., <NAME>., & <NAME>. *T-SNE visualization of large-scale neural recordings*, bioRxiv, 087395. * <NAME>., <NAME>, <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., *Fast and accurate spike sorting in vitro and in vivo for up to thousands of electrodes*, bioRxiv, 67843
nat
cran
R
Package ‘nat.nblast’ June 14, 2023 Type Package Title NeuroAnatomy Toolbox ('nat') Extension for Assessing Neuron Similarity and Clustering Version 1.6.7 Description Extends package 'nat' (NeuroAnatomy Toolbox) by providing a collection of NBLAST-related functions for neuronal morphology comparison (Costa et al. (2016) <doi:10.1016/j.neuron.2016.06.012>). URL https://natverse.org/nat.nblast/ BugReports https://github.com/natverse/nat.nblast/issues Depends R (>= 2.15.1), rgl, methods, nat (>= 1.5.12) Imports nabor, dendroextras, plyr, spam Suggests spelling, bigmemory, ff, testthat, knitr, rmarkdown License GPL-3 LazyData yes VignetteBuilder knitr RoxygenNote 7.2.3 Language en-GB Encoding UTF-8 NeedsCompilation no Author <NAME> [aut, cre] (<https://orcid.org/0000-0002-0587-9355>), <NAME> [aut] (<https://orcid.org/0000-0001-9260-3156>) Maintainer <NAME> <<EMAIL>> Repository CRAN Date/Publication 2023-06-14 08:20:09 UTC R topics documented: nat.nblast-package, calc_dists_dotprods, calc_prob_mat, calc_score_matrix, create_scoringmatrix, diagonal, fctraces20, fill_in_sparse_score_mat, fill_pairs_sparse_score_mat, nblast, nblast_allbyall, NeuriteBlast, neuron_pairs, nhclust, plot3d.hclust, show_similarity, smat.fcwb, sparse_score_mat, subset.hclust, sub_dist_mat, sub_score_mat, WeightedNNBasedLinesetMatching, [ nat.nblast-package Neuron similarity, search and clustering tools Description nat.nblast provides tools to compare neuronal morphology using the NBLAST algorithm (Costa et al. 2016). Similarity and search The main entry point for similarity and search functions is nblast. Traced neurons will normally be converted to the dotprops format for search. When multiple neurons are compared they should be in a neuronlist object. The current NBLAST version (2) depends on a scoring matrix. Default matrices trained using Drosophila neurons in the FCWB template brain space are distributed with this package (see smat.fcwb); see the Scoring Matrices section below for creating new scoring matrices. nblast makes use of a more flexible but more complicated function NeuriteBlast which includes several additional options. The function WeightedNNBasedLinesetMatching provides the primitive functionality of finding the nearest neighbour distances and absolute dot products for two sets of segments. Neither of these functions is intended for end use. Calculating all by all similarity scores is facilitated by the nblast_allbyall function which can take either a neuronlist as input or a character vector naming (a subset of) neurons in a (large) neuronlist. The neuronlist containing the input neurons should be resident in memory, i.e. not a neuronlistfh object. Clustering Once an all by all similarity score matrix is available it can be used as the input to a variety of clustering algorithms. nhclust provides a convenient wrapper for R’s hierarchical clustering function hclust. If you wish to use another clustering function, then you can use sub_dist_mat to convert a raw similarity score matrix into a normalised distance matrix (or R dist object) suitable for clustering. If you need a similarity matrix or want to modify the normalisation then you can use sub_score_mat. Note that raw NBLAST scores are not symmetric (i.e. S(A,B) is not equal to S(B,A)) so before clustering we construct a symmetric similarity/distance matrix 1/2 * ( S(A,B)/S(A,A) + S(B,A)/S(B,B) ). See sub_score_mat’s documentation for details.
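To make this workflow concrete, here is a minimal sketch stringing together the functions named above (kcs20 is the sample Kenyon-cell data from the nat package; see the individual help pages below for the full argument lists):

library(nat.nblast)
library(nat)
data(kcs20, package = "nat")
scores <- nblast_allbyall(kcs20)    # raw all by all scores
hc <- nhclust(scoremat = scores)    # symmetrised distances + hclust
plot(hc)
# or build a dist object for use with any other clustering function
d <- sub_dist_mat(names(kcs20), scores, form = "dist")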
Cached scores Although NBLAST is fast and can be parallelised, it makes sense to cache to disk all by all similarity scores for a group of neurons that will be subject to repeated clustering or other analysis. The matrix can simply be saved to disk and then reloaded using base R functions like save and load. sub_score_mat and sub_dist_mat can be used to extract a subset of scores from this raw score matrix. For large matrices, the bigmemory or ff packages allow matrices to be stored on disk and portions loaded into memory on demand. sub_score_mat and sub_dist_mat work equally well for regular in-memory matrices and these disk-backed matrices. To give an example, for 16,129 neurons from the flycircuit.tw dataset, the 260,144,641 comparisons took about 250 hours of compute time (half a day on ~20 cores). When saved to disk as a single-precision (i.e. 4 bytes per score) ff matrix, they occupy just over 1 GB. Calculating scoring matrices The NBLAST algorithm depends on appropriately calibrated scoring matrices. These encapsulate the log odds ratio that a pair of segments come from two structurally related neurons rather than two unrelated neurons, given the observed distance and absolute dot product of the two segments. Scoring matrices can be constructed using the create_scoringmatrix function, supplying a set of matching neurons and a set of non-matching neurons. See the create_scoringmatrix documentation for links to lower-level functions that provide finer control over construction of the scoring matrix. Package Options There is one package option nat.nblast.defaultsmat which is NULL by default, but could for example be set to one of the scoring matrices included with the package such as "smat.fcwb" or to a new user-constructed matrix. References <NAME>., <NAME>., <NAME>., <NAME>., and <NAME>. (2014). NBLAST: Rapid, sensitive comparison of neuronal structure and construction of neuron family databases. bioRxiv preprint. doi:10.1101/006346. See Also nblast, smat.fcwb, nhclust, sub_dist_mat, sub_score_mat, create_scoringmatrix calc_dists_dotprods Calculate distances and dot products between two sets of neurons Description Calculate distances and dot products between two sets of neurons Usage calc_dists_dotprods( query_neurons, target_neurons, subset = NULL, ignoreSelf = TRUE, ... ) Arguments query_neurons a neuronlist to use for calculating distances and dot products. target_neurons a further neuronlist to use for calculating distances and dot products. subset a data.frame specifying which neurons in query_neurons and target_neurons should be compared, with columns specifying query and target neurons by name, with one row for each pair. If unspecified, this defaults to an all-by-all comparison. ignoreSelf a Boolean indicating whether to ignore comparisons of a neuron against itself (default TRUE). ... extra arguments to pass to NeuriteBlast. Details Distances and dot products are the raw inputs for constructing scoring matrices for the NBLAST search algorithm. Value A list, with one element for each pair of neurons: a 2 column data.frame containing one column of distances and another of absolute dot products.
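The list returned here feeds calc_prob_mat and calc_score_matrix, documented next; create_scoringmatrix (below) automates the whole pipeline. A rough sketch under stated assumptions (the distance breaks are purely illustrative and must cover the observed distances; the kcs20 subsets stand in for genuine matching and non-matching sets):

data(kcs20, package = "nat")
dd.match <- calc_dists_dotprods(kcs20[1:3])    # 'matching' pairs
dd.rand <- calc_dists_dotprods(kcs20[4:6])     # 'random' pairs
brks <- c(0, 1, 2, 5, 10, 20, 50, 100)         # illustrative distance breaks
p.match <- calc_prob_mat(dd.match, distbreaks = brks)
p.rand <- calc_prob_mat(dd.rand, distbreaks = brks)
smat <- calc_score_matrix(p.match, p.rand)     # log odds ratio scores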
calc_prob_mat Calculate probability matrix from distances and dot products between neuron segments Description Calculate probability matrix from distances and dot products between neuron segments Usage calc_prob_mat( nndists, dotprods, distbreaks, dotprodbreaks = seq(0, 1, by = 0.1), ReturnCounts = FALSE ) Arguments nndists a list of nearest-neighbour distances or a list of both nearest-neighbour distances and dot products. dotprods a list of dot products. distbreaks a vector specifying the breaks for distances in the probability matrix. dotprodbreaks a vector specifying the breaks for dot products in the probability matrix. ReturnCounts a Boolean indicating that counts should be returned instead of the default probabilities. Value A matrix with columns as specified by dotprodbreaks and rows as specified by distbreaks, containing probabilities (for the default value of ReturnCounts=FALSE) or counts (if ReturnCounts=TRUE) for finding neuron segments with the given distance and dot product. calc_score_matrix Calculate scoring matrix from probability matrices for matching and non-matching sets of neurons Description Calculate scoring matrix from probability matrices for matching and non-matching sets of neurons Usage calc_score_matrix(matchmat, randmat, logbase = 2, epsilon = 1e-06) Arguments matchmat a probability matrix given by considering ’matching’ neurons. randmat a probability matrix given by considering ’non-matching’ or ’random’ neurons. logbase the base to which the logarithm should be taken to produce the final scores. epsilon a pseudocount to prevent division by zero when constructing the log odds ratio in the probability matrix. Value A matrix with class=c("scoringmatrix", "table"), with columns as specified by dotprodbreaks and rows as specified by distbreaks, containing scores for neuron segments with the given distance and dot product. create_scoringmatrix Create a scoring matrix given matching and non-matching sets of neurons Description Calculate a scoring matrix embodying the logarithm of the odds that a matching pair of neurite segments come from a structurally related rather than random pair of neurons. This function embodies sensible default behaviours and is recommended for end users. More control is available by using the individual functions listed in See Also. Usage create_scoringmatrix( matching_neurons, nonmatching_neurons, matching_subset = NULL, non_matching_subset = NULL, ignoreSelf = TRUE, distbreaks, dotprodbreaks = seq(0, 1, by = 0.1), logbase = 2, epsilon = 1e-06, ... ) Arguments matching_neurons a neuronlist of matching neurons. nonmatching_neurons a neuronlist of non-matching neurons. matching_subset, non_matching_subset data.frames indicating which pairs of neurons in the two input neuron lists should be used to generate the matching and null distributions. See details for the default behaviour when NULL. ignoreSelf a Boolean indicating whether to ignore comparisons of a neuron against itself (default TRUE). distbreaks a vector specifying the breaks for distances in the probability matrix. dotprodbreaks a vector specifying the breaks for dot products in the probability matrix. logbase the base to which the logarithm should be taken to produce the final scores. epsilon a pseudocount to prevent division by zero when constructing the log odds ratio in the probability matrix. ... extra arguments to pass to NeuriteBlast or options for the mlply call that actually iterates over neuron pairs.
Details By default create_scoringmatrix will use all neurons in matching_neurons to create the matching distribution. This is appropriate if all of these neurons are of a single type. If you wish to use multiple types of neurons then you will need to specify a matching_subset to indicate which pairs of neurons are of the same type. By default create_scoringmatrix will use a random set of pairs from non_matching_neurons to create the null distribution. The number of random pairs will be equal to the number of matching pairs defined by matching_neurons. This is appropriate if non_matching_neurons contains a large collection of neurons of different types. You may wish to set the random seed using set.seed if you want to ensure that exactly the same (pseudo-)random pairs of neurons are used in subsequent calls. Value A matrix with columns as specified by dotprodbreaks and rows as specified by distbreaks, containing log odd scores for neuron segments with the given distance and dot product. See Also calc_score_matrix, calc_prob_mat, calc_dists_dotprods, neuron_pairs Examples # calculate scoring matrix # bring in some mushroom body neurons library(nat) data(kcs20) # convert the (connected) tracings into dotprops (point and vector) # representation, resampling at 1 micron intervals along neuron fctraces20.dps=dotprops(fctraces20, resample=1) # we will use both all kcs vs all fctraces20 and fctraces20 vs fctraces20 # as random_pairs to make the null distribution random_pairs=rbind(neuron_pairs(fctraces20), neuron_pairs(nat::kcs20, fctraces20)) # you can add .progress='natprogress' if this looks like taking a while smat=create_scoringmatrix(kcs20, c(kcs20, fctraces20.dps), non_matching_subset=random_pairs) # now plot the scoring matrix distbreaks=attr(smat,'distbreaks') distbreaks=distbreaks[-length(distbreaks)] dotprodbreaks=attr(smat,'dotprodbreaks')[-1] # Create a function interpolating colors in the range of specified colors jet.colors <- colorRampPalette( c("blue", "green", "yellow", "red") ) # 2d filled contour plot of scoring matrix. Notice that there is a region # at small distances and large abs dot product with the highest log odds ratio # i.e. most indicative of a match rather than non-match filled.contour(x=distbreaks, y=dotprodbreaks, z=smat, col=jet.colors(20), main='smat: log odds ratio', xlab='distance /um', ylab='abs dot product') # 3d perspective plot of the scoring matrix persp3d(x=distbreaks, y=dotprodbreaks, z=smat, col=jet.colors(20)[cut(smat,20)], xlab='distance /um', ylab='abs dot product', zlab='log odds ratio') diagonal Extract diagonal terms from a variety of matrix types Description Extract diagonal terms from a variety of matrix types Usage diagonal(x, indices = NULL) ## Default S3 method: diagonal(x, indices = NULL) Arguments x A square matrix indices specifies a subset of the diagonal using a character vector of names, a logical vector or integer indices. The default (NULL) implies all elements. Details Insists that input matrix is square. Uses the 'diagonal' attribute when available and has specialised handling of ff, big.matrix, dgCMatrix matrices. Does not check that row and column names are identical for those matrix classes (unlike the base diag function), but always uses rownames. Value a named vector containing the diagonal elements.
Examples m=fill_in_sparse_score_mat(letters[1:5]) diagonal(m) fctraces20 20 traced Drosophila neurons from Chiang et al 2011 Description This R list (which has additional class neuronlist) contains 20 skeletonized Drosophila neurons as dotprops objects. Original data is due to Chiang et al. [1], who have generously shared their raw data. Automated tracing of neuron skeletons was carried out by Lee et al [2]. Image registration and further processing was carried out by <NAME>, <NAME> and <NAME> [3]. References [1] <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., et al. (2011). Three-dimensional reconstruction of brain-wide wiring networks in Drosophila at single-cell resolution. Curr Biol 21 (1), 1–11. doi:10.1016/j.cub.2010.11.056 [2] <NAME>, <NAME>, <NAME>, and <NAME>. (2012). High-throughput computer method for 3d neuronal structure reconstruction from the image stack of the Drosophila brain and its applications. PLoS Comput Biol, 8(9):e1002658, Sep 2012. doi:10.1371/journal.pcbi.1002658. [3] NBLAST: Rapid, sensitive comparison of neuronal structure and construction of neuron family databases. <NAME>, <NAME>, <NAME>, <NAME>, <NAME>. Jefferis. bioRxiv doi:10.1101/006346. fill_in_sparse_score_mat Add one or more submatrices to a sparse score matrix Description Add one or more submatrices to a sparse score matrix Usage fill_in_sparse_score_mat(sparse_matrix, ..., diag = NULL) Arguments sparse_matrix either an existing (square) sparse matrix or a character vector of names that will be used to define an empty sparse matrix. ... Additional matrices to insert into sparse_matrix. Row and column names must have matches in sparse_matrix. diag optional full diagonal for sparse matrix i.e. self-match scores. See Also sparse_score_mat fill_pairs_sparse_score_mat Add forwards, reverse and self scores for a pair of neurons to a sparse score matrix Description Add forwards, reverse and self scores for a pair of neurons to a sparse score matrix Usage fill_pairs_sparse_score_mat( sparse_matrix, n1, n2, dense_matrix, reverse = TRUE, self = TRUE, reverse_self = (reverse && self) ) Arguments sparse_matrix the sparse matrix to fill in. n1 the name of the query neuron. n2 the name of the target neuron. dense_matrix the score matrix from which to extract scores. reverse logical indicating that the reverse score should also be filled in (default TRUE). self logical indicating that the self-score of the query should also be filled in (used for normalised scores; default TRUE). reverse_self logical indicating that the self-score of the target should also be filled in (used for mean scores; default TRUE). Value A sparse matrix (of class spam) with the specified score entries filled.
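A minimal sketch of how these two functions work together (neuron names come from the kcs20 sample data; sparse_score_mat is documented further below):

data(kcs20, package = "nat")
dense <- nblast_allbyall(kcs20)
nn <- names(kcs20)
sp <- sparse_score_mat(nn[1:2], dense)
# add forward, reverse and self scores for one extra pair of neurons
sp <- fill_pairs_sparse_score_mat(sp, nn[1], nn[3], dense)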
nblast Calculate similarity score for neuron morphologies Description Uses the NBLAST algorithm that compares the morphology of two neurons. For more control over the parameters of the algorithm, see the arguments of NeuriteBlast. Usage nblast( query, target = getOption("nat.default.neuronlist"), smat = NULL, sd = 3, version = c(2, 1), normalised = FALSE, UseAlpha = FALSE, OmitFailures = NA, ... ) Arguments query the query neuron. target a neuronlist to compare neuron against. Defaults to options("nat.default.neuronlist"). See nat-package. smat the scoring matrix to use (see details) sd Standard deviation to use in distance dependence of NBLAST v1 algorithm. Ignored when version=2. version the version of the algorithm to use (the default, 2, is the latest). normalised whether to divide scores by the self-match score of the query UseAlpha whether to weight the similarity score for each matched segment to emphasise long range neurites rather than arbours (default: FALSE, see UseAlpha section for details). OmitFailures Whether to omit neurons for which FUN gives an error. The default value (NA) will result in nblast stopping with an error message the moment there is an error. For other values, see details. ... Additional arguments passed to NeuriteBlast or the function used to compute scores from distances/dot products. (expert use only). Details When smat=NULL, options("nat.nblast.defaultsmat") will be checked and if NULL, then smat.fcwb or smat_alpha.fcwb will be used depending on the value of UseAlpha. When OmitFailures is not NA, individual nblast calls will be wrapped in try to ensure that failure for any single neuron does not abort the whole nblast call. When OmitFailures=FALSE, missing values will be left as NA. OmitFailures=TRUE is not (yet) implemented. If you want to drop scores for neurons that failed you will need to set OmitFailures=FALSE and then use na.omit or similar to post-process the scores. Note that when OmitFailures=FALSE error messages will not be printed because the call is wrapped as try(expr, silent=TRUE). Internally, the plyr package is used to provide options for parallelising NBLAST and displaying progress. To display a progress bar as the scores are computed, add .progress="natprogress" to the arguments (non-text progress bars are available – see create_progress_bar). To parallelise, add .parallel=TRUE to the arguments. In order to make use of parallel calculation, you must register a parallel backend that will distribute the computations. There are several possible backends, the simplest of which is the multicore option made available by doMC, which spreads the load across cores of the same machine. Before using this, the backend must be registered using registerDoMC (see example below). Value Named list of similarity scores. NBLAST Versions The nblast version argument presently exposes two versions of the algorithm; both use the same core procedure of aligning two vector clouds, segment by segment, and then computing the distance and absolute dot product between the nearest segment in the target neuron for every segment in the query neuron. However they differ significantly in the procedure used to calculate a score using this set of distances and absolute dot products. Version 1 of the algorithm uses a standard deviation (argument sd) as a user-supplied parameter for a negative exponential weighting function that determines the relationship between score and the distance between segments. This corresponds to the parameter $\sigma$ in the weighting function: $$f = \sqrt{|\vec{u}_i \cdot \vec{v}_i|}\,\exp\!\left(-d_i^2 / 2\sigma^2\right)$$ This is the same approach described in Kohl et al 2013 and the similarity scores in the interval (0,1) described in that paper can be exactly recapitulated by setting version=1 and normalised=TRUE. Version 2 of the algorithm is described in Costa et al 2014. This uses a more sophisticated and principled scoring approach based on a log-odds ratio defined by the distribution of matches and non-matches in sample data. This information is passed to the nblast function in the form of a scoring matrix (which can be computed by create_scoringmatrix); a default scoring matrix smat.fcwb has been constructed for Drosophila neurons. Which version should I use?
You should use version 2 if you are working with Drosophila neurons or you have sufficient training data (in the form of validated matching and random neuron pairs to construct a scoring matrix). If this is not the case, you can always fall back to version 1, setting the free parameter (sd or σ) to a value that encapsulates your understanding of the location precision of neurons in your species/brain region of interest. In the fly brain we have used σ = 3 microns, since previous estimates of the localisation of identifiable features of neurons (Jefferis, Potter et al 2007) are of this order. UseAlpha In NBLAST v2, the alpha factor for a segment indicates whether neighbouring segments are aligned in a similar direction (as typical for e.g. a long range axonal projection) or randomly aligned (as typical for dendritic arbours). See Costa et al. for details. Setting UseAlpha=TRUE will emphasise the axon, primary neurite etc. of a neuron. This can be a particularly useful option e.g. when you are searching by a traced fragment that you know or suspect to follow an axon tract. References Kohl, <NAME>, A.D., <NAME>., and <NAME> (2013). A bidirectional circuit switch reroutes pheromone signals in male and female brains. Cell 155 (7), 1610–23 doi:10.1016/j.cell.2013.11.025. <NAME>., <NAME>., <NAME>., <NAME>., and <NAME>. (2014). NBLAST: Rapid, sensitive comparison of neuronal structure and construction of neuron family databases. bioRxiv preprint. doi:10.1101/006346. <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., and <NAME>. (2007). Comprehensive maps of Drosophila higher olfactory centers: spatially segregated fruit and pheromone representation. Cell 128 (6), 1187–1203. doi:10.1016/j.cell.2007.01.040 See Also nat-package, nblast_allbyall, create_scoringmatrix, smat.fcwb Examples # load sample Kenyon cell data from nat package data(kcs20, package='nat') # search one neuron against all neurons scores=nblast(kcs20[['GadMARCM-F000142_seg002']], kcs20) # scores from best to worst, top hit is of course same neuron sort(scores, decreasing = TRUE) hist(scores, breaks=25, col='grey') abline(v=1500, col='red') # plot query neuron open3d() # plot top 3 hits (including self match with thicker lines) plot3d(kcs20[which(sort(scores, decreasing = TRUE)>1500)], lwd=c(3,1,1)) rest=names(which(scores<1500)) plot3d(rest, db=kcs20, col='grey', lwd=0.5) # normalised scores (i.e. self match = 1) of all neurons vs each other # note use of progress bar scores.norm=nblast(kcs20, kcs20, normalised = TRUE, .progress="natprogress") hist(scores.norm, breaks=25, col='grey') # produce a heatmap from normalised scores jet.colors <- colorRampPalette( c("blue", "green", "yellow", "red") ) heatmap(scores.norm, labCol = with(kcs20,type), col=jet.colors(20), symm = TRUE) ## Not run: # Parallelise NBLASTing across 4 cores using doMC package library(doMC) registerDoMC(4) scores.norm2=nblast(kcs20, kcs20, normalised=TRUE, .parallel=TRUE) stopifnot(all.equal(scores.norm2, scores.norm)) ## End(Not run) nblast_allbyall Wrapper function to compute all by all NBLAST scores for a set of neurons Description Calls nblast to compute the actual scores. Can accept either a neuronlist or neuron names as a character vector. This is a thin wrapper around nblast and its main advantages are the option of "mean" normalisation for forward and reverse scores, which is the most sensible input to give to a clustering algorithm, and the choice of returning a distance matrix. Usage nblast_allbyall(x, ...)
## S3 method for class 'character' nblast_allbyall(x, smat = NULL, db = getOption("nat.default.neuronlist"), ...) ## S3 method for class 'neuronlist' nblast_allbyall( x, smat = NULL, distance = FALSE, normalisation = c("raw", "normalised", "mean"), ... ) Arguments x Input neurons (neuronlist or character vector) ... Additional arguments for methods or nblast smat the scoring matrix to use (see details of nblast for meaning of default NULL value) db A neuronlist or a character vector naming one. Defaults to the value of options("nat.default.neuronlist"). distance logical indicating whether to return distances or scores. normalisation the type of normalisation procedure that should be carried out, selected from 'raw', 'normalised' or 'mean' (i.e. the average of normalised scores in both directions). If distance=TRUE then this cannot be raw. Details Note that nat already provides a function nhclust for clustering, which is a wrapper for R’s hclust function. nhclust actually expects raw scores as input. TODO It would be a good idea in the future to implement a parallel version of this function. See Also nblast, sub_score_mat, nhclust Examples library(nat) kcs20.scoremat=nblast_allbyall(kcs20) kcs20.hclust=nhclust(scoremat=kcs20.scoremat) plot(kcs20.hclust) NeuriteBlast Produce similarity score for neuron morphologies Description A low-level entry point to the NBLAST algorithm that compares the morphology of a neuron with those of a list of other neurons. For most use cases, one would probably wish to use nblast instead. Usage NeuriteBlast( query, target, targetBinds = NULL, normalised = FALSE, OmitFailures = NA, simplify = TRUE, ... ) Arguments query either a single query neuron or a neuronlist target a neuronlist to compare neuron against. targetBinds numeric indices or names with which to subset target. normalised whether to divide scores by the self-match score of the query OmitFailures Whether to omit neurons for which FUN gives an error. The default value (NA) will result in nblast stopping with an error message the moment there is an error. For other values, see details. simplify whether to simplify the scores from a list to a vector. TRUE by default. The only time you might want to set this false is if you are collecting something other than simple scores from the search function. See simplify2array for further details. ... extra arguments to pass to the distance function. Details For detailed description of the OmitFailures argument, see the details section of nblast. Value Named list of similarity scores. See Also WeightedNNBasedLinesetMatching neuron_pairs Utility function to generate all or random pairs of neurons Description Utility function to generate all or random pairs of neurons Usage neuron_pairs(query, target, n = NA, ignoreSelf = TRUE) Arguments query, target either neuronlists or character vectors of names. If target is missing, query will be used as both query and target. n number of random pairs to draw. When NA, the default, uses expand.grid to draw all pairs. ignoreSelf Logical indicating whether to omit pairs consisting of the same neuron (default TRUE). Value a data.frame with two character vector columns, query and target. See Also calc_score_matrix, expand.grid Examples neuron_pairs(nat::kcs20, n=20) nhclust Cluster a set of neurons Description Given an NBLAST all by all score matrix (which may be specified by a package default) and/or a vector of neuron identifiers, use hclust to carry out a hierarchical clustering.
The default value of the distfun argument will handle square distance matrices and R dist objects. Usage nhclust( neuron_names, method = "ward", scoremat = NULL, distfun = as.dist, ..., maxneurons = 4000 ) Arguments neuron_names character vector of neuron identifiers. method clustering method (default Ward’s). scoremat score matrix to use (see sub_score_mat for details of default). distfun function to convert distance matrix returned by sub_dist_mat into R dist object (default = as.dist). ... additional parameters passed to hclust. maxneurons set this to a sensible value to avoid loading huge (order N^2) distances directly into memory. Value An object of class hclust which describes the tree produced by the clustering process. See Also hclust, dist Other scoremats: sub_dist_mat() Examples library(nat) kcscores=nblast_allbyall(kcs20) hckcs=nhclust(scoremat=kcscores) # divide hclust object into 3 groups library(dendroextras) dkcs=colour_clusters(hckcs, k=3) # change dendrogram labels to neuron type, extracting this information # from type column in the metadata data.frame attached to kcs20 neuronlist labels(dkcs)=with(kcs20[labels(dkcs)], type) plot(dkcs) # 3d plot of neurons in those clusters (with matching colours) open3d() plot3d(hckcs, k=3, db=kcs20) # names of neurons in 3 groups subset(hckcs, k=3) plot3d.hclust Methods to identify and plot groups of neurons cut from an hclust object Description plot3d.hclust uses plot3d to plot neurons from each group, cut from the hclust object, by colour. Usage ## S3 method for class 'hclust' plot3d( x, k = NULL, h = NULL, groups = NULL, col = rainbow, colour.selected = FALSE, ... ) Arguments x an hclust object generated by nhclust. k number of clusters to cut from hclust object. h height to cut hclust object. groups numeric vector of groups to plot. col colours for groups (directly specified or a function). colour.selected When set to TRUE the colour palette only applies to the displayed cluster groups (default FALSE). ... additional arguments for plot3d Details Note that the colours are in the order of the dendrogram as assigned by colour_clusters. Value A list of rgl IDs for plotted objects (see plot3d). See Also nhclust, plot3d, slice, colour_clusters Examples # 20 Kenyon cells data(kcs20, package='nat') # calculate mean, normalised NBLAST scores kcs20.aba=nblast_allbyall(kcs20) kcs20.hc=nhclust(scoremat = kcs20.aba) # plot the resultant dendrogram plot(kcs20.hc) # now plot the neurons in 3D coloured by cluster group # note that specifying db explicitly could be avoided by use of the # \code{nat.default.neuronlist} option. plot3d(kcs20.hc, k=3, db=kcs20) # only plot first two groups # (will plot in same colours as when all groups are plotted) plot3d(kcs20.hc, k=3, db=kcs20, groups=1:2) # only plot first two groups # (will be coloured with a two-tone palette) plot3d(kcs20.hc, k=3, db=kcs20, groups=1:2, colour.selected=TRUE) show_similarity Display two neurons with segments in the query coloured by similarity Description By default, the query neuron will be drawn with its segments shaded from red to blue, with red indicating a poor match to the target segments, and blue a good match. Usage show_similarity( query, target, smat = NULL, cols = colorRampPalette(c("red", "yellow", "cyan", "navy")), col = "black", AbsoluteScale = FALSE, PlotVectors = TRUE, ... ) Arguments query a neuron to compare and colour. target the neuron to compare against. smat a score matrix (if NULL, defaults to smat.fcwb). cols the function to use to colour the segments (e.g.
heat.colors). col the colour with which to draw the target neuron. AbsoluteScale logical indicating whether the colours should be calculated based on the minimum and maximum similarities for the neuron (AbsoluteScale = FALSE) or on the minimum and maximum possible for all neurons. PlotVectors logical indicating whether the vectors of the dotprops representation should be plotted. If FALSE, only the points are plotted. ... extra arguments to pass to plot3d. Value show_similarity is called for the side effect of drawing the plot; a vector of object IDs is returned. See Also The low level function WeightedNNBasedLinesetMatching is used to retrieve the scores. Examples ## Not run: library(nat) # Pull out gamma and alpha-beta neurons gamma_neurons <- subset(kcs20, type=='gamma') ab_neurons <- subset(kcs20, type=='ab') # Compare two alpha-beta neurons with similar branching, but dissimilar arborisation clear3d() show_similarity(ab_neurons[[1]], ab_neurons[[2]]) # Compare an alpha-beta and a gamma neuron with some similarities and differences clear3d() show_similarity(ab_neurons[[1]], gamma_neurons[[3]]) ## End(Not run) smat.fcwb Scoring matrices for neuron similarities in FCWB template brain Description Scoring matrices quantify the log2 odds ratio that a segment pair with a given distance and absolute dot product come from a pair of neurons of the same type, rather than unrelated neurons. Details These scoring matrices were generated using all by all pairs from 150 DL2 antennal lobe projection neurons from the FlyCircuit dataset and 5000 random pairs from the same dataset. • smat.fcwb was trained using nearest-neighbour distance and the tangent vector defined by the first eigenvector of the k=5 nearest neighbours. • smat_alpha.fcwb was defined as for smat.fcwb but weighted by the factor alpha defined as (l1-l2)/(l1+l2+l3) where l1,l2,l3 are the three eigenvalues. Most work on the flycircuit dataset has been carried out using the smat.fcwb scoring matrix, although the smat_alpha.fcwb matrix, which emphasises the significance of matches between linear regions of the neuron (such as axons), may have some advantages. sparse_score_mat Convert a subset of a square score matrix to a sparse representation Description This can be useful for storing raw forwards and reverse NBLAST scores for a set of neurons without having to store all the uncomputed elements in the full score matrix. Usage sparse_score_mat(neuron_names, dense_matrix) Arguments neuron_names a character vector of neuron names to save scores for. dense_matrix the original, dense version of the full score matrix. Value A sparse matrix, in compressed, column-oriented form, as an R object inheriting from both CsparseMatrix-class and generalMatrix-class. See Also fill_in_sparse_score_mat Examples data(kcs20, package = "nat") scores=nblast_allbyall(kcs20) scores.3.sparse=sparse_score_mat(names(kcs20)[3], scores) scores.3.sparse # can also add additional submatrices fill_in_sparse_score_mat(scores.3.sparse,scores[3:6,3:4]) subset.hclust Return the labels of items in 1 or more groups cut from hclust object Description Return the labels of items in 1 or more groups cut from hclust object Usage ## S3 method for class 'hclust' subset(x, k = NULL, h = NULL, groups = NULL, ...) Arguments x tree like object k an integer scalar with the desired number of groups h numeric scalar with height where the tree should be cut groups a vector of which groups to inspect. ... Additional parameters passed to methods Details Only one of h and k should be supplied.
Value A character vector of labels of selected items sub_dist_mat Convert (a subset of) a raw score matrix to a distance matrix Description This function can convert a raw score matrix returned by nblast into a square distance matrix or dist object. It can be used with file-backed matrices as well as regular R matrices resident in memory. Usage sub_dist_mat( neuron_names, scoremat = NULL, form = c("matrix", "dist"), maxneurons = NA ) Arguments neuron_names character vector of neuron identifiers. scoremat score matrix to use (see sub_score_mat for details of default). form the type of object to return. maxneurons set this to a sensible value to avoid loading huge (order N^2) distances directly into memory. Details Note that if neuron_names is missing then the rownames of scoremat will be used, i.e. every neuron in scoremat will be used. Value An object of class matrix or dist (as determined by the form argument), corresponding to a subset of the distance matrix. See Also Other scoremats: nhclust() sub_score_mat Return scores (or distances) for given query and target neurons Description Scores can either be returned as raw numbers, normalised such that a self-hit has score 1, or as the average of the normalised scores in both the forwards & reverse directions (i.e. (|query->target| + |target->query|) / 2). Distances are returned as either 1 - normscore in the forwards direction, or as 1 - normscorebar, where normscorebar is normscore averaged across both directions. Usage sub_score_mat( query, target, scoremat = NULL, distance = FALSE, normalisation = c("raw", "normalised", "mean") ) Arguments query, target character vectors of neuron identifiers. scoremat a matrix, ff matrix, bigmatrix or a character vector specifying the name of an ff matrix containing the all by all score matrix. distance logical indicating whether to return distances or scores. normalisation the type of normalisation procedure that should be carried out, selected from 'raw', 'normalised' or 'mean' (i.e. the average of normalised scores in both directions). If distance=TRUE then this cannot be raw. See Also sub_dist_mat WeightedNNBasedLinesetMatching Compute point & tangent vector similarity score between two linesets Description WeightedNNBasedLinesetMatching is a low level function that is called by nblast. Most end users will not usually need to call it directly. It does allow the results of an NBLAST comparison to be inspected in further detail (see examples). Usage WeightedNNBasedLinesetMatching(target, query, ...) ## S3 method for class 'dotprops' WeightedNNBasedLinesetMatching(target, query, UseAlpha = FALSE, ...) ## S3 method for class 'neuron' WeightedNNBasedLinesetMatching( target, query, UseAlpha = FALSE, OnlyClosestPoints = FALSE, ... ) Arguments target, query dotprops or neuron objects to compare (must be of the same class) ... extra arguments to pass to the distance function. UseAlpha Whether to scale dot product of tangent vectors (default = FALSE) OnlyClosestPoints Whether to restrict searches to the closest points in the target (default FALSE, only implemented for dotprops). Details WeightedNNBasedLinesetMatching will work with 2 objects of class dotprops or neuron. The code to calculate scores directly for neuron objects gives broadly comparable scores to that for dotprops objects, but has been lightly tested. Furthermore only objects in dotprops form were used in the construction of the scoring matrices distributed in this package.
It is therefore recommended to convert neuron objects to dotprops objects using the dotprops function. UseAlpha determines whether the alpha values (eig1-eig2)/sum(eig1:3) are passed on to WeightedNNBasedLinesetMatching. These will be used to scale the dot products of the direction vectors for nearest neighbour pairs. Value Value of NNDistFun passed to WeightedNNBasedLinesetMatching See Also dotprops Examples # Retrieve per segment distances / absolute dot products segvals=WeightedNNBasedLinesetMatching(kcs20[[1]], kcs20[[2]], NNDistFun=list) names(segvals)=c("dist", "adotprod") pairs(segvals) [ Extract parts of a sparse spam matrix Description Extract parts of a sparse spam matrix Usage ## S4 method for signature 'spam,character,character,logical' x[i, j, ..., drop = TRUE] ## S4 method for signature 'spam,character,character,missing' x[i, j, ..., drop = TRUE] ## S4 method for signature 'spam,character,missing,logical' x[i, j, ..., drop = TRUE] ## S4 method for signature 'spam,character,missing,missing' x[i, j, ..., drop = TRUE] ## S4 method for signature 'spam,missing,character,logical' x[i, j, ..., drop = TRUE] ## S4 method for signature 'spam,missing,character,missing' x[i, j, ..., drop = TRUE] Arguments x object to extract from. i row identifiers. j column identifiers. ... additional arguments. drop logical indicating that dimensions should be dropped.
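These methods allow a sparse spam score matrix to be indexed by neuron name just like a dense matrix. A minimal sketch (reusing the empty sparse matrix from the diagonal example above):

m <- fill_in_sparse_score_mat(letters[1:5])
m["a", "b"]        # a single score
m[c("a", "c"), ]   # whole rows selected by name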
@finos/perspective-esbuild-plugin
npm
JavaScript
[`@finos/perspective-esbuild-plugin`](#finosperspective-esbuild-plugin) === Applications bundled with `esbuild` can make use of the `@finos/perspective-esbuild-plugin` module. A full example can be found in the repo under [`examples/esbuild-example`](https://github.com/finos/perspective/tree/master/examples/esbuild-example).

```
const esbuild = require("esbuild");
const {PerspectiveEsbuildPlugin} = require("@finos/perspective-esbuild-plugin");

esbuild.build({
    entryPoints: ["src/index.js"],
    plugins: [PerspectiveEsbuildPlugin()],
    format: "esm",
    bundle: true,
    loader: {
        ".ttf": "file",
    },
});
```

When bundling via `esbuild`, you must also: * Use the `type="module"` attribute in your app's `<script>` tag, as this build mode is only supported via ES modules. * Use the direct imports for the `esm` versions of Perspective, specifically `@finos/perspective/dist/esm/perspective.js` and `@finos/perspective-viewer/dist/esm/perspective-viewer.js`
oxygengine-ai
rust
Rust
Struct oxygengine_ai::AiSystemInstallerSetup ===

```
pub struct AiSystemInstallerSetup<'a, C>
where
    C: Component,
{
    pub postfix: &'a str,
    pub behaviors: AiBehaviors<C>,
}
```

Fields --- `postfix: &'a str` `behaviors: AiBehaviors<C>` Auto Trait Implementations --- ### impl<'a, C> !RefUnwindSafe for AiSystemInstallerSetup<'a, C> ### impl<'a, C> Send for AiSystemInstallerSetup<'a, C> ### impl<'a, C> Sync for AiSystemInstallerSetup<'a, C> ### impl<'a, C> Unpin for AiSystemInstallerSetup<'a, C> ### impl<'a, C> !UnwindSafe for AiSystemInstallerSetup<'a, C> Blanket Implementations --- ### impl<T> Any for T where T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. ### impl<T> Borrow<T> for T where T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. ### impl<T> BorrowMut<T> for T where T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. #### unsafe fn finalize_raw(data: *mut ()) Safety ### impl<T> From<T> for T #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T, U> Into<U> for T where U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T, U> TryFrom<U> for T where U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error. #### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error> Performs the conversion. ### impl<T, U> TryInto<U> for T where U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error. #### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error> Performs the conversion. ### impl<T> Component for T where T: Send + Sync + 'static,
carat
cran
R
Package ‘carat’ September 5, 2023 Type Package Title Covariate-Adaptive Randomization for Clinical Trials Version 2.2.1 Date 2023-09-05 Author <NAME> [aut], <NAME> [aut, cre], <NAME> [aut, ths], <NAME> [aut, ths] Maintainer <NAME> <<EMAIL>> Description Provides functions and a command-line user interface to generate allocation sequences by covariate-adaptive randomization for clinical trials. The package currently supports six covariate-adaptive randomization procedures. Three hypothesis testing methods that are valid and robust under covariate-adaptive randomization are also available in the package to facilitate the inference for treatment effect under the included randomization procedures. Additionally, the package provides comprehensive and efficient tools to allow one to evaluate and compare the performance of randomization procedures and tests based on various criteria. See <NAME>, <NAME>, <NAME>, and <NAME> (2023) <doi:10.18637/jss.v107.i02> for details. License GPL (>= 2) Imports Rcpp (>= 1.0.4.6), ggplot2 (>= 3.3.0), gridExtra (>= 2.3), stringr (>= 1.4.0), methods Suggests dplyr (>= 0.8.5) Depends R (>= 3.6.0) NeedsCompilation yes LinkingTo Rcpp, RcppArmadillo Repository CRAN Date/Publication 2023-09-05 15:30:06 UTC R topics documented: carat-package, AdjBCD, AdjBCD.sim, AdjBCD.ui, boot.test, compPower, compRand, corr.test, DoptBCD, DoptBCD.sim, DoptBCD.ui, evalPower, evalRand, evalRand.sim, getData, HuHuCAR, HuHuCAR.sim, HuHuCAR.ui, pats, PocSimMIN, PocSimMIN.sim, PocSimMIN.ui, rand.test, StrBCD, StrBCD.sim, StrBCD.ui, StrPBR, StrPBR.sim, StrPBR.ui carat-package carat-package: Covariate-Adaptive Randomization for Clinical Trials Description Provides functions and a command-line user interface to generate allocation sequences for clinical trials with covariate-adaptive randomization methods. It currently supports six different covariate-adaptive randomization procedures, including stratified randomization, minimization, and a general family of designs proposed by Hu and Hu (2012) <doi:10.1214/12-AOS983>. Three hypothesis testing methods, all valid and robust under covariate-adaptive randomization, are also included in the package to facilitate the inference for treatment effects under the included randomization procedures. Additionally, the package provides comprehensive and efficient tools for the performance evaluation and comparison of randomization procedures and tests based on various criteria. Acknowledgement This work was supported by the Fundamental Research Funds for the Central Universities, and the Research Funds of Renmin University of China [grant number 20XNA023]. Author(s) <NAME> <<EMAIL>>; <NAME> <<EMAIL>>; <NAME> <<EMAIL>>; <NAME> <<EMAIL>>. References Atkinson A C. Optimum biased coin designs for sequential clinical trials with prognostic factors[J]. Biometrika, 1982, 69(1): 61-67. <doi:10.2307/2335853> <NAME>, <NAME>. The covariate-adaptive biased coin design for balancing clinical trials in the presence of prognostic factors[J]. Biometrika, 2011, 98(3): 519-535. <doi:10.1093/biomet/asr021> <NAME>, <NAME>. Asymptotic properties of covariate-adaptive randomization[J]. The Annals of Statistics, 2012, 40(3): 1794-1815. <doi:10.1214/12-AOS983> <NAME>, <NAME>, <NAME>. Testing hypotheses of covariate-adaptive randomized clinical trials[J]. Journal of the American Statistical Association, 2015, 110(510): 669-680. <doi:10.1080/01621459.2014.922469> <NAME>, <NAME>, <NAME>, et al. Statistical Inference for Covariate-Adaptive Randomization Procedures[J]. Journal of the American Statistical Association, 2020, 115(531): 1488-1597. <doi:10.1080/01621459.2019.1635483> <NAME>, <NAME>, <NAME>, <NAME>. carat: Covariate-Adaptive Randomization for Clinical Trials[J]. Journal of Statistical Software, 2023, 107(2): 1-47. <doi:10.18637/jss.v107.i02> <NAME>, <NAME>. Sequential treatment assignment with balancing for prognostic factors in the controlled clinical trial[J]. Biometrics, 1975: 103-115. <doi:10.2307/2529712> <NAME>, <NAME>. Randomization in clinical trials: theory and practice[M]. John Wiley & Sons, 2015. <doi:10.1002/9781118742112> <NAME>., <NAME>. Validity of tests under covariate-adaptive biased coin randomization and generalized linear models[J]. Biometrics, 2013, 69(4), 960-969. <doi:10.1111/biom.12062> <NAME>., <NAME>, <NAME>. A theory for testing hypotheses under covariate-adaptive randomization[J]. Biometrika, 2010, 97(2): 347-360. <doi:10.1093/biomet/asq014> <NAME>. The randomization and stratification of patients to clinical trials[J]. Journal of chronic diseases, 1974, 27(7): 365-375. <doi:10.1016/0021-9681(74)90015-0>
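As a quick orientation before the individual help pages, a typical session builds a data frame of covariate profiles, passes it to one of the randomization functions, and inspects the returned "carandom" object. A minimal sketch using AdjBCD, which is documented next (any of the six supported procedures can be substituted):

library(carat)
df <- data.frame(gender = sample(c("female", "male"), 200, TRUE),
                 age = sample(c("0-30", "30-50", ">50"), 200, TRUE),
                 stringsAsFactors = TRUE)
res <- AdjBCD(df, a = 3)   # covariate-adjusted biased coin design
res$assignments            # the generated randomization sequence
res$Diff                   # final imbalances at the overall, stratum and margin levels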
<doi:10.1080/01621459.2014.922469>
<NAME>, <NAME>, <NAME>, et al. Statistical Inference for Covariate-Adaptive Randomization Procedures[J]. Journal of the American Statistical Association, 2020, 115(531): 1488-1497. <doi:10.1080/01621459.2019.1635483>
<NAME>, <NAME>, <NAME>, <NAME>. carat: Covariate-Adaptive Randomization for Clinical Trials[J]. Journal of Statistical Software, 2023, 107(2): 1-47. <doi:10.18637/jss.v107.i02>
<NAME>, <NAME>. Sequential treatment assignment with balancing for prognostic factors in the controlled clinical trial[J]. Biometrics, 1975: 103-115. <doi:10.2307/2529712>
<NAME>, <NAME>. Randomization in clinical trials: theory and practice[M]. John Wiley & Sons, 2015. <doi:10.1002/9781118742112>
<NAME>., <NAME>. Validity of tests under covariate-adaptive biased coin randomization and generalized linear models[J]. Biometrics, 2013, 69(4): 960-969. <doi:10.1111/biom.12062>
<NAME>., <NAME>, <NAME>. A theory for testing hypotheses under covariate-adaptive randomization[J]. Biometrika, 2010, 97(2): 347-360. <doi:10.1093/biomet/asq014>
<NAME>. The randomization and stratification of patients to clinical trials[J]. Journal of chronic diseases, 1974, 27(7): 365-375. <doi:10.1016/0021-9681(74)90015-0>

AdjBCD Covariate-adjusted Biased Coin Design

Description
Allocates patients to one of two treatments based on the covariate-adjusted biased coin design as proposed by <NAME>, <NAME> (2011) <doi:10.1093/biomet/asr021>.

Usage
AdjBCD(data, a = 3)

Arguments
data a data frame. A row of the data frame corresponds to the covariate profile of a patient.
a a design parameter governing the degree of randomness. The default is 3.

Details
Consider I covariates and m_i levels for the ith covariate, i = 1, . . . , I. T_j is the assignment of the jth patient and Z_j = (k_1, . . . , k_I) indicates the covariate profile of the jth patient, j = 1, . . . , n. For convenience, (k_1, . . . , k_I) and (i; k_i) denote stratum and margin, respectively. D_j(.) is the difference between the numbers of patients assigned to treatment 1 and treatment 2 at the corresponding level after j patients have been assigned. Let F^a be a decreasing and symmetric function of D_j(.), which depends on a design parameter a ≥ 0. Then, the probability of allocating the (j + 1)th patient to treatment 1 is F^a(D_j(.)), where

F^a(x) = 1 / (x^a + 1) for x ≥ 1,
F^a(x) = 1/2 for x = 0, and
F^a(x) = |x|^a / (|x|^a + 1) for x ≤ −1.

As a goes to ∞, the design becomes more deterministic. Details of the procedure can be found in <NAME> and <NAME> (2011).

Value
It returns an object of class "carandom". An object of class "carandom" is a list containing the following components:
datanumeric a bool indicating whether the data is a numeric data frame.
covariates a character string giving the name(s) of the included covariates.
strt_num the number of strata.
cov_num the number of covariates.
level_num a vector of level numbers for each covariate.
n the number of patients.
Cov_Assig a (cov_num + 1) * n matrix containing covariate profiles for all patients and the corresponding assignments. The ith column represents the ith patient. The first cov_num rows include patients’ covariate profiles, and the last row contains the assignments.
assignments the randomization sequence.
All strata a matrix containing all strata involved.
Diff a matrix with only one column, containing the final differences at the overall, within-stratum, and within-covariate-margin levels.
method a character string describing the randomization procedure to be used.
Data Type a character string giving the data type, Real or Simulated.
framework the framework of the used randomization procedure: stratified randomization, or model-based method.
data the data frame.

References
<NAME>, <NAME>. The covariate-adaptive biased coin design for balancing clinical trials in the presence of prognostic factors[J]. Biometrika, 2011, 98(3): 519-535.
<NAME>, <NAME>, <NAME>, <NAME>. carat: Covariate-Adaptive Randomization for Clinical Trials[J]. Journal of Statistical Software, 2023, 107(2): 1-47.

See Also
See AdjBCD.sim for allocating patients with covariate data generating mechanism; See AdjBCD.ui for the command-line user interface.

Examples
# a simple use
## Real Data
## create a dataframe
df <- data.frame("gender" = sample(c("female", "male"), 1000, TRUE, c(1 / 3, 2 / 3)),
                 "age" = sample(c("0-30", "30-50", ">50"), 1000, TRUE),
                 "jobs" = sample(c("stu.", "teac.", "others"), 1000, TRUE),
                 stringsAsFactors = TRUE)
Res <- AdjBCD(df, a = 2)
## view the output
Res
## view all patients' profiles and assignments
Res$Cov_Assig

## Simulated Data
n <- 1000
cov_num <- 3
level_num <- c(2, 3, 5)
# Set pr to follow two tips:
# (1) length of pr should be sum(level_num);
# (2) sum of probabilities for each margin should be 1.
pr <- c(0.4, 0.6, 0.3, 0.4, 0.3, rep(0.2, times = 5))
# set the design parameter
a <- 1.8
# obtain result
Res.sim <- AdjBCD.sim(n, cov_num, level_num, pr, a)
# view the assignments of patients
Res.sim$Cov_Assig[cov_num + 1, ]
# view the differences between treatment 1 and treatment 2 at all levels
Res.sim$Diff

AdjBCD.sim Covariate-adjusted Biased Coin Design with Covariate Data Generating Mechanism

Description
Allocates patients to one of two treatments based on the covariate-adjusted biased coin design as proposed by <NAME>, <NAME> (2011) <doi:10.1093/biomet/asr021>, by simulating the covariate profiles under the assumption of independence between covariates and levels within each covariate.

Usage
AdjBCD.sim(n = 1000, cov_num = 2, level_num = c(2, 2), pr = rep(0.5, 4), a = 3)

Arguments
n the number of patients. The default is 1000.
cov_num the number of covariates. The default is 2.
level_num a vector of level numbers for each covariate. Hence the length of level_num should be equal to the number of covariates. The default is c(2, 2).
pr a vector of probabilities. Under the assumption of independence between covariates, pr is a vector containing probabilities for each level of each covariate. The length of pr should correspond to the number of all levels, and the sum of the probabilities for each margin should be 1. The default is rep(0.5, 4), which corresponds to cov_num = 2 and level_num = c(2, 2).
a a design parameter governing the degree of randomness. The default is 3.

Details
See AdjBCD.

Value
See AdjBCD.

References
<NAME>, <NAME>. The covariate-adaptive biased coin design for balancing clinical trials in the presence of prognostic factors[J]. Biometrika, 2011, 98(3): 519-535.
<NAME>, <NAME>, <NAME>, <NAME>. carat: Covariate-Adaptive Randomization for Clinical Trials[J]. Journal of Statistical Software, 2023, 107(2): 1-47.

See Also
See AdjBCD for allocating patients with complete covariate data; See AdjBCD.ui for the command-line user interface.

AdjBCD.ui Command-line User Interface Using Covariate-adjusted Biased Coin Design

Description
A call to the user-interface function for allocation of patients to one of two treatments, using the covariate-adjusted biased coin design, as proposed by <NAME>, <NAME> (2011) <doi:10.1093/biomet/asr021>.
Usage
AdjBCD.ui(path, folder = "AdjBCD")

Arguments
path the path in which a folder used to store variables will be created.
folder name of the folder. If it is the default, a folder named "AdjBCD" will be created.

Details
See AdjBCD.

Value
It returns an object of class "carseq". The function print is used to obtain results. The generic accessor functions assignment, covariate, cov_num, cov_profile and others extract various useful features of the value returned by AdjBCD.ui.

Note
This function provides a command-line user interface, and users should follow the prompts to enter data including covariates as well as levels for each covariate, the design parameter a, and the covariate profile of the new patient.

References
<NAME>, <NAME>. The covariate-adaptive biased coin design for balancing clinical trials in the presence of prognostic factors[J]. Biometrika, 2011, 98(3): 519-535.
<NAME>, <NAME>, <NAME>, <NAME>. carat: Covariate-Adaptive Randomization for Clinical Trials[J]. Journal of Statistical Software, 2023, 107(2): 1-47.

See Also
See AdjBCD for allocating patients with complete covariate data; See AdjBCD.sim for allocating patients with covariate data generating mechanism.

boot.test Bootstrap t-test

Description
Performs a bootstrap t-test on treatment effects. This test is proposed by Shao et al. (2010) <doi:10.1093/biomet/asq014>.

Usage
boot.test(data, B = 200, method = c("HuHuCAR", "PocSimMIN", "StrBCD", "StrPBR", "DoptBCD", "AdjBCD"), conf = 0.95, ...)

Arguments
data a data frame. It consists of patients’ profiles, treatment assignments and outputs. See getData.
B an integer. It is the number of bootstrap samples. The default is 200.
method the randomization procedure to be used for testing. This package provides tests for "HuHuCAR", "PocSimMIN", "StrBCD", "StrPBR", "AdjBCD", and "DoptBCD".
conf confidence level of the interval. The default is 0.95.
... arguments to be passed to method. These arguments depend on the randomization method used and the following arguments are accepted:
omega a vector of weights at the overall, within-stratum, and within-covariate-margin levels. It is required that at least one element is larger than 0. Note that omega is only needed when "HuHuCAR" is to be used.
weight a vector of weights for within-covariate-margin imbalances. It is required that at least one element is larger than 0. Note that weight is only needed when "PocSimMIN" is to be used.
p the biased coin probability. p should be larger than 1/2 and less than 1. Note that p is only needed when "HuHuCAR", "PocSimMIN" and "StrBCD" are to be used.
a a design parameter governing the degree of randomness. Note that a is only needed when "AdjBCD" is to be used.
bsize the block size for stratified randomization. It is required to be a multiple of 2. Note that bsize is only needed when "StrPBR" is to be used.

Details
The bootstrap t-test is described as follows:
1) Generate bootstrap data (Y*_1, Z*_1), . . . , (Y*_n, Z*_n) as a simple random sample with replacement from the original data (Y_1, Z_1), . . . , (Y_n, Z_n), where Y_i denotes the outcome and Z_i denotes the profile of the ith patient.
2) Perform covariate-adaptive procedures on the patients’ profiles to obtain new treatment assignments T*_1, . . . , T*_n, and define

θ̂* = (1 / n*_1) Σ_{i=1}^{n} (2 − T*_i) Y*_i − (1 / n*_0) Σ_{i=1}^{n} (T*_i − 1) Y*_i,

where n*_1 is the number of patients assigned to treatment 1 and n*_0 is the number of patients assigned to treatment 2.
3) Repeat step 2 B times to generate B independent bootstrap samples to obtain θ̂*_b, b = 1, . . . , B.
The variance of Ȳ_1 − Ȳ_0 can then be approximated by the sample variance of θ̂*_b.

Value
It returns an object of class "htest". An object of class "htest" is a list containing the following components:
statistic the value of the t-statistic.
p.value the p-value of the test; the null hypothesis is rejected if the p-value is less than the pre-determined significance level.
conf.int a confidence interval under the chosen level conf for the difference in treatment effect between treatment 1 and treatment 2.
estimate the estimated treatment effect difference between treatment 1 and treatment 2.
stderr the standard error of the mean (difference), used as denominator in the t-statistic formula.
method a character string indicating what type of test was performed.
data.name a character string giving the name(s) of the data.

References
<NAME>, <NAME>, <NAME>, <NAME>. carat: Covariate-Adaptive Randomization for Clinical Trials[J]. Journal of Statistical Software, 2023, 107(2): 1-47.
<NAME>, <NAME>, <NAME>. A theory for testing hypotheses under covariate-adaptive randomization[J]. Biometrika, 2010, 97(2): 347-360.

Examples
# Suppose the data used are patients' profiles from the real world,
# while they are generated here. Data need to be preprocessed
# and then get assignments following a certain randomization.
set.seed(100)
df <- data.frame("gender" = sample(c("female", "male"), 100, TRUE, c(1 / 3, 2 / 3)),
                 "age" = sample(c("0-30", "30-50", ">50"), 100, TRUE),
                 "jobs" = sample(c("stu.", "teac.", "other"), 100, TRUE, c(0.4, 0.2, 0.4)),
                 stringsAsFactors = TRUE)
## data preprocessing
data.pd <- StrPBR(data = df, bsize = 4)$Cov_Assig
# Then we need to combine patients' profiles and outcomes after randomization and treatments.
outcome = runif(100)
data.combined = data.frame(rbind(data.pd, outcome), stringsAsFactors = TRUE)
# run the bootstrap t-test
B = 200
Strbt = boot.test(data.combined, B, "StrPBR", bsize = 4)
Strbt

compPower Comparison of Powers for Different Tests under Different Randomization Methods

Description
Compares the power of tests under different randomization methods and treatment effects through matrices and plots.

Usage
compPower(powers, diffs, testname)

Arguments
powers a list. Each element contains the powers generated by evalPower() in this package or by other sources. The lengths of the elements must match.
diffs a vector. It contains values of group treatment effect differences. The length of this argument and the length of each element of powers must match.
testname a vector. Each element is the name of a test and the randomization method used. For example, when applying rand.test and corr.test under HuHuCAR, it can be c("HH.rand", "HH.corr"). The length of this argument must match the length of powers.

Value
This function returns a list. The first element is a matrix consisting of powers of the chosen tests under different values of treatment effects. The second element of the list is a plot of powers, in which diffs forms the horizontal axis.
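For concreteness, the following is a brief sketch, not taken from the package, of assembling the inputs and inspecting the returned list; positional access to the two list elements (cp[[1]], cp[[2]]) is an assumption based on the Value description above, and all settings are kept deliberately small.

# A brief illustrative sketch of consuming compPower() output; positional
# access to the returned list is assumed from the Value section above.
library("ggplot2")
set.seed(100)
di <- seq(0, 0.4, 0.2)
pw1 <- evalPower(100, 2, c(2, 2), rep(0.5, 4), "linear", rep(0.5, 4), di,
                 1, 10, 0.05, "StrBCD", "corr.test", FALSE, 0.85)
pw2 <- evalPower(100, 2, c(2, 2), rep(0.5, 4), "linear", rep(0.5, 4), di,
                 1, 10, 0.05, "HuHuCAR", "corr.test", FALSE,
                 c(0.2, 0.2, 0.3, 0.3), 0.85)
cp <- compPower(list(pw1, pw2), di, c("StrBCD.corr", "HuHuCAR.corr"))
cp[[1]]  # matrix of powers across the treatment-effect differences in di
cp[[2]]  # ggplot2 object comparing the power curves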
Examples
## settings
set.seed(100)
n = 1000
cov_num = 5
level_num = c(2, 2, 2, 2, 2)
pr = rep(0.5, 10)
beta = c(1, 4, 3, 2, 5, 5, 4, 3, 2, 1)
di = seq(0, 0.5, 0.1)
sigma = 1
type = "linear"
p = 0.85
Iternum = 10 # << for demonstration, it is suggested to be around 1000
sl = 0.05
weight = rep(0.1, 5)
# comparison of the corrected t-test under StrBCD and PocSimMIN
## data generation
library("ggplot2")
Strctp = evalPower(n, cov_num, level_num, pr, type, beta, di,
                   sigma, Iternum, sl, "StrBCD", "corr.test", FALSE, p)
PSctp = evalPower(n, cov_num, level_num, pr, type, beta, di, sigma,
                  Iternum, sl, "PocSimMIN", "corr.test", FALSE, weight, p)
powers = list(Strctp, PSctp)
testname = c("StrBCD.corr", "PocSimMIN.corr")
# get plot and matrix for comparison
cp = compPower(powers, di, testname)
cp

compRand Compare Different Randomization Procedures via Tables and Plots

Description
Compares randomization procedures based on several different quantities of imbalances. Among all included randomization procedures of class "careval", two or more procedures can be compared in this function.

Usage
compRand(...)

Arguments
... objects of class "careval".

Details
The primary goal of using covariate-adaptive randomization in practice is to achieve balance with respect to the key covariates. We choose four rules to measure the absolute imbalances at the overall, within-covariate-margin, and within-stratum levels, namely the maximum, 95% quantile, median, and mean of the absolute imbalances at the corresponding level. The Monte Carlo method is used to calculate the four types of imbalances. Let D_{n,i}(·) be the final difference at the corresponding level for the ith iteration, i = 1, . . . , N, where N is the number of iterations, and let |D_{(1)}(·)| ≤ . . . ≤ |D_{(N)}(·)| denote the ordered absolute differences.
(1) Maximal: max_{1 ≤ i ≤ N} |D_{n,i}(·)|.
(2) 95% quantile: the 95% quantile of |D_{n,i}(·)|, i = 1, . . . , N.
(3) Median: |D_{((N+1)/2)}(·)| for N odd, and (1/2)(|D_{(N/2)}(·)| + |D_{(N/2+1)}(·)|) for N even.
(4) Mean: (1/N) Σ_{i=1}^{N} |D_{n,i}(·)|.

Value
It returns an object of class "carcomp". An object of class "carcomp" is a list containing the following components:
Overall Imbalances a matrix containing the maximum, 95%-quantile, median, and mean of the absolute overall imbalances for the randomization method(s) to be evaluated.
Within-covariate-margin Imbalances a matrix containing the maximum, 95%-quantile, median, and mean of the absolute within-covariate-margin imbalances for the randomization method(s) to be evaluated.
Within-stratum Imbalances a matrix containing the maximum, 95%-quantile, median, and mean of the absolute within-stratum imbalances for the randomization method(s) to be evaluated.
dfmm a data frame containing the mean absolute imbalances at the overall, within-stratum, and within-covariate-margin levels for the randomization method(s) to be evaluated.
df_abm a data frame containing the absolute imbalances at the overall, within-stratum, and within-covariate-margin levels.
mechanism a character string giving the randomization method(s) to be evaluated.
n the number of patients.
iteration the number of iterations.
cov_num the number of covariates.
level_num a vector of level numbers for each covariate.
Data Type a character string giving the data type, Real or Simulated.
DataGeneration a bool vector indicating whether the data used for all the iterations is the same for the randomization method(s) to be evaluated.

References
Atkinson A C. Optimum biased coin designs for sequential clinical trials with prognostic factors[J]. Biometrika, 1982, 69(1): 61-67.
<NAME>, <NAME>. The covariate-adaptive biased coin design for balancing clinical trials in the presence of prognostic factors[J]. Biometrika, 2011, 98(3): 519-535.
<NAME>, <NAME>.
Asymptotic properties of covariate-adaptive randomization[J]. The Annals of Statistics, 2012, 40(3): 1794-1815.
<NAME>, <NAME>, <NAME>, <NAME>. carat: Covariate-Adaptive Randomization for Clinical Trials[J]. Journal of Statistical Software, 2023, 107(2): 1-47.
<NAME>, <NAME>. Sequential treatment assignment with balancing for prognostic factors in the controlled clinical trial[J]. Biometrics, 1975: 103-115.
<NAME>, <NAME>, <NAME>. A theory for testing hypotheses under covariate-adaptive randomization[J]. Biometrika, 2010, 97(2): 347-360.
<NAME>. The randomization and stratification of patients to clinical trials[J]. Journal of chronic diseases, 1974, 27(7): 365-375.

See Also
See evalRand or evalRand.sim to evaluate a specific randomization procedure.

Examples
## Compare stratified permuted block randomization and Hu and Hu's general CAR
cov_num <- 2
level_num <- c(2, 2)
pr <- rep(0.5, 4)
n <- 500
N <- 20 # << adjust according to CPU
bsize <- 4
# set weight for Hu and Hu's method; it satisfies
# (1) its length should equal cov_num + 2
omega <- c(1, 2, 1, 1)
# Assess Hu and Hu's general CAR
Obj1 <- evalRand.sim(n = n, N = N, Replace = FALSE, cov_num = cov_num,
                     level_num = level_num, pr = pr, method = "HuHuCAR",
                     omega, p = 0.85)
# Assess stratified permuted block randomization
Obj2 <- evalRand.sim(n = n, N = N, Replace = FALSE, cov_num = cov_num,
                     level_num = level_num, pr = pr, method = "StrPBR", bsize)
RES <- compRand(Obj1, Obj2)

corr.test Corrected t-test

Description
Performs a corrected t-test on treatment effects. This test follows the idea of Ma et al. (2015) <doi:10.1080/01621459.2014.922469>.

Usage
corr.test(data, conf = 0.95)

Arguments
data a data frame. It consists of patients’ profiles, treatment assignments and outputs. See getData.
conf confidence level of the interval. The default is 0.95.

Details
When the working model is the true underlying linear model, and the chosen covariate-adaptive design achieves that the overall imbalance and the marginal imbalances for all covariates are bounded in probability, we can derive the asymptotic distribution under the null hypothesis that the treatment effect of each group is the same. Subsequently, we can replace the variance estimator in a simple two-sample t-test with an adjusted variance estimator. Details can be found in Ma et al. (2015). (A brief sketch of the data layout this test consumes is given before the Examples below.)

Value
It returns an object of class "htest". An object of class "htest" is a list containing the following components:
statistic the value of the t-statistic.
p.value the p-value of the test; the null hypothesis is rejected if the p-value is less than the pre-determined significance level.
conf.int a confidence interval under the chosen level conf for the difference in treatment effect between treatment 1 and treatment 2.
estimate the estimated treatment effect difference between treatment 1 and treatment 2.
stderr the standard error of the mean (difference), used as denominator in the t-statistic formula.
method a character string indicating what type of test was performed.
data.name a character string giving the name(s) of the data.

References
<NAME>, <NAME>, <NAME>. Testing hypotheses of covariate-adaptive randomized clinical trials[J]. Journal of the American Statistical Association, 2015, 110(510): 669-680.
<NAME>, <NAME>, <NAME>, <NAME>. carat: Covariate-Adaptive Randomization for Clinical Trials[J]. Journal of Statistical Software, 2023, 107(2): 1-47.
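To make the expected data layout concrete, here is a brief sketch, not the package's internal code; the settings are illustrative, and it assumes, per the Value section of getData, that the second-to-last row of the data frame holds the numeric 1/2 assignments and the last row the outcomes.

# A brief sketch (not the package's internal code) of the data layout
# consumed by corr.test(): rows 1..cov_num are covariate profiles, the
# next row holds assignments (1/2), and the last row holds outcomes.
set.seed(1)
dat <- getData(n = 200, cov_num = 2, level_num = c(2, 2), pr = rep(0.5, 4),
               type = "linear", beta = rep(0.5, 4), mu1 = 0, mu2 = 0.5,
               sigma = 1, method = "StrBCD", p = 0.85)
assig <- unlist(dat[nrow(dat) - 1, ])  # treatment labels 1 and 2
y     <- unlist(dat[nrow(dat), ])      # generated outcomes
# ordinary two-sample t-test on the same split; corr.test() replaces its
# variance estimator with one valid under covariate-adaptive randomization
t.test(y[assig == 1], y[assig == 2])
corr.test(dat)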
Examples
## generate data
set.seed(100)
n = 1000
cov_num = 5
level_num = c(2, 2, 2, 2, 2)
pr = rep(0.5, 10)
beta = c(0.1, 0.4, 0.3, 0.2, 0.5, 0.5, 0.4, 0.3, 0.2, 0.1)
omega = c(0.1, 0.1, rep(0.8 / 5, times = 5))
mu1 = 0
mu2 = 0.7
sigma = 1
type = "linear"
p = 0.85
dataH = getData(n, cov_num, level_num, pr, type, beta,
                mu1, mu2, sigma, "HuHuCAR", omega, p)
# run the corrected t-test
HHct = corr.test(dataH)
HHct

DoptBCD Atkinson’s D_A-optimal Biased Coin Design

Description
Allocates patients to one of two treatments based on the D_A-optimal biased coin design in the presence of prognostic factors proposed by <NAME> (1982) <doi:10.2307/2335853>.

Usage
DoptBCD(data)

Arguments
data a data frame. A row of the data frame corresponds to the covariate profile of a patient.

Details
Consider an experiment involving n patients. Assuming a linear model between response and covariates, Atkinson’s D_A-optimal biased coin design sequentially assigns patients to minimize the variance of estimated treatment effects. Supposing j patients have been assigned, the probability of assigning the (j + 1)th patient to treatment 1 is

[1 − (1; x^T_{j+1})(F^T_j F_j)^{−1} b_j]^2 / ([1 − (1; x^T_{j+1})(F^T_j F_j)^{−1} b_j]^2 + [1 + (1; x^T_{j+1})(F^T_j F_j)^{−1} b_j]^2),

where X = (x_i, i = 1, . . . , j) and x_i denotes the covariate profile of the ith patient; F_j = [1_j; X] is the information matrix; and b^T_j = (2T_j − 1_j)^T F_j, where T_j = (T_1, . . . , T_j) is a sequence containing the first j patients’ allocations. Details of the procedure can be found in A.C. Atkinson (1982). (A small numerical sketch of this probability is given before the Examples below.)

Value
It returns an object of class "carandom". An object of class "carandom" is a list containing the following components:
datanumeric a bool indicating whether the data is a numeric data frame.
covariates a character string giving the name(s) of the included covariates.
strt_num the number of strata.
cov_num the number of covariates.
level_num a vector of level numbers for each covariate.
n the number of patients.
Cov_Assig a (cov_num + 1) * n matrix containing covariate profiles for all patients and the corresponding assignments. The ith column represents the ith patient. The first cov_num rows include patients’ covariate profiles, and the last row contains the assignments.
assignments the randomization sequence.
All strata a matrix containing all strata involved.
Diff a matrix with only one column, containing the final differences at the overall, within-stratum, and within-covariate-margin levels.
method a character string describing the randomization procedure to be used.
Data Type a character string giving the data type, Real or Simulated.
framework the framework of the used randomization procedure: stratified randomization, or model-based method.
data the data frame.

References
Atkinson A C. Optimum biased coin designs for sequential clinical trials with prognostic factors[J]. Biometrika, 1982, 69(1): 61-67.
<NAME>, <NAME>, <NAME>, <NAME>. carat: Covariate-Adaptive Randomization for Clinical Trials[J]. Journal of Statistical Software, 2023, 107(2): 1-47.

See Also
See DoptBCD.sim for allocating patients with covariate data generating mechanism; See DoptBCD.ui for the command-line user interface.
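As a concrete illustration of the allocation probability in the Details section above, here is a brief sketch, not the package's internal code; the helper dopt_prob1() and all settings are hypothetical, numeric covariates are assumed, and assignments are recoded to 0/1 so that 2T − 1 takes values ±1.

# A brief sketch (not the package's internal code) of the D_A-optimal
# allocation probability in the Details section. Assignments are recoded
# to 0/1 (e.g. Tj01 <- 2 - assignments for carat's 1/2 coding) so that
# (2 * Tj01 - 1) takes values +1/-1.
dopt_prob1 <- function(X, Tj01, x_new) {
  Fj <- cbind(1, X)                        # F_j = [1_j; X]
  b  <- t(Fj) %*% (2 * Tj01 - 1)           # b_j = F_j^T (2 T_j - 1_j)
  z  <- drop(c(1, x_new) %*% solve(t(Fj) %*% Fj, b))
  (1 - z)^2 / ((1 - z)^2 + (1 + z)^2)      # probability of treatment 1
}
set.seed(1)
X  <- matrix(rnorm(20), nrow = 10)         # 10 patients, 2 numeric covariates
Tj <- rbinom(10, 1, 0.5)                   # 0/1 assignment codes
dopt_prob1(X, Tj, rnorm(2))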
Examples
# a simple use
## Real Data
df <- data.frame("gender" = sample(c("female", "male"), 100, TRUE, c(1 / 3, 2 / 3)),
                 "age" = sample(c("0-30", "30-50", ">50"), 100, TRUE),
                 "jobs" = sample(c("stu.", "teac.", "others"), 100, TRUE),
                 stringsAsFactors = TRUE)
Res <- DoptBCD(df)
## view the output
Res
## view all patients' profiles and assignments
## Res$Cov_Assig

## Simulated Data
n <- 1000
cov_num <- 2
level_num <- c(2, 5)
# Set pr to follow two tips:
# (1) length of pr should be sum(level_num);
# (2) sum of probabilities for each margin should be 1.
pr <- c(0.4, 0.6, rep(0.2, times = 5))
Res.sim <- DoptBCD.sim(n, cov_num, level_num, pr)
## view the output
Res.sim
## view the differences between treatment 1 and treatment 2
## at the overall, within-stratum, and within-covariate-margin levels
Res.sim$Diff

N <- 5 # << adjust according to your CPU
n <- 100
cov_num <- 2
level_num <- c(3, 5) # the length should correspond to cov_num
## Set pr to follow two tips:
## (1) length of pr should be sum(level_num);
## (2) sum of probabilities for each margin should be 1
pr <- c(0.3, 0.4, 0.3, rep(0.2, times = 5))
omega <- c(0.2, 0.2, rep(0.6 / cov_num, times = cov_num))
## generate containers to contain Diff
DH <- matrix(NA, ncol = N, nrow = 1 + prod(level_num) + sum(level_num))
DA <- matrix(NA, ncol = N, nrow = 1 + prod(level_num) + sum(level_num))
for (i in 1 : N) {
  result <- HuHuCAR.sim(n, cov_num, level_num, pr, omega)
  resultA <- DoptBCD.sim(n, cov_num, level_num, pr)
  DH[ , i] <- result$Diff
  DA[ , i] <- resultA$Diff
}
## do some analysis
require(dplyr)
## analyze the overall imbalance
Ana_O <- matrix(NA, nrow = 2, ncol = 3)
rownames(Ana_O) <- c("HuHuCAR", "DoptBCD")
colnames(Ana_O) <- c("mean", "median", "95%quantile")
temp <- DH[1, ] %>% abs
tempA <- DA[1, ] %>% abs
Ana_O[1, ] <- c((temp %>% mean), (temp %>% median), (temp %>% quantile(0.95)))
Ana_O[2, ] <- c((tempA %>% mean), (tempA %>% median), (tempA %>% quantile(0.95)))
## analyze the within-stratum imbalances
tempW <- DH[2 : (1 + prod(level_num)), ] %>% abs
tempWA <- DA[2 : (1 + prod(level_num)), ] %>% abs
Ana_W <- matrix(NA, nrow = 2, ncol = 3)
rownames(Ana_W) <- c("HuHuCAR", "DoptBCD")
colnames(Ana_W) <- c("mean", "median", "95%quantile")
Ana_W[1, ] = c((tempW %>% apply(1, mean) %>% mean),
               (tempW %>% apply(1, median) %>% mean),
               (tempW %>% apply(1, mean) %>% quantile(0.95)))
Ana_W[2, ] = c((tempWA %>% apply(1, mean) %>% mean),
               (tempWA %>% apply(1, median) %>% mean),
               (tempWA %>% apply(1, mean) %>% quantile(0.95)))
## analyze the marginal imbalance
tempM <- DH[(1 + prod(level_num) + 1) : (1 + prod(level_num) + sum(level_num)), ] %>% abs
tempMA <- DA[(1 + prod(level_num) + 1) : (1 + prod(level_num) + sum(level_num)), ] %>% abs
Ana_M <- matrix(NA, nrow = 2, ncol = 3)
rownames(Ana_M) <- c("HuHuCAR", "DoptBCD")
colnames(Ana_M) <- c("mean", "median", "95%quantile")
Ana_M[1, ] = c((tempM %>% apply(1, mean) %>% mean),
               (tempM %>% apply(1, median) %>% mean),
               (tempM %>% apply(1, mean) %>% quantile(0.95)))
Ana_M[2, ] = c((tempMA %>% apply(1, mean) %>% mean),
               (tempMA %>% apply(1, median) %>% mean),
               (tempMA %>% apply(1, mean) %>% quantile(0.95)))
AnaHP <- list(Ana_O, Ana_M, Ana_W)
names(AnaHP) <- c("Overall", "Marginal", "Within-stratum")
AnaHP

DoptBCD.sim Atkinson’s D_A-optimal Biased Coin Design with Covariate Data Generating Mechanism

Description
Allocates patients, generated by simulating covariate profiles under the assumption of independence between covariates and levels within each covariate, to one of two treatments based on the D_A-optimal biased coin design in the presence of
prognostic factors, as proposed by <NAME> (1982) <doi:10.2307/2335853>.

Usage
DoptBCD.sim(n = 1000, cov_num = 2, level_num = c(2, 2), pr = rep(0.5, 4))

Arguments
n the number of patients. The default is 1000.
cov_num the number of covariates. The default is 2.
level_num a vector of level numbers for each covariate. Hence the length of level_num should be equal to the number of covariates. The default is c(2, 2).
pr a vector of probabilities. Under the assumption of independence between covariates, pr is a vector containing probabilities for each level of each covariate. The length of pr should correspond to the number of all levels, and the sum of the probabilities for each margin should be 1. The default is rep(0.5, 4), which corresponds to cov_num = 2 and level_num = c(2, 2).

Details
See DoptBCD.

Value
See DoptBCD.

References
Atkinson A C. Optimum biased coin designs for sequential clinical trials with prognostic factors[J]. Biometrika, 1982, 69(1): 61-67.
<NAME>, <NAME>, <NAME>, <NAME>. carat: Covariate-Adaptive Randomization for Clinical Trials[J]. Journal of Statistical Software, 2023, 107(2): 1-47.

See Also
See DoptBCD for allocating patients with complete covariate data; See DoptBCD.ui for the command-line user interface.

DoptBCD.ui Command-line User Interface Using Atkinson’s D_A-optimal Biased Coin Design

Description
A call to the user-interface function used to allocate patients to one of two treatments using Atkinson’s D_A-optimal biased coin design proposed by <NAME> (1982) <doi:10.2307/2335853>.

Usage
DoptBCD.ui(path, folder = "DoptBCD")

Arguments
path the path in which a folder used to store variables will be created.
folder name of the folder. If it is the default, a folder named "DoptBCD" will be created.

Details
See DoptBCD.

Value
It returns an object of class "carseq". The function print is used to obtain results. The generic accessor functions assignment, covariate, cov_num, cov_profile and others extract various useful features of the value returned by DoptBCD.ui.

Note
This function provides a command-line user interface, and users should follow the prompts to enter data including covariates, as well as levels for each covariate and the covariate profile of the new patient.

References
Atkinson A C. Optimum biased coin designs for sequential clinical trials with prognostic factors[J]. Biometrika, 1982, 69(1): 61-67.
<NAME>, <NAME>, <NAME>, <NAME>. carat: Covariate-Adaptive Randomization for Clinical Trials[J]. Journal of Statistical Software, 2023, 107(2): 1-47.

See Also
See DoptBCD for allocating patients with complete covariate data; See DoptBCD.sim for allocating patients with covariate data generating mechanism.

evalPower Evaluation of Tests and Randomization Procedures through Power

Description
Returns powers and a plot of the chosen test and method under different treatment effects.

Usage
evalPower(n, cov_num, level_num, pr, type, beta, di = seq(0, 0.5, 0.1), sigma = 1, Iternum, sl = 0.05, method = c("HuHuCAR", "PocSimMIN", "StrBCD", "StrPBR", "DoptBCD", "AdjBCD"), test = c("boot.test", "corr.test", "rand.test"), plot = TRUE, ...)

Arguments
n the number of patients.
cov_num the number of covariates.
level_num a vector of level numbers for each covariate. Hence the length of level_num should be equal to the number of covariates.
pr a vector of probabilities. Under the assumption of independence between covariates, pr is a vector containing probabilities for each level of each covariate.
The length of pr should correspond to the number of all levels, and the sum of the probabilities for each margin should be 1.
type a data-generating method. Optional input: "linear" or "logit".
beta a vector of coefficients of covariates. The length of beta must correspond to the sum of all covariates’ levels.
di a value or a vector of values of difference in treatment effects. The default value is a sequence from 0 to 0.5 with increments of 0.1. The value(s) forms the horizontal axis of the plot.
sigma the error variance for the linear model. The default is 1. This should be a positive value and is only used when type = "linear".
Iternum an integer. It is the number of iterations required for the power calculation.
sl the significance level. If the p-value returned by the test is less than sl, the null hypothesis will be rejected. The default value is 0.05.
method the randomization procedure to be used for power calculation. This package provides power calculation for "HuHuCAR", "PocSimMIN", "StrBCD", "StrPBR", "AdjBCD", and "DoptBCD".
test a character string specifying the alternative tests used to verify the hypothesis; must be one of "boot.test", "corr.test" or "rand.test", which are the bootstrap t-test, the corrected t-test, and the randomization test, respectively. The arguments associated with the testing function can be specified; otherwise, the default value will be used.
plot a bool. It indicates whether to plot or not. Optional input: TRUE or FALSE.
... arguments to be passed to method. These arguments depend on the randomization method used and the following arguments are accepted:
omega a vector of weights at the overall, within-stratum, and within-covariate-margin levels. It is required that at least one element is larger than 0. Note that omega is only needed when "HuHuCAR" is to be used.
weight a vector of weights for within-covariate-margin imbalances. It is required that at least one element is larger than 0. Note that weight is only needed when "PocSimMIN" is to be used.
p the biased coin probability. p should be larger than 1/2 and less than 1. Note that p is only needed when "HuHuCAR", "PocSimMIN" and "StrBCD" are to be used.
a a design parameter governing the degree of randomness. Note that a is only needed when "AdjBCD" is to be used.
bsize the block size for the stratified randomization. It is required to be a multiple of 2. Note that bsize is only needed when "StrPBR" is to be used.
B an integer. It is the number of bootstrap samples. It is needed only when test is boot.test.
Reps an integer. It is the number of randomized replications used in the randomization test. It is needed only when test is rand.test.
nthreads the number of threads to be used in parallel computation. This is needed only under rand.test and boot.test. The default is 1.

Value
This function returns a list. The first element is a data frame representing the powers of the chosen test under different values of treatment effects. The second element is the execution time. An optional element is the plot of power, in which di forms the horizontal axis.
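To make the power computation concrete, here is a brief sketch, not the package's internal code, of the Monte Carlo loop that evalPower() automates for a single treatment-effect difference; all settings are illustrative: simulate data with getData() under a shifted group mean, apply the chosen test, and count rejections at level sl.

# A brief sketch (not the package's internal code) of the Monte Carlo power
# computation that evalPower() automates, for one treatment-effect difference d.
set.seed(1)
sl <- 0.05; d <- 0.3; Iternum <- 50 # kept small for demonstration
rej <- replicate(Iternum, {
  dat <- getData(n = 200, cov_num = 2, level_num = c(2, 2), pr = rep(0.5, 4),
                 type = "linear", beta = rep(0.5, 4), mu1 = 0, mu2 = d,
                 sigma = 1, method = "StrBCD", p = 0.85)
  corr.test(dat)$p.value < sl
})
mean(rej) # Monte Carlo estimate of the power at difference d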
Examples
## settings
set.seed(2019)
n = 100 # << for demonstration, it is suggested to be larger than 1000
cov_num = 5
level_num = c(2, 2, 2, 2, 2)
pr = rep(0.5, 10)
beta = c(0.1, 0.4, 0.3, 0.2, 0.5, 0.5, 0.4, 0.3, 0.2, 0.1)
omega = c(0.1, 0.1, rep(0.8 / 5, times = 5))
di = seq(0, 0.5, 0.1)
sigma = 1
type = "linear"
p = 0.85
Iternum = 10 # << for demonstration, it is suggested to be around 1000
sl = 0.05
Reps = 10 # << for demonstration, it is suggested to be 200
# Evaluation of Power
library("ggplot2")
Strtp = evalPower(n, cov_num, level_num, pr, type, beta, di, sigma,
                  Iternum, sl, "HuHuCAR", "rand.test", TRUE, omega, p, Reps,
                  nthreads = 1)
Strtp

evalRand Evaluation of Randomization Procedures

Description
Evaluates a specific randomization procedure based on several different quantities of imbalances.

Usage
evalRand(data, method = "HuHuCAR", N = 500, ...)

Arguments
data a data frame. A row of the data frame corresponds to the covariate profile of a patient.
N the iteration number. The default is 500.
method the randomization procedure to be evaluated. This package provides assessment for "HuHuCAR", "PocSimMIN", "StrBCD", "StrPBR", "AdjBCD", and "DoptBCD".
... arguments to be passed to method. These arguments depend on the randomization method assessed and the following arguments are accepted:
omega a vector of weights at the overall, within-stratum, and within-covariate-margin levels. It is required that at least one element is larger than 0. Note that omega is only needed when "HuHuCAR" is to be assessed.
weight a vector of weights for within-covariate-margin imbalances. It is required that at least one element is larger than 0. Note that weight is only needed when "PocSimMIN" is to be assessed.
p the biased coin probability. p should be larger than 1/2 and less than 1. Note that p is only needed when "HuHuCAR", "PocSimMIN" and "StrBCD" are to be assessed.
a a design parameter governing the degree of randomness. Note that a is only needed when "AdjBCD" is to be assessed.
bsize the block size for stratified permuted block randomization. It is required to be a multiple of 2. Note that bsize is only needed when "StrPBR" is to be assessed.

Details
The randomization procedure specified by method is applied to the data N times.

Value
It returns an object of class "careval". An object of class "careval" is a list containing the following components:
datanumeric a bool indicating whether the data is a numeric data frame.
weight a vector giving the weights imposed on each covariate.
bsize the block size.
covariates a character string giving the name(s) of the included covariates.
Assig an n*N matrix containing assignments for each patient for N iterations.
strt_num the number of strata.
All strata a matrix containing all strata involved.
Imb a matrix containing the maximum, 95%-quantile, median, and mean of the absolute imbalances at the overall, within-stratum, and within-covariate-margin levels. Note that we refer users to the ith column of `All strata` for details of level i, i = 1, . . . , strt_num.
SNUM a matrix with N columns containing the number of patients in each stratum for each iteration.
method the randomization method to be evaluated.
cov_num the number of covariates.
level_num a vector of level numbers for each covariate.
n the number of patients.
iteration the number of iterations.
Data Type the data type. Real or Simulated.
DIF a matrix containing the final differences at the overall, within-stratum, and within-covariate-margin levels for each iteration.
data the data frame.

References
<NAME>.
Optimum biased coin designs for sequential clinical trials with prognostic factors[J]. Biometrika, 1982, 69(1): 61-67.
<NAME>, <NAME>. The covariate-adaptive biased coin design for balancing clinical trials in the presence of prognostic factors[J]. Biometrika, 2011, 98(3): 519-535.
<NAME>, <NAME>. Asymptotic properties of covariate-adaptive randomization[J]. The Annals of Statistics, 2012, 40(3): 1794-1815.
<NAME>, <NAME>, <NAME>, <NAME>. carat: Covariate-Adaptive Randomization for Clinical Trials[J]. Journal of Statistical Software, 2023, 107(2): 1-47.
<NAME>, <NAME>. Sequential treatment assignment with balancing for prognostic factors in the controlled clinical trial[J]. Biometrics, 1975: 103-115.
<NAME>, <NAME>, <NAME>. A theory for testing hypotheses under covariate-adaptive randomization[J]. Biometrika, 2010, 97(2): 347-360.
<NAME>. The randomization and stratification of patients to clinical trials[J]. Journal of chronic diseases, 1974, 27(7): 365-375.

See Also
See evalRand.sim to evaluate a randomization procedure with covariate data generating mechanism.

Examples
# a simple use
## Assess by real data
## create a dataframe
df <- data.frame("gender" = sample(c("female", "male"), 1000, TRUE, c(1 / 3, 2 / 3)),
                 "age" = sample(c("0-30", "30-50", ">50"), 1000, TRUE),
                 "jobs" = sample(c("stu.", "teac.", "others"), 1000, TRUE),
                 stringsAsFactors = TRUE)
Res <- evalRand(data = df, method = "HuHuCAR", N = 500,
                omega = c(1, 2, rep(1, ncol(df))), p = 0.85)
## view the output
Res
## view all patients' assignments
Res$Assig

## Assess by simulated data
cov_num <- 3
level_num <- c(2, 3, 5)
pr <- c(0.35, 0.65, 0.25, 0.35, 0.4, 0.25, 0.15, 0.2, 0.15, 0.25)
n <- 1000
N <- 50
omega = c(1, 2, 1, 1, 2)
# assess Hu and Hu's procedure with the same group of patients
Res.sim <- evalRand.sim(n = n, N = N, Replace = FALSE, cov_num = cov_num,
                        level_num = level_num, pr = pr, method = "HuHuCAR",
                        omega, p = 0.85)

## Compare four procedures
cov_num <- 3
level_num <- c(2, 10, 2)
pr <- c(rep(0.5, times = 2), rep(0.1, times = 10), rep(0.5, times = 2))
n <- 100
N <- 200 # << adjust according to CPU
bsize <- 4
## set weights for HuHuCAR
omega <- c(1, 2, rep(1, cov_num))
## set weights for PocSimMIN
weight = rep(1, cov_num)
## set biased probability
p = 0.80
# assess Hu and Hu's procedure
RH <- evalRand.sim(n = n, N = N, Replace = FALSE, cov_num = cov_num,
                   level_num = level_num, pr = pr, method = "HuHuCAR",
                   omega = omega, p = p)
# assess Pocock and Simon's method
RPS <- evalRand.sim(n = n, N = N, Replace = FALSE, cov_num = cov_num,
                    level_num = level_num, pr = pr, method = "PocSimMIN",
                    weight, p = p)
# assess Shao's procedure
RS <- evalRand.sim(n = n, N = N, Replace = FALSE, cov_num = cov_num,
                   level_num = level_num, pr = pr, method = "StrBCD", p = p)
# assess stratified randomization
RSR <- evalRand.sim(n = n, N = N, Replace = FALSE, cov_num = cov_num,
                    level_num = level_num, pr = pr, method = "StrPBR", bsize)
# create containers
C_M = C_O = C_WS = matrix(NA, nrow = 4, ncol = 4)
colnames(C_M) = colnames(C_O) = colnames(C_WS) = c("max", "95%quan", "med", "mean")
rownames(C_M) = rownames(C_O) = rownames(C_WS) = c("HH", "PocSim", "Shao", "StraRand")
# assess the overall imbalance
C_O[1, ] = RH$Imb[1, ]
C_O[2, ] = RPS$Imb[1, ]
C_O[3, ] = RS$Imb[1, ]
C_O[4, ] = RSR$Imb[1, ]
# view the result
C_O
# assess the marginal imbalances
C_M[1, ] = apply(RH$Imb[(2 + RH$strt_num) : (1 + RH$strt_num + sum(level_num)), ], 2, mean)
C_M[2, ] = apply(RPS$Imb[(2 + RPS$strt_num) : (1 + RPS$strt_num + sum(level_num)), ], 2, mean)
C_M[3, ] =
apply(RS$Imb[(2 + RS$strt_num) : (1 + RS$strt_num + sum(level_num)), ], 2, mean)
C_M[4, ] = apply(RSR$Imb[(2 + RSR$strt_num) : (1 + RSR$strt_num + sum(level_num)), ], 2, mean)
# view the result
C_M
# assess the within-stratum imbalances
C_WS[1, ] = apply(RH$Imb[2 : (1 + RH$strt_num), ], 2, mean)
C_WS[2, ] = apply(RPS$Imb[2 : (1 + RPS$strt_num), ], 2, mean)
C_WS[3, ] = apply(RS$Imb[2 : (1 + RS$strt_num), ], 2, mean)
C_WS[4, ] = apply(RSR$Imb[2 : (1 + RSR$strt_num), ], 2, mean)
# view the result
C_WS
# Compare the four procedures through plots
meth = rep(c("Hu", "PS", "Shao", "STR"), times = 3)
shape <- rep(1 : 4, times = 3)
crt <- rep(1 : 3, each = 4)
crt_c <- rep(c("O", "M", "WS"), each = 4)
mean <- c(C_O[, 4], C_M[, 4], C_WS[, 4])
df_1 <- data.frame(meth, shape, crt, crt_c, mean, stringsAsFactors = TRUE)
require(ggplot2)
p1 <- ggplot(df_1, aes(x = meth, y = mean, color = crt_c, group = crt,
                       linetype = crt_c, shape = crt_c)) +
  geom_line(size = 1) +
  geom_point(size = 2) +
  xlab("method") +
  ylab("absolute mean") +
  theme(plot.title = element_text(hjust = 0.5))
p1

evalRand.sim Evaluation of Randomization Procedures with Covariate Data Generating Mechanism

Description
Evaluates a randomization procedure based on several different quantities of imbalances by simulating patients’ covariate profiles under the assumption of independence between covariates and levels within each covariate.

Usage
evalRand.sim(n = 1000, N = 500, Replace = FALSE, cov_num = 2, level_num = c(2, 2), pr = rep(0.5, 4), method = "HuHuCAR", ...)

Arguments
N the iteration number. The default is 500.
n the number of patients. The default is 1000.
Replace a bool. If Replace = FALSE, the function performs the clinical trial design for N iterations on one group of patients. If Replace = TRUE, the function performs the clinical trial design for N iterations on N different groups of patients.
cov_num the number of covariates. The default is 2.
level_num a vector of level numbers for each covariate. Hence the length of level_num should be equal to the number of covariates. The default is c(2, 2).
pr a vector of probabilities. Under the assumption of independence between covariates, pr is a vector containing probabilities for each level of each covariate. The length of pr should correspond to the number of all levels, and the sum of the probabilities for each margin should be 1. The default is rep(0.5, 4), which corresponds to cov_num = 2 and level_num = c(2, 2).
method the randomization procedure to be evaluated. This package provides assessment for "HuHuCAR", "PocSimMIN", "StrBCD", "StrPBR", "AdjBCD", and "DoptBCD".
... arguments to be passed to method. These arguments depend on the randomization method assessed and the following arguments are accepted:
omega a vector of weights at the overall, within-stratum, and within-covariate-margin levels. It is required that at least one element is larger than 0. Note that omega is only needed when "HuHuCAR" is to be assessed.
weight a vector of weights for within-covariate-margin imbalances. It is required that at least one element is larger than 0. Note that weight is only needed when "PocSimMIN" is to be assessed.
p the biased coin probability. p should be larger than 1/2 and less than 1. Note that p is only needed when "HuHuCAR", "PocSimMIN" and "StrBCD" are to be assessed.
a a design parameter governing the degree of randomness. Note that a is only needed when "AdjBCD" is to be assessed.
bsize the block size for stratified permuted block randomization. It is required to be a multiple of 2.
Note that bsize is only needed when "StrPBR" is to be assessed.

Details
See evalRand.

Value
See evalRand.

See Also
See evalRand to evaluate a randomization procedure with complete covariate data.

getData Data Generation

Description
Generates continuous or binary outcomes given patients’ covariates, the underlying model and the randomization procedure.

Usage
getData(n, cov_num, level_num, pr, type, beta, mu1, mu2, sigma = 1, method = "HuHuCAR", ...)

Arguments
n the number of patients.
cov_num the number of covariates.
level_num a vector of level numbers for each covariate. Hence the length of level_num should be equal to the number of covariates.
pr a vector of probabilities. Under the assumption of independence between covariates, pr is a vector containing probabilities for each level of each covariate. The length of pr should correspond to the number of all levels, and the sum of the probabilities for each margin should be 1.
type a data-generating method. Optional input: "linear" or "logit".
beta a vector of coefficients of covariates. The length of beta must correspond to the sum of all covariates’ levels.
mu1,mu2 main effects of treatment 1 and treatment 2.
sigma the error variance for the linear model. The default is 1. This should be a positive value and is only used when type = "linear".
method the randomization procedure to be used for generating randomization sequences. This package provides data-generating functions for "HuHuCAR", "PocSimMIN", "StrBCD", "StrPBR", "AdjBCD", and "DoptBCD".
... arguments to be passed to method. These arguments depend on the randomization method used and the following arguments are accepted:
omega a vector of weights at the overall, within-stratum, and within-covariate-margin levels. It is required that at least one element is larger than 0. Note that omega is only needed when "HuHuCAR" is to be used.
weight a vector of weights for within-covariate-margin imbalances. It is required that at least one element is larger than 0. Note that weight is only needed when "PocSimMIN" is to be used.
p the biased coin probability. p should be larger than 1/2 and less than 1. Note that p is only needed when "HuHuCAR", "PocSimMIN" and "StrBCD" are to be used.
a a design parameter governing the degree of randomness. Note that a is only needed when "AdjBCD" is to be used.
bsize the block size for stratified randomization. It is required to be a multiple of 2. Note that bsize is only needed when "StrPBR" is to be used.

Details
To generate continuous outcomes, we use the linear model

y_i = μ_j + x_i^T β + ε_i;

to generate binary outcomes, we use the logit link function

P(y_i = 1) = exp{μ_j + x_i^T β} / (1 + exp{μ_j + x_i^T β}),

where j indicates that patient i belongs to treatment j.

Value
getData returns a (cov_num + 2) × n data frame. The first cov_num rows represent patients’ profiles. The next row consists of patients’ assignments and the final row consists of generated outcomes.

Examples
# Parameters' Setting
set.seed(100)
n = 1000
cov_num = 5
level_num = c(2, 2, 2, 2, 2)
beta = c(1, 4, 3, 2, 5, 5, 4, 3, 2, 1)
mu1 = 0
mu2 = 0
sigma = 1
type = "linear"
p = 0.85
omega = c(0.1, 0.1, rep(0.8 / 5, times = 5))
pr = rep(0.5, 10)
# Data Generation
dataH = getData(n, cov_num, level_num, pr, type, beta,
                mu1, mu2, sigma, "HuHuCAR", omega, p)
dataH[1:(cov_num + 2), 1:5]

HuHuCAR Hu and Hu’s General Covariate-Adaptive Randomization

Description
Allocates patients to one of two treatments using Hu and Hu’s general covariate-adaptive randomization proposed by <NAME>, <NAME> (2012) <doi:10.1214/12-AOS983>.
Usage
HuHuCAR(data, omega = NULL, p = 0.85)

Arguments
data a data frame. A row of the data frame corresponds to the covariate profile of a patient.
omega a vector of weights at the overall, within-stratum, and within-covariate-margin levels. It is required that at least one element is larger than 0. If omega = NULL (default), the overall, within-stratum, and within-covariate-margin imbalances are weighted with proportions 0.2, 0.3, and 0.5/cov_num for each covariate-margin, respectively, where cov_num is the number of covariates of interest.
p the biased coin probability. p should be larger than 1/2 and less than 1. The default is 0.85.

Details
Consider I covariates and m_i levels for the ith covariate, i = 1, . . . , I. T_j is the assignment of the jth patient and Z_j = (k_1, . . . , k_I) indicates the covariate profile of this patient, j = 1, . . . , n. For convenience, (k_1, . . . , k_I) and (i; k_i) denote the stratum and margin, respectively. D_j(.) is the difference between the numbers of patients assigned to treatment 1 and treatment 2 at the corresponding levels after j patients have been assigned. The general covariate-adaptive randomization procedure is as follows:
(1) The first patient is assigned to treatment 1 with probability 1/2;
(2) Suppose that j − 1 patients have been assigned (1 < j ≤ n) and the jth patient falls within (k*_1, . . . , k*_I);
(3) If the jth patient were assigned to treatment 1, then the potential overall, within-covariate-margin, and within-stratum differences between the two treatments would be

D_j^{(1)} = D_{j−1} + 1,
D_j^{(1)}(i; k*_i) = D_{j−1}(i; k*_i) + 1, and
D_j^{(1)}(k*_1, . . . , k*_I) = D_{j−1}(k*_1, . . . , k*_I) + 1

for the overall level, margin (i; k*_i), and stratum (k*_1, . . . , k*_I), respectively. Similarly, the potential differences at the overall, within-covariate-margin, and within-stratum levels would be obtained if the jth patient were assigned to treatment 2;
(4) An imbalance measure is defined by

Imb_j^{(l)} = ω_o [D_j^{(l)}]^2 + Σ_{i=1}^{I} ω_{m,i} [D_j^{(l)}(i; k*_i)]^2 + ω_s [D_j^{(l)}(k*_1, . . . , k*_I)]^2, l = 1, 2;

(5) Conditional on the assignments of the first (j − 1) patients as well as the covariate profiles of the first j patients, assign the jth patient to treatment 1 with probability

P(T_j = 1 | Z_j, T_1, . . . , T_{j−1}) = q for Imb_j^{(1)} > Imb_j^{(2)},
P(T_j = 1 | Z_j, T_1, . . . , T_{j−1}) = p for Imb_j^{(1)} < Imb_j^{(2)}, and
P(T_j = 1 | Z_j, T_1, . . . , T_{j−1}) = 0.5 for Imb_j^{(1)} = Imb_j^{(2)},

where q = 1 − p. Details of the procedure can be found in Hu and Hu (2012).

Value
It returns an object of class "carandom". An object of class "carandom" is a list containing the following components:
datanumeric a bool indicating whether the data is a numeric data frame.
covariates a character string giving the name(s) of the included covariates.
strt_num the number of strata.
cov_num the number of covariates.
level_num a vector of level numbers for each covariate.
n the number of patients.
Cov_Assig a (cov_num + 1) * n matrix containing covariate profiles for all patients and the corresponding assignments. The ith column represents the ith patient. The first cov_num rows include patients’ covariate profiles, and the last row contains the assignments.
assignments the randomization sequence.
All strata a matrix containing all strata involved.
Diff a matrix with only one column, containing the final differences at the overall, within-stratum, and within-covariate-margin levels.
method a character string describing the randomization procedure to be used.
Data Type a character string giving the data type, Real or Simulated.
weight a vector giving the weights imposed on each covariate.
framework the framework of the used randomization procedure: stratified randomization, or model-based method.
data the data frame.

References
<NAME>, <NAME>. Asymptotic properties of covariate-adaptive randomization[J]. The Annals of Statistics, 2012, 40(3): 1794-1815.
<NAME>, <NAME>, <NAME>, <NAME>. carat: Covariate-Adaptive Randomization for Clinical Trials[J]. Journal of Statistical Software, 2023, 107(2): 1-47.

See Also
See HuHuCAR.sim for allocating patients with covariate data generating mechanism; See HuHuCAR.ui for the command-line user interface.

Examples
# a simple use
## Real Data
## create a dataframe
df <- data.frame("gender" = sample(c("female", "male"), 1000, TRUE, c(1 / 3, 2 / 3)),
                 "age" = sample(c("0-30", "30-50", ">50"), 1000, TRUE),
                 "jobs" = sample(c("stu.", "teac.", "others"), 1000, TRUE),
                 stringsAsFactors = TRUE)
omega <- c(1, 2, rep(1, 3))
Res <- HuHuCAR(data = df, omega)
## view the output
Res
## view all patients' profiles and assignments
Res$Cov_Assig

## Simulated data
cov_num <- 3
level_num <- c(2, 3, 3)
pr <- c(0.4, 0.6, 0.3, 0.4, 0.3, 0.4, 0.3, 0.3)
omega <- rep(0.2, times = 5)
Res.sim <- HuHuCAR.sim(n = 100, cov_num, level_num, pr, omega)
## view the output
Res.sim
## view the details of the differences
Res.sim$Diff

N <- 100 # << adjust according to your CPU
n <- 1000
cov_num <- 3
level_num <- c(2, 3, 5) # the length should correspond to cov_num
# Set pr to follow two tips:
# (1) length of pr should be sum(level_num);
# (2) sum of probabilities for each margin should be 1.
pr <- c(0.4, 0.6, 0.3, 0.4, 0.3, rep(0.2, times = 5))
omega <- c(0.2, 0.2, rep(0.6 / cov_num, times = cov_num))
# Set omega0 = omegaS = 0
omegaP <- c(0, 0, rep(1 / cov_num, times = cov_num))
## generate containers to contain Diff
DH <- matrix(NA, ncol = N, nrow = 1 + prod(level_num) + sum(level_num))
DP <- matrix(NA, ncol = N, nrow = 1 + prod(level_num) + sum(level_num))
for (i in 1 : N) {
  result <- HuHuCAR.sim(n, cov_num, level_num, pr, omega)
  resultP <- HuHuCAR.sim(n, cov_num, level_num, pr, omegaP)
  DH[ , i] <- result$Diff
  DP[ , i] <- resultP$Diff
}
## do some analysis
require(dplyr)
## analyze the overall imbalance
Ana_O <- matrix(NA, nrow = 2, ncol = 3)
rownames(Ana_O) <- c("NEW", "PS")
colnames(Ana_O) <- c("mean", "median", "95%quantile")
temp <- DH[1, ] %>% abs
tempP <- DP[1, ] %>% abs
Ana_O[1, ] <- c((temp %>% mean), (temp %>% median), (temp %>% quantile(0.95)))
Ana_O[2, ] <- c((tempP %>% mean), (tempP %>% median), (tempP %>% quantile(0.95)))
## analyze the within-stratum imbalances
tempW <- DH[2 : (1 + prod(level_num)), ] %>% abs
tempWP <- DP[2 : (1 + prod(level_num)), ] %>% abs
Ana_W <- matrix(NA, nrow = 2, ncol = 3)
rownames(Ana_W) <- c("NEW", "PS")
colnames(Ana_W) <- c("mean", "median", "95%quantile")
Ana_W[1, ] = c((tempW %>% apply(1, mean) %>% mean),
               (tempW %>% apply(1, median) %>% mean),
               (tempW %>% apply(1, mean) %>% quantile(0.95)))
Ana_W[2, ] = c((tempWP %>% apply(1, mean) %>% mean),
               (tempWP %>% apply(1, median) %>% mean),
               (tempWP %>% apply(1, mean) %>% quantile(0.95)))
## analyze the marginal imbalance
tempM <- DH[(1 + prod(level_num) + 1) : (1 + prod(level_num) + sum(level_num)), ] %>% abs
tempMP <- DP[(1 + prod(level_num) + 1) : (1 + prod(level_num) + sum(level_num)), ] %>% abs
Ana_M <- matrix(NA, nrow = 2, ncol = 3)
rownames(Ana_M) <- c("NEW", "PS")
colnames(Ana_M) <- c("mean", "median", "95%quantile")
Ana_M[1, ] = c((tempM %>% apply(1, mean) %>% mean),
               (tempM %>% apply(1, median) %>% mean),
               (tempM %>%
apply(1, mean) %>% quantile(0.95)))
Ana_M[2, ] = c((tempMP %>% apply(1, mean) %>% mean),
               (tempMP %>% apply(1, median) %>% mean),
               (tempMP %>% apply(1, mean) %>% quantile(0.95)))
AnaHP <- list(Ana_O, Ana_M, Ana_W)
names(AnaHP) <- c("Overall", "Marginal", "Within-stratum")
AnaHP

HuHuCAR.sim Hu and Hu’s General Covariate-Adaptive Randomization with Covariate Data Generating Mechanism

Description
Allocates patients to one of two treatments using the general covariate-adaptive randomization proposed by <NAME>, <NAME> (2012) <doi:10.1214/12-AOS983>, by simulating covariate profiles based on the assumption of independence between covariates and levels within each covariate.

Usage
HuHuCAR.sim(n = 1000, cov_num = 2, level_num = c(2, 2), pr = rep(0.5, 4), omega = NULL, p = 0.85)

Arguments
n the number of patients. The default is 1000.
cov_num the number of covariates. The default is 2.
level_num a vector of level numbers for each covariate. Hence the length of level_num should be equal to the number of covariates. The default is c(2, 2).
pr a vector of probabilities. Under the assumption of independence between covariates, pr is a vector containing probabilities for each level of each covariate. The length of pr should correspond to the number of all levels, and the sum of the probabilities for each margin should be 1. The default is rep(0.5, 4), which corresponds to cov_num = 2 and level_num = c(2, 2).
omega a vector of weights at the overall, within-stratum, and within-covariate-margin levels. It is required that at least one element is larger than 0. If omega = NULL (default), the overall, within-stratum, and within-covariate-margin imbalances are weighted with proportions 0.2, 0.3, and 0.5/cov_num for each covariate-margin, respectively, where cov_num is the number of covariates of interest.
p the biased coin probability. p should be larger than 1/2 and less than 1. The default is 0.85.

Details
See HuHuCAR.

Value
See HuHuCAR.

References
<NAME>, <NAME>. Asymptotic properties of covariate-adaptive randomization[J]. The Annals of Statistics, 2012, 40(3): 1794-1815.
<NAME>, <NAME>, <NAME>, <NAME>. carat: Covariate-Adaptive Randomization for Clinical Trials[J]. Journal of Statistical Software, 2023, 107(2): 1-47.

See Also
See HuHuCAR for allocating patients with complete covariate data; See HuHuCAR.ui for the command-line user interface.

HuHuCAR.ui Command-line User Interface Using Hu and Hu’s General Covariate-adaptive Randomization

Description
A call to the user-interface function used to allocate patients to one of two treatments using Hu and Hu’s general covariate-adaptive randomization method as proposed by <NAME>, <NAME> (2012) <doi:10.1214/12-AOS983>.

Usage
HuHuCAR.ui(path, folder = "HuHuCAR")

Arguments
path the path in which a folder used to store variables will be created.
folder name of the folder. If it is the default, a folder named "HuHuCAR" will be created.

Details
See HuHuCAR.

Value
It returns an object of class "carseq". The function print is used to obtain results. The generic accessor functions assignment, covariate, cov_num, cov_profile and others extract various useful features of the value returned by HuHuCAR.ui.

Note
This function provides a command-line user interface, and users should follow the prompts to enter data, including covariates as well as levels for each covariate, the weights omega, the biased probability p, and the covariate profile of the new patient.

References
<NAME>, <NAME>. Asymptotic properties of covariate-adaptive randomization[J].
The Annals of Statistics, 2012, 40(3): 1794-1815. <NAME>, <NAME>, <NAME>, <NAME>. carat: Covariate-Adaptive Randomization for Clinical Trials[J]. Journal of Statistical Software, 2023, 107(2): 1-47. See Also See HuHuCAR for allocating patients with complete covariate data; See HuHuCAR.sim for allocating patients with covariate data generating mechanism. pats Data of Covariate Profile of Patients Description Gives the simulated covariate profile of patients for clincal trials. Usage data(pats) Arguments pats a data frame. Each row contains an individual’s covariate profile and each col- umn corresponds to a covariate. It contains the following columns: gender Options are male and female. employment status Options are "unemployment" (unemp), "part time" (part.), and "full time" (full.). income Options are >= 1w, <= 0.5w, and 0.5~1w. marriage status Options are unmarried, married, and divorced PocSimMIN Pocock and Simon’s Method in the Two-Arms Case Description Allocates patients to one of two treatments using Pocock and Simon’s method proposed by Pocock S J, <NAME> (1975) <doi:10.2307/2529712>. Usage PocSimMIN(data, weight = NULL, p = 0.85) Arguments data a data frame. A row of the dataframe corresponds to the covariate profile of a patient. weight a vector of weights for within-covariate-margin imbalances. It is required that at least one element is larger than 0. If weight = NULL (default), the within- covariate-margin imbalances are weighted with an equal proportion, 1/cov_num, for each covariate-margin. p the biased coin probability. p should be larger than 1/2 and less than 1. The default is 0.85. Details Consider I covariates and mi levels for the ith covariate, i = 1, . . . , I. Tj is the assignment of the jth patient and Zj = (k1 , . . . , kI ) indicates the covariate profile of this patient, j = 1, . . . , n. For convenience, (k1 , . . . , kI ) and (i; ki ) denote the stratum and margin, respectively. Dj (.) is the dif- ference between the numbers of patients assigned to treatment 1 and treatment 2 at the correspond- ing levels after j patients have been assigned. The Pocock and Simon’s minimization procedure is as follows: (1) The first patient is assigned to treatment 1 with probability 1/2; (2) Suppose that j − 1 patients have been assigned (1 < j ≤ n) and the jth patient falls within (k1∗ , . . . , kI∗ ); (3) If the jth patient were assigned to treatment 1, then the potential within-covariate-margin dif- ferences between the two treatments would be (1) Dj (i; ki∗ ) = Dj−1 (i, ki∗ ) + 1 for margin (i; ki∗ ). Similarly, the potential differences would be obtained in the same way if the jth patient were assigned to treatment 2; (4) An imbalance measure is defined by I (l) (l) X Imbj = ωm,i [Dj (i; ki∗ )]2 , l = 1, 2; (5) Conditional on the assignments of the first (j − 1) patients as well as the covariate profiles of the first j patients, assign the jth patient to treatment 1 with the probability P (Tj = 1|Zj , T1 , . . . , Tj−1 ) = q (1) (2) for Imbj > Imbj , P (Tj = 1|Zj , T1 , . . . , Tj−1 ) = p (1) (2) for Imbj < Imbj , and P (Tj = 1|Zj , T1 , . . . , Tj−1 ) = 0.5 (1) (2) for Imbj = Imbj . Details of the procedure can be found in <NAME>, <NAME> (1975). Value It returns an object of class "carandom". An object of class "carandom" is a list containing the following components: datanumeric a bool indicating whether the data is a numeric data frame. covariates a character string giving the name(s) of the included covariates. strt_num the number of strata. 
cov_num the number of covariates. level_num a vector of level numbers for each covariate. n the number of patients. Cov_Assig a (cov_num + 1) * n matrix containing covariate profiles for all patients and the corresponding assignments. The ith column represents the ith patient. The first cov_num rows include patients’ covariate profiles, and the last row contains the assignments. assignments the randomization sequence. All strata a matrix containing all strata involved. Diff a matrix with only one column. There are final differences at the overall, within- stratum, and within-covariate-margin levels. method a character string describing the randomization procedure to be used. Data Type a character string giving the data type, Real or Simulated. weight a vector giving the weights imposed on each covariate. framework the framework of the used randomization procedure: stratified randomization, or model-based method. data the data frame. References <NAME>, <NAME>, <NAME>, <NAME>: Covariate-Adaptive Randomization for Clinical Trials[J]. Journal of Statistical Software, 2023, 107(2): 1-47. <NAME>, <NAME>. Sequential treatment assignment with balancing for prognostic factors in the controlled clinical trial[J]. Biometrics, 1975: 103-115. See Also See PocSimMIN.sim for allocating patients with covariate data generating mechanism. See PocSimMIN.ui for the command-line user interface. Examples # a simple use ## Real Data ## creat a dataframe df <- data.frame("gender" = sample(c("female", "male"), 1000, TRUE, c(1 / 3, 2 / 3)), "age" = sample(c("0-30", "30-50", ">50"), 1000, TRUE), "jobs" = sample(c("stu.", "teac.", "others"), 1000, TRUE), stringsAsFactors = TRUE) weight <- c(1, 2, 1) Res <- PocSimMIN(data = df, weight) ## view the output Res ## view all patients' profile and assignments Res$Cov_Assig ## Simulated Data cov_num = 3 level_num = c(2, 3, 3) pr = c(0.4, 0.6, 0.3, 0.3, 0.4, 0.4, 0.3, 0.3) Res.sim <- PocSimMIN.sim(n = 1000, cov_num, level_num, pr) ## view the output Res.sim ## view the detials of difference Res.sim$Diff N <- 5 n <- 1000 cov_num <- 3 level_num <- c(2, 3, 5) # Set pr to follow two tips: # (1) length of pr should be sum(level_num); # (2)sum of probabilities for each margin should be 1. 
pr <- c(0.4, 0.6, 0.3, 0.4, 0.3, rep(0.2, times = 5)) omega <- c(0.2, 0.2, rep(0.6 / cov_num, times = cov_num)) weight <- c(2, rep(1, times = cov_num - 1)) ## generate a container to contain Diff DH <- matrix(NA, ncol = N, nrow = 1 + prod(level_num) + sum(level_num)) DP <- matrix(NA, ncol = N, nrow = 1 + prod(level_num) + sum(level_num)) for(i in 1 : N){ result <- HuHuCAR.sim(n, cov_num, level_num, pr, omega) resultP <- PocSimMIN.sim(n, cov_num, level_num, pr, weight) DH[ , i] <- result$Diff; DP[ , i] <- resultP$Diff } ## do some analysis require(dplyr) ## analyze the overall imbalance Ana_O <- matrix(NA, nrow = 2, ncol = 3) rownames(Ana_O) <- c("NEW", "PS") colnames(Ana_O) <- c("mean", "median", "95%quantile") temp <- DH[1, ] %>% abs tempP <- DP[1, ] %>% abs Ana_O[1, ] <- c((temp %>% mean), (temp %>% median), (temp %>% quantile(0.95))) Ana_O[2, ] <- c((tempP %>% mean), (tempP %>% median), (tempP %>% quantile(0.95))) ## analyze the within-stratum imbalances tempW <- DH[2 : (1 + prod(level_num)), ] %>% abs tempWP <- DP[2 : 1 + prod(level_num), ] %>% abs Ana_W <- matrix(NA, nrow = 2, ncol = 3) rownames(Ana_W) <- c("NEW", "PS") colnames(Ana_W) <- c("mean", "median", "95%quantile") Ana_W[1, ] = c((tempW %>% apply(1, mean) %>% mean), (tempW %>% apply(1, median) %>% mean), (tempW %>% apply(1, mean) %>% quantile(0.95))) Ana_W[2, ] = c((tempWP %>% apply(1, mean) %>% mean), (tempWP %>% apply(1, median) %>% mean), (tempWP %>% apply(1, mean) %>% quantile(0.95))) ## analyze the marginal imbalance tempM <- DH[(1 + prod(level_num) + 1) : (1 + prod(level_num) + sum(level_num)), ] %>% abs tempMP <- DP[(1 + prod(level_num) + 1) : (1 + prod(level_num) + sum(level_num)), ] %>% abs Ana_M <- matrix(NA, nrow = 2, ncol = 3) rownames(Ana_M) <- c("NEW", "PS") colnames(Ana_M) <- c("mean", "median", "95%quantile") Ana_M[1, ] = c((tempM %>% apply(1, mean) %>% mean), (tempM %>% apply(1, median) %>% mean), (tempM %>% apply(1, mean) %>% quantile(0.95))) Ana_M[2, ] = c((tempMP %>% apply(1, mean) %>% mean), (tempMP %>% apply(1, median) %>% mean), (tempMP %>% apply(1, mean) %>% quantile(0.95))) AnaHP <- list(Ana_O, Ana_M, Ana_W) names(AnaHP) <- c("Overall", "Marginal", "Within-stratum") AnaHP PocSimMIN.sim Pocock and Simon’s Method in the Two-Arms Case with Covariate Data Generating Mechanism Description Allocates patients to one of two treatments using Pocock and Simon’s method proposed by Pocock S J, <NAME> (1975) <doi:10.2307/2529712>, by simulating covariate profiles under the assumption of independence between covariates and levels within each covariate. Usage PocSimMIN.sim(n = 1000, cov_num = 2, level_num = c(2, 2), pr = rep(0.5, 4), weight = NULL, p = 0.85) Arguments n the number of patients. The default is 1000. cov_num the number of covariates. The default is 2. level_num a vector of level numbers for each covariate. Hence the length of level_num should be equal to the number of covariates. The default is c(2, 2). pr a vector of probabilities. Under the assumption of independence between co- variates, pr is a vector containing probabilities for each level of each covariate. The length of pr should correspond to the number of all levels, and the sum of the probabilities for each margin should be 1. The default is rep(0.5, 4), which corresponds to cov_num = 2, and level_num = c(2, 2). weight a vector of weights for within-covariate-margin imbalances. It is required that at least one element is larger than 0. 
If weight = NULL (default), the within- covariate-margin imbalances are weighted with an equal proportion, 1/cov_num, for each covariate-margin. p the biased coin probability. p should be larger than 1/2 and less than 1. The default is 0.85. Details See PocSimMIN. Value See PocSimMIN. References <NAME>, <NAME>, <NAME>, <NAME>. carat: Covariate-Adaptive Randomization for Clinical Trials[J]. Journal of Statistical Software, 2023, 107(2): 1-47. <NAME>, <NAME>. Sequential treatment assignment with balancing for prognostic factors in the controlled clinical trial[J]. Biometrics, 1975: 103-115. See Also See PocSimMIN for allocating patients with complete covariate data; See PocSimMIN.ui for the command-line user interface. PocSimMIN.ui Command-line User Interface Using Pocock and Simon’s Procedure with Two-Arms Case Description A call to the user-iterface function used to allocate patients to one of two treatments using Pocock and Simon’s method proposed by <NAME>, <NAME> (1975) <doi:10.2307/2529712>. Usage PocSimMIN.ui(path, folder = "PocSimMIN") Arguments path the path in which a folder used to storage variables will be created. folder name of the folder. If default, a folder named "PocSimMIN" will be created. Details See PocSimMIN. Value It returns an object of class "carseq". The function print is used to obtain results. The generic accessor functions assignment, covariate, cov_num, cov_profile and others extract various useful features of the value returned by PocSimMIN.ui. Note This function provides a command-line interface and users should follow the prompts to enter data including covariates as well as levels for each covariate, weight, biased probability p and the co- variate profile of the new patient. References <NAME>, <NAME>, <NAME>, <NAME>. carat: Covariate-Adaptive Randomization for Clinical Trials[J]. Journal of Statistical Software, 2023, 107(2): 1-47. <NAME>, <NAME>. Sequential treatment assignment with balancing for prognostic factors in the controlled clinical trial[J]. Biometrics, 1975: 103-115. See Also See PocSimMIN for allocating a given completely collected data; See PocSimMIN.sim for allocating patients with covariate data generating mechanism. rand.test Randomization Test Description Performs randomization test on treatment effects. Usage rand.test(data, Reps = 200, method = c("HuHuCAR", "PocSimMIN", "StrBCD", "StrPBR", "DoptBCD", "AdjBCD"), conf = 0.95, binwidth = 30, ...) Arguments data a data frame. It consists of patients’ profiles, treatment assignments and outputs. See getData. Reps an integer. It is the number of randomized replications used in the randomization test. The default is 200. method the randomization procedure to be used for testing. This package provides tests for "HuHuCAR", "PocSimMIN", "StrBCD", "StrPBR", "AdjBCD", and "DoptBCD". conf confidence level of the interval. The default is 0.95. binwidth the number of bins for each bar in histogram. The default is 30. ... arguments to be passed to method. These arguments depend on the randomiza- tion method used and the following arguments are accepted: omega a vector of weights at the overall, within-stratum, and within-covariate- margin levels. It is required that at least one element is larger than 0. Note that omega is only needed when HuHuCAR is to be used. weight a vector of weights for within-covariate-margin imbalances. It is re- quired that at least one element is larger than 0. Note that weight is only needed when PocSimMIN is to be used. p the biased coin probability. 
p should be larger than 1/2 and less than 1. Note that p is only needed when "HuHuCAR", "PocSimMIN" and "StrBCD" are to be used. a a design parameter governing the degree of randomness. Note that a is only needed when "AdjBCD" is to be used. bsize the block size for stratified randomization. It is required to be a multiple of 2. Note that bsize is only needed when "StrPBR" is to be used. Details The randomization test is described as follows: 1) For the observed responses Y1 , . . . , Yn and the treatment assignments T1 , T2 , . . . , Tn , compute the observed test statistic Pn Pn − i=1 Yi ∗ (Ti − 2) i=1 Yi ∗ (Ti − 1) Sobs = − where n1 is the number of patients assigned to treatment 1 and n0 is the number of patients assigned to treatment 2; 2) Perform the covariate-adaptive randomization procedure to obtain the new treatment assignments and calculate the corresponding test statistic Si . And repeat this process L times; 3) Calculate the two-sided Monte Carlo p-value estimator PL p= L Value It returns an object of class "htest". An object of class "htest" is a list containing the following components: p.value p-value of the test, the null hypothesis is rejected if the p-value is less than sl. estimate the estimated difference in treatment effects between treatment 1 and treatment 2. conf.int a confidence interval under the chosen level conf for the difference in treatment effect between treatment 1 and treatment 2. method a character string indicating what type of test was performed. data.name a character string giving the name(s) of the data. statistic the value of the t-statistic. As the randomization test is a nonparametric method, we cannot calculate the t-statistic, so it is hidden in this result. References <NAME>, <NAME>, <NAME>, <NAME>: Covariate-Adaptive Randomization for Clinical Trials[J]. Journal of Statistical Software, 2023, 107(2): 1-47. <NAME>, <NAME>. Randomization in clinical trials: theory and practice[M]. <NAME> & Sons, 2015. Examples ##generate data set.seed(100) n = 1000 cov_num = 5 level_num = c(2,2,2,2,2) pr = rep(0.5,10) beta = c(0.1,0.4,0.3,0.2,0.5,0.5,0.4,0.3,0.2,0.1) mu1 = 0 mu2 = 0.01 sigma = 1 type = "linear" p = 0.85 dataS = getData(n, cov_num, level_num, pr, type, beta, mu1, mu2, sigma, "StrBCD", p) #run the randomization test library("ggplot2") Strt = rand.test(data = dataS, Reps = 200,method = "StrBCD", conf = 0.95, binwidth = 30, p = 0.85) Strt StrBCD Shao’s Method in the Two-Arms Case Description Allocates patients to one of the two treatments using Shao’s method proposed by <NAME>, <NAME>, <NAME> (2010) <doi:10.1093/biomet/asq014>. Usage StrBCD(data, p = 0.85) Arguments data a data frame. A row of the dataframe corresponds to the covariate profile of a patient. p the biased coin probability. p should be larger than 1/2 and less than 1. The default is 0.85. Details Consider I covariates and mi levels for the ith covariate, i = 1, . . . , I. Tj is the assignment of the jth patient and Zj = (k1 , . . . , kI ) indicates the covariate profile of this patient, j = 1, . . . , n. For convenience, (k1 , . . . , kI ) and (i; ki ) denote the stratum and margin, respectively. Dj (.) is the difference between the numbers of patients assigned to treatment 1 and treatment 2 at the corre- sponding levels after j patients have been assigned. The stratified biased coin design is as follows: (1) The first patient is assigned to treatment 1 with probability 1/2; (2) Suppose j − 1 patients have been assigned (1 < j ≤ n) and the jth patient falls within (k1∗ , . . . 
, kI∗ ); (3) If the jth patient were assigned to treatment 1, then the potential within-stratum difference between the two treatments would be (1) Dj (k1∗ , . . . , kI∗ ) = Dj (k1∗ , . . . , kI∗ ) + 1 for stratum (k1∗ , . . . , kI∗ ). Similarly, the potential difference would be obtained in the same way if the jth patient were assigned to treatment 2; (4) An imbalance measure is defined by (l) (l) Imbj = [Dj (k1∗ , . . . , kI∗ )]2 , l = 1, 2; (5) Conditional on the assignments of the first (j − 1) patients as well as the covariates’profiles of the first j patients, assign the jth patient to treatment 1 with probability P (Tj = 1|Zj , T1 , . . . , Tj−1 ) = q (1) (2) for Imbj > Imbj , P (Tj = 1|Zj , T1 , . . . , Tj−1 ) = p (1) (2) for Imbj < Imbj , and P (Tj = 1|Zj , T1 , . . . , Tj−1 ) = 0.5 (1) (2) for Imbj = Imbj . Details of the procedure can be found in <NAME>, <NAME>, <NAME> (2010). Value It returns an object of class "carandom". An object of class "carandom" is a list containing the following components: datanumeric a bool indicating whether the data is a numeric data frame. covariates a character string giving the name(s) of the included covariates. strt_num the number of strata. cov_num the number of covariates. level_num a vector of level numbers for each covariate. n the number of patients. Cov_Assig a (cov_num + 1) * n matrix containing covariate profiles for all patients and the corresponding assignments. The ith column represents the ith patient. The first cov_num rows include patients’ covariate profiles, and the last row contains the assignments. assignments the randomization sequence. All strata a matrix containing all strata involved. Diff a matrix with only one column. There are final differences at the overall, within- stratum, and within-covariate-margin levels. method a character string describing the randomization procedure to be used. Data Type a character string giving the data type, Real or Simulated. framework the framework of the used randomization procedure: stratified randomization, or model-based method. data the data frame. References <NAME>, <NAME>, <NAME>, <NAME>: Covariate-Adaptive Randomization for Clinical Trials[J]. Journal of Statistical Software, 2023, 107(2): 1-47. <NAME>, <NAME>, <NAME>. A theory for testing hypotheses under covariate-adaptive randomization[J]. Biometrika, 2010, 97(2): 347-360. See Also See StrBCD.sim for allocating patients with covariate data generating mechanism. See StrBCD.ui for command-line user interface. 
Examples # a simple use ## Real Data ## creat a dataframe df <- data.frame("gender" = sample(c("female", "male"), 1000, TRUE, c(1 / 3, 2 / 3)), "age" = sample(c("0-30", "30-50", ">50"), 1000, TRUE), "jobs" = sample(c("stu.", "teac.", "others"), 1000, TRUE), stringsAsFactors = TRUE) Res <- StrBCD(data = df) ## view the output Res ## view all patients' profile and assignments Res$Cov_Assig ## Simulated Data cov_num = 3 level_num = c(2, 3, 3) pr = c(0.4, 0.6, 0.3, 0.4, 0.3, 0.4, 0.3, 0.3) Res.sim <- StrBCD.sim(n = 1000, cov_num, level_num, pr) ## view the output Res.sim ## view the detials of difference Res.sim$Diff N <- 5 n <- 1000 cov_num <- 3 level_num <- c(2, 3, 5) # Set pr to follow two tips: # (1) length of pr should be sum(level_num); # (2)sum of probabilities for each margin should be 1 pr <- c(0.4, 0.6, 0.3, 0.4, 0.3, rep(0.2, times = 5)) omega <- c(0.2, 0.2, rep(0.6 / cov_num, times = cov_num)) ## generate a container to contain Diff DH <- matrix(NA, ncol = N, nrow = 1 + prod(level_num) + sum(level_num)) DS <- matrix(NA, ncol = N, nrow = 1 + prod(level_num) + sum(level_num)) for(i in 1 : N){ result <- HuHuCAR.sim(n, cov_num, level_num, pr, omega) resultS <- StrBCD.sim(n, cov_num, level_num, pr) DH[ , i] <- result$Diff; DS[ , i] <- resultS$Diff } ## do some analysis require(dplyr) ## analyze the overall imbalance Ana_O <- matrix(NA, nrow = 2, ncol = 3) rownames(Ana_O) <- c("NEW", "Shao") colnames(Ana_O) <- c("mean", "median", "95%quantile") temp <- DH[1, ] %>% abs tempS <- DS[1, ] %>% abs Ana_O[1, ] <- c((temp %>% mean), (temp %>% median), (temp %>% quantile(0.95))) Ana_O[2, ] <- c((tempS %>% mean), (tempS %>% median), (tempS %>% quantile(0.95))) ## analyze the within-stratum imbalances tempW <- DH[2 : (1 + prod(level_num)), ] %>% abs tempWS <- DS[2 : 1 + prod(level_num), ] %>% abs Ana_W <- matrix(NA, nrow = 2, ncol = 3) rownames(Ana_W) <- c("NEW", "Shao") colnames(Ana_W) <- c("mean", "median", "95%quantile") Ana_W[1, ] = c((tempW %>% apply(1, mean) %>% mean), (tempW %>% apply(1, median) %>% mean), (tempW %>% apply(1, mean) %>% quantile(0.95))) Ana_W[2, ] = c((tempWS %>% apply(1, mean) %>% mean), (tempWS %>% apply(1, median) %>% mean), (tempWS %>% apply(1, mean) %>% quantile(0.95))) ## analyze the marginal imbalance tempM <- DH[(1 + prod(level_num) + 1) : (1 + prod(level_num) + sum(level_num)), ] %>% abs tempMS <- DS[(1 + prod(level_num) + 1) : (1 + prod(level_num) + sum(level_num)), ] %>% abs Ana_M <- matrix(NA, nrow = 2, ncol = 3) rownames(Ana_M) <- c("NEW", "Shao") colnames(Ana_M) <- c("mean", "median", "95%quantile") Ana_M[1, ] = c((tempM %>% apply(1, mean) %>% mean), (tempM %>% apply(1, median) %>% mean), (tempM %>% apply(1, mean) %>% quantile(0.95))) Ana_M[2, ] = c((tempMS %>% apply(1, mean) %>% mean), (tempMS %>% apply(1, median) %>% mean), (tempMS %>% apply(1, mean) %>% quantile(0.95))) AnaHP <- list(Ana_O, Ana_M, Ana_W) names(AnaHP) <- c("Overall", "Marginal", "Within-stratum") AnaHP StrBCD.sim Shao’s Method in the Two-Arms Case with Covariate Data Generating Mechanism Description Allocates patients to one of two treatments using Shao’s method proposed by <NAME>, <NAME>, Zhong B (2010) <doi:10.1093/biomet/asq014>, by simulating covariate profiles under the assumption of independence between covariates and levels within each covariate. Usage StrBCD.sim(n = 1000, cov_num = 2, level_num = c(2, 2), pr = rep(0.5, 4), p = 0.85) Arguments n the number of patients. The default is 1000. cov_num the number of covariates. The default is 2. 
level_num a vector of level numbers for each covariate. Hence the length of level_num should be equal to the number of covariates. The default is c(2, 2). pr a vector of probabilities. Under the assumption of independence between co- variates, pr is a vector containing probabilities for each level of each covariate. The length of pr should correspond to the number of all levels, and the sum of the probabilities for each margin should be 1. The default is rep(0.5, 4), which corresponds to cov_num = 2, and level_num = c(2, 2). p the biased coin probability. p should be larger than 1/2 and less than 1. The default is 0.85. Details See StrBCD. Value See StrBCD. References <NAME>, <NAME>, <NAME>, <NAME>. carat: Covariate-Adaptive Randomization for Clinical Trials[J]. Journal of Statistical Software, 2023, 107(2): 1-47. <NAME>, <NAME>, <NAME>. A theory for testing hypotheses under covariate-adaptive randomization[J]. Biometrika, 2010, 97(2): 347-360. See Also See StrBCD for allocating patients with complete covariate data; See StrBCD.ui for the command- line user interface. StrBCD.ui Command-line User Interface Using Shao’s Method Description A call to the user-interface function used to allocate patients to one of two treatments using Shao’s method proposed by <NAME>, <NAME>, <NAME> (2010) <doi:10.1093/biomet/asq014>. Usage StrBCD.ui(path, folder = "StrBCD") Arguments path the path in which a folder used to storage variables will be created. folder name of the folder. If default, a folder named "StrBCD" will be created. Details See StrBCD. Value It returns an object of class "carseq". The function print is used to obtain results. The generic accessor functions assignment, covariate, cov_num, cov_profile and others extract various useful features of the value returned by StrBCD.ui. Note This function provides a command-line interface and users should follow the prompts to enter data including covariates as well as levels for each covariate, biased probability p and the covariate profile of the new patient. References <NAME>, <NAME>, <NAME>, <NAME>. carat: Covariate-Adaptive Randomization for Clinical Trials[J]. Journal of Statistical Software, 2023, 107(2): 1-47. <NAME>, <NAME>, <NAME>. A theory for testing hypotheses under covariate-adaptive randomization[J]. Biometrika, 2010, 97(2): 347-360. See Also See StrBCD for allocating patients with complete covariate data; See StrBCD.sim for allocating patients with covariate data generating mechanism. StrPBR Stratified Permuted Block Randomization Description Allocates patients to one of two treatments using stratified permuted block randomization proposed by <NAME> (1974) <doi:10.1016/0021-9681(74)90015-0>. Usage StrPBR(data, bsize = 4) Arguments data a data frame. A row of the dataframe corresponds to the covariate profile of a patient. bsize the block size for stratified randomization. It is required to be a multiple of 2. The default is 4. Details Different covariate profiles are defined to be strata, and then permuted block randomization is ap- plied to each stratum. It works efficiently when the number of strata is small. However, when the number of strata increases, the stratified permuted block randomization fails to obtain balance between two treatments. Permuted block randomization, or blocking, is used to balance treatments within a block so that there are the same number of subjects in each treatment. A block contains the same number of each treatment and blocks of different sizes are combined to make up the randomization list. 
Details of the procedure can be found in <NAME> (1974). Value It returns an object of class "carandom". An object of class "carandom" is a list containing the following components: datanumeric a bool indicating whether the data is a numeric data frame. covariates a character string giving the name(s) of the included covariates. strt_num the number of strata. cov_num the number of covariates. level_num a vector of level numbers for each covariate. n the number of patients. Cov_Assig a (cov_num + 1) * n matrix containing covariate profiles for all patients and the corresponding assignments. The ith column represents the ith patient. The first cov_num rows include patients’ covariate profiles, and the last row contains the assignments. assignments the randomization sequence. All strata a matrix containing all strata involved. Diff a matrix with only one column. There are final differences at the overall, within- stratum, and within-covariate-margin levels. method a character string describing the randomization procedure to be used. Data Type a character string giving the data type, Real or Simulated. framework the framework of the used randomization procedure: stratified randomization, or model-based method. data the data frame. bsize the block size. numbers of pats for each stratum a vector giving the numbers of patients for each stratum. References <NAME>, <NAME>, <NAME>, <NAME>: Covariate-Adaptive Randomization for Clinical Trials[J]. Journal of Statistical Software, 2023, 107(2): 1-47. <NAME>. The randomization and stratification of patients to clinical trials[J]. Journal of chronic diseases, 1974, 27(7): 365-375. See Also See StrPBR.sim for allocating patients with covariate data generating mechanism. See StrPBR.ui for the command-line user interface. Examples # a simple use ## Real Data ## creat a dataframe df <- data.frame("gender" = sample(c("female", "male"), 100, TRUE, c(1 / 3, 2 / 3)), "age" = sample(c("0-30", "30-50", ">50"), 100, TRUE), "jobs" = sample(c("stu.", "teac.", "others"), 100, TRUE), stringsAsFactors = TRUE) Res <- StrPBR(data = df, bsize = 4) ## view the output Res ## view all patients' profile and assignments Res$Cov_Assig ## Simulated data cov_num <- 3 level_num <- c(2, 3, 3) pr <- c(0.4, 0.6, 0.3, 0.4, 0.3, 0.4, 0.3, 0.3) Res.sim <- StrPBR.sim(n = 100, cov_num, level_num, pr) ## view the output Res.sim ## view the detials of difference Res.sim$Diff N <- 5 n <- 1000 cov_num <- 3 level_num <- c(2, 3, 5) # Set pr to follow two tips: #(1) length of pr should be sum(level_num); #(2)sum of probabilities for each margin should be 1. 
pr <- c(0.4, 0.6, 0.3, 0.4, 0.3, rep(0.2, times = 5)) omega <- c(0.2, 0.2, rep(0.6 / cov_num, times = cov_num)) # Set block size for stratified randomization bsize <- 4 ## generate a container to contain Diff DS <- matrix(NA, ncol = N, nrow = 1 + prod(level_num) + sum(level_num)) for(i in 1 : N){ rtS <- StrPBR.sim(n, cov_num, level_num, pr, bsize) DS[ , i] <- rtS$Diff } ## do some analysis require(dplyr) ## analyze the overall imbalance Ana_O <- matrix(NA, nrow = 1, ncol = 3) rownames(Ana_O) <- c("Str.R") colnames(Ana_O) <- c("mean", "median", "95%quantile") tempS <- DS[1, ] %>% abs Ana_O[1, ] <- c((tempS %>% mean), (tempS %>% median), (tempS %>% quantile(0.95))) ## analyze the within-stratum imbalances tempWS <- DS[2 : 1 + prod(level_num), ] %>% abs Ana_W <- matrix(NA, nrow = 1, ncol = 3) rownames(Ana_W) <- c("Str.R") colnames(Ana_W) <- c("mean", "median", "95%quantile") Ana_W[1, ] = c((tempWS %>% apply(1, mean) %>% mean), (tempWS %>% apply(1, median) %>% mean), (tempWS %>% apply(1, mean) %>% quantile(0.95))) ## analyze the marginal imbalance tempMS <- DS[(1 + prod(level_num) + 1) : (1 + prod(level_num) + sum(level_num)), ] %>% abs Ana_M <- matrix(NA, nrow = 1, ncol = 3) rownames(Ana_M) <- c("Str.R"); colnames(Ana_M) <- c("mean", "median", "95%quantile") Ana_M[1, ] = c((tempMS %>% apply(1, mean) %>% mean), (tempMS %>% apply(1, median) %>% mean), (tempMS %>% apply(1, mean) %>% quantile(0.95))) AnaHP <- list(Ana_O, Ana_M, Ana_W) names(AnaHP) <- c("Overall", "Marginal", "Within-stratum") AnaHP StrPBR.sim Stratified Permuted Block Randomization with Covariate Data Gener- ating Mechanism Description Allocates patients to one of two treatments using stratified randomization proposed by <NAME> (1974) <doi:10.1016/0021-9681(74)90015-0>, by simulating covariates-profile on assumption of independence between covariates and levels within each covariate. Usage StrPBR.sim(n = 1000, cov_num = 2, level_num = c(2, 2), pr = rep(0.5, 4), bsize = 4) Arguments n the number of patients. The default is 1000. cov_num the number of covariates. The default is 2. level_num a vector of level numbers for each covariate. Hence the length of level_num should be equal to the number of covariates. The default is c(2, 2). pr a vector of probabilities. Under the assumption of independence between co- variates, pr is a vector containing probabilities for each level of each covariate. The length of pr should correspond to the number of all levels, and the sum of the probabilities for each margin should be 1. The default is rep(0.5, 4), which corresponds to cov_num = 2, and level_num = c(2, 2). bsize the block size for the stratified randomization. It is required to be a multiple of 2. The default is 4. Details See StrPBR. Value See StrPBR. References <NAME>, <NAME>, <NAME>, <NAME>: Covariate-Adaptive Randomization for Clinical Trials[J]. Journal of Statistical Software, 2023, 107(2): 1-47. <NAME>. The randomization and stratification of patients to clinical trials[J]. Journal of chronic diseases, 1974, 27(7): 365-375. See Also See StrPBR for allocating patients with complete covariate data; See StrPBR.ui for the command- line user interface. StrPBR.ui Command-line User Interface Using Stratified Permuted Block Ran- domization with Two-Arms Case Description A call to the user-iterface function used to allocate patients to one of two treatments using stratified permuted block randomization proposed by <NAME> (1974) <doi: 10.1016/0021-9681(74)90015- 0>. 
Usage StrPBR.ui(path, folder = "StrPBR") Arguments path the path in which a folder used to storage variables will be created. folder name of the folder. If default, a folder named "StrPBR" will be created. Details See StrPBR. Value It returns an object of class "carseq". The function print is used to obtain results. The generic accessor functions assignment, covariate, cov_num, cov_profile and others extract various useful features of the value returned by StrPBR.ui. Note This function provides a command-line interface and users should follow the prompts to enter data including covariates as well as levels for each covariate, block size bsize and the covariate profile of the new patient. References <NAME>, <NAME>, <NAME>, <NAME>. carat: Covariate-Adaptive Randomization for Clinical Trials[J]. Journal of Statistical Software, 2023, 107(2): 1-47. <NAME>. The randomization and stratification of patients to clinical trials[J]. Journal of chronic diseases, 1974, 27(7): 365-375. See Also See StrPBR for allocating patients with complete covariate data; See StrPBR.sim for allocating patients with covariate data generating mechanism.
github.com/unionj-cloud/go-doudou
go
Go
README [¶](#section-readme) --- [![Vite logo](https://go-doudou.github.io/hero.png)](https://go-doudou.github.io) [![Mentioned in Awesome Go](https://awesome.re/mentioned-badge.svg)](https://github.com/avelino/awesome-go) [![GoDoc](https://godoc.org/github.com/unionj-cloud/go-doudou?status.png)](https://godoc.org/github.com/unionj-cloud/go-doudou) [![Go](https://github.com/unionj-cloud/go-doudou/actions/workflows/go.yml/badge.svg?branch=main)](https://github.com/unionj-cloud/go-doudou/actions/workflows/go.yml) [![codecov](https://codecov.io/gh/unionj-cloud/go-doudou/branch/main/graph/badge.svg?token=QRLPRAX885)](https://codecov.io/gh/unionj-cloud/go-doudou) [![Go Report Card](https://goreportcard.com/badge/github.com/unionj-cloud/go-doudou)](https://goreportcard.com/report/github.com/unionj-cloud/go-doudou) [![Release](https://img.shields.io/github/v/release/unionj-cloud/go-doudou?style=flat-square)](https://github.com/unionj-cloud/go-doudou) [![](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) [![](https://wakatime.com/badge/user/852bcf22-8a37-460a-a8e2-115833174eba/project/57c830f7-e507-4cb1-9fd1-feedd96685f6.svg)](https://wakatime.com/badge/user/852bcf22-8a37-460a-a8e2-115833174eba/project/57c830f7-e507-4cb1-9fd1-feedd96685f6) ### go-doudou > Lightweight Golang Microservice Framework * 💡 Starts from golang interface, no need to learn new IDL(interface definition language). * 🛠️ Built-in SWIM gossip protocol based service register and discovery mechanism to help you build a robust, scalable and decentralized service cluster. * 🔩 Powerful code generator cli built-in. After defining your interface methods, your only job is implementing your awesome idea. * ⚡ Born from the cloud-native era. Built-in CLI can speed up your product iteration. * 🔑 Built-in service governance support including remote configuration management, client-side load balancer, rate limiter, circuit breaker, bulkhead, timeout, retry and more. * 📦️ Supporting both monolith and microservice architectures gives you flexibility to design your system. Go-doudou(doudou pronounce /dəudəu/)is OpenAPI 3.0 (for REST) spec and Protobuf v3 (for grpc) based lightweight microservice framework. It supports monolith service application as well. Read the Docs <https://go-doudou.github.io> to Learn More. #### Benchmark ![benchmark](https://github.com/unionj-cloud/go-doudou/raw/v1.3.7/benchmark.png) Machine: `MacBook Pro (16-inch, 2019)` CPU: `2.3 GHz 8 cores Intel Core i9` Memory: `16 GB 2667 MHz DDR4` ProcessingTime: `0ms, 10ms, 100ms, 500ms` Concurrency: `1000` Duration: `30s` go-doudou Version: `v1.3.7` [Checkout the test code](https://github.com/wubin1989/go-web-framework-benchmark) #### Credits Give credits to following repositories and all their contributors: * [hashicorp/memberlist](https://github.com/hashicorp/memberlist): go-doudou is relying on it to implement service register/discovery/fault tolerance feature. * [gorilla/mux](https://github.com/gorilla/mux): go-doudou is relying on it to implement http router. 
* [go-redis/redis\_rate](https://github.com/unionj-cloud/go-doudou/blob/v1.3.7/github.com/go-redis/redis_rate): go-doudou is relying on it to implement redis based rate limit feature * [apolloconfig/agollo](https://github.com/apolloconfig/agollo): go-doudou is relying on it to implement remote configuration management support for [Apollo](https://github.com/apolloconfig/apollo) * [nacos-group/nacos-sdk-go](https://github.com/nacos-group/nacos-sdk-go): go-doudou is relying on it to implement service discovery and remote configuration management support for [Nacos](https://github.com/alibaba/nacos) #### Community Welcome to contribute to go-doudou by forking it and submitting pr or issues. If you like go-doudou, please give it a star! Welcome to contact me from * Facebook: <https://www.facebook.com/bin.wu.94617999/> * Twitter: <https://twitter.com/BINWU49205513> * Email: [<EMAIL>](mailto:<EMAIL>) * WeChat: ![wechat-group](https://github.com/unionj-cloud/go-doudou/raw/v1.3.7/qrcode.png) * WeChat Group: ![wechat-group](https://github.com/unionj-cloud/go-doudou/raw/v1.3.7/go-doudou-wechat-group.png) * QQ group: ![qq-group](https://github.com/unionj-cloud/go-doudou/raw/v1.3.7/go-doudou-qq-group.png) #### 🔋 JetBrains Open Source License Go-doudou has been being developed with GoLand under the **free JetBrains Open Source license(s)** granted by JetBrains s.r.o., hence I would like to express my gratitude here. [![JetBrains Logo (Main) logo.](https://resources.jetbrains.com/storage/products/company/brand/logos/jb_beam.png)](https://jb.gg/OpenSourceSupport) #### License MIT Documentation [¶](#section-documentation) --- ### Overview [¶](#pkg-overview) Copyright © 2021 wubin1989 <<EMAIL>Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
sars
cran
R
Package ‘sars’ December 14, 2022 Type Package Title Fit and Compare Species-Area Relationship Models Using Multimodel Inference Version 1.3.6 Description Implements the basic elements of the multi-model inference paradigm for up to twenty species-area relationship models (SAR), using simple R list-objects and functions, as in Triantis et al. 2012 <DOI:10.1111/j.1365-2699.2011.02652.x>. The package is scalable and users can easily create their own model and data objects. Additional SAR related functions are provided. License GPL-3 | file LICENSE URL https://github.com/txm676/sars, https://txm676.github.io/sars/ BugReports https://github.com/txm676/sars/issues Imports graphics, nortest, stats, utils, crayon, cli, numDeriv, doParallel, foreach, parallel, AICcmodavg Depends R(>= 3.6.0) Encoding UTF-8 LazyData true RoxygenNote 7.2.3 Suggests knitr, rmarkdown, testthat, covr VignetteBuilder knitr NeedsCompilation no Author <NAME> [aut, cre] (<https://orcid.org/0000-0002-7624-244X>), <NAME> [aut] (<https://orcid.org/0000-0003-4707-8932>), <NAME> [rev] (<https://orcid.org/0000-0001-6619-9874>) Maintainer <NAME> <<EMAIL>> Repository CRAN Date/Publication 2022-12-14 11:20:03 UTC R topics documented: sars-packag... 3 aegea... 4 aegean... 5 colema... 5 cole_si... 6 display_sars_model... 7 gala... 7 gd... 8 get_coe... 11 lin_po... 12 nierin... 13 plot.colema... 14 plot.mult... 15 plot.sar... 18 plot.threshol... 20 sars_model... 22 sar_asym... 23 sar_averag... 25 sar_beta... 29 sar_chapma... 31 sar_epm... 34 sar_epm... 36 sar_gompert... 38 sar_hele... 40 sar_kob... 43 sar_linea... 45 sar_log... 46 sar_logisti... 49 sar_mm... 51 sar_mono... 53 sar_mult... 55 sar_negexp... 57 sar_p... 59 sar_p... 61 sar_powe... 63 sar_power... 66 sar_pre... 68 sar_rati... 69 sar_threshol... 72 sar_weibull... 74 sar_weibull... 76 summary.sar... 79 threshold_c... 80 sars-package sars: Fit and compare species-area relationship models using multi- model inference Description This package provides functions to fit twenty models to species-area relationship (SAR) data (see Triantis et al. 2012), plot the model fits, and to construct a multimodel SAR curve using information criterion weights. A number of additional SAR functions are provided, e.g. to fit the log-log power model, the general dynamic model of island biogeography (GDM), Coleman’s Random Placement model, and piecewise ISAR models (i.e. models with thresholds in the ISAR). Details Functions are provided to fit 20 individual SAR models. Nineteen are fitted using non-linear regres- sion, whilst a single model (the linear model) is fitted using linear regression. Each model has its own function (e.g. sar_power). A set of multiple model fits can be combined into a fit collection (sar_multi). Plotting functions (plot.sars) are provided that enable individual model fits to be plotted on their own, or the fits of multiple models to be overlayed on the same plot. Model fits can be validated using a number of checks, e.g. the normality and homogeneity of the model residuals can be assessed. A multimodel SAR curve can be constructed using the sar_average function. This fits up to twenty SAR models and constructs the multimodel curve (with confidence intervals) using information criterion weights (see summary.sars to calculate a table of models ranked by information criterion weight). The plot.multi functions enables the multimodel SAR curve to be plotted with or without the fits of the individual models. 
Other SAR related functions include: (i) lin_pow, which fits the log-log power model and enables comparison of the model parameters with those calculated using the non-linear power model, (ii) gdm, which fits the general dynamic model of island biogeography (Whittaker et al. 2008) using several different functions, and (iii) coleman, which fits Coleman’s (1981) random placement model to a species-site abundance matrix. Version 1.3.0 has added functions for fitting, evaluating and plotting a range of commonly used piecewise SAR models (sar_threshold). Author(s) <NAME> and <NAME> References <NAME>. (1981). On random placement and species-area relations. Mathematical Bio- sciences, 54, 191-215. <NAME>., <NAME>., & <NAME>. (2010). mmSAR: an R-package for multimodel species–area relationship inference. Ecography, 33, 420-424. <NAME>., <NAME>., <NAME>, <NAME>., & <NAME>. (2015b) On the form of species–area relationships in habitat islands and true islands. Global Ecology & Biogeog- raphy. DOI: 10.1111/geb.12269. <NAME>., <NAME>. & <NAME>. (2012) The island species–area relationship: biol- ogy and statistics. Journal of Biogeography, 39, 215-231. <NAME>., <NAME>. & <NAME>. (2008) A general dynamic theory of oceanic island biogeography. Journal of Biogeography, 35, 977-994. See Also https://github.com/txm676/sars Examples data(galap, package = "sars") #fit the power model fit <- sar_power(galap) summary(fit) plot(fit) #Construct a multimodel averaged SAR curve, using no grid_start simply #for speed (not recommended - see documentation for sar_average()) fit_multi <- sar_average(data = galap, grid_start = "none") summary(fit_multi) plot(fit_multi) aegean A SAR dataset describing invertebrates on islands in the Aegean Sea, Greece Description A sample dataset in the correct sars format: contains the areas of a number of islands in the Aegean Sea, Greece, and the number of invertebrate species recorded on each island. Usage data(aegean) Format A data frame with 2 columns and 90 rows. Each row contains the area of an island in the Aegean (1st column) and the number of inverts on that island (2nd column). Source Sfenthourakis, S. & <NAME>. (2009). Habitat diversity, ecological requirements of species and the Small Island Effect. Diversity Distrib.,15, 131–140. Examples data(aegean) aegean2 A SAR dataset describing plants on islands in the Aegean Sea, Greece Description A sample dataset in the correct sars format: contains the areas of a number of islands in the Aegean Sea, Greece, and the number of plant species recorded on each island. Usage data(aegean2) Format A data frame with 2 columns and 173 rows. Each row contains the area of an island in the Aegean (1st column) and the number of plants on that island (2nd column). Source Matthews, T.J. et al. (In review) Unravelling the small-island effect through phylogenetic commu- nity ecology Examples data(aegean2) coleman Fit Coleman’s Random Placement Model Description Fit Coleman’s (1981) random placement model to a species-site abundance matrix: rows are species and columns are sites. Note that the data must be abundance data and not presence-absence data. According to this model, the number of species occurring on an island depends on the relative area of the island and the regional relative species abundances. The fit of the random placement model can be determined through use of a diagnostic plot (see plot.coleman) of island area (log transformed) against species richness, alongside the model’s predicted values (see Wang et al., 2010). 
Following Wang et al. (2010), the model is rejected if more than a third of the observed data points fall beyond one standard deviation from the expected curve. Usage coleman(data, area) Arguments data A dataframe or matrix in which rows are species and columns are sites. Each element/value in the matrix is the abundance of a given species in a given site. area A vector of site (island) area values. The order of the vector must match the order of the columns in data. Value A list of class "coleman" with four elements. The first element contains the fitted values of the model. The second element contains the standard deviations of the fitted values, and the third and fourth contain the relative island areas and observed richness values, respectively. plot.coleman plots the model. References Coleman, <NAME>. (1981). On random placement and species-area relations. Mathematical Bio- sciences, 54, 191-215. <NAME>., <NAME>., & <NAME>. (2015). Quantifying and interpreting nestedness in habitat islands: a synthetic analysis of multiple datasets. Diversity and Distributions, 21, 392-404. <NAME>., <NAME>., <NAME>., <NAME>., & <NAME>. (2010). Nestedness for different reasons: the distributions of birds, lizards and small mammals on islands of an inundated lake. Diversity and Distributions, 16, 862-873. Examples data(cole_sim) fit <- coleman(cole_sim[[1]], cole_sim[[2]]) plot(fit, ModTitle = "Hetfield") cole_sim A simulated species-site abundance matrix with site areas Description A dataset in the correct sars format: Usage data(cole_sim) Format A list with two elements. The first element contains a species-site abundance matrix in which the rows are species, and the columns are sites/islands. Each value in the matrix is the abundance of a species at a given site. The second element contains a vector of the areas of each site. Source Matthews et al. 2015. Examples data(cole_sim) display_sars_models Display the model information table Description Display Table 1 of Matthews et al. (2019). See sar_multi for further information. Usage display_sars_models() Value A table of model information for 21 SAR models, including the model function, number of param- eters and general model shape. This includes the 20 models in Matthews et al. (2019); however, note that the mmf model has now been deprecated, and the standard logistic model listed in Tjorve (2003) added instead. Note also, an error in the Chapman Richards model equation has now been corrected, and the shape of some of the models have been updated from sigmoid to convex/sigmoid. References Matthews et al. (2019) sars: an R package for fitting, evaluating and comparing species–area rela- tionship models. Ecography, 42, 1446-1455. <NAME>. (2003) Shapes and functions of species–area curves: a review of possible models. Journal of Biogeography, 30, 827-835. galap A SAR dataset describing the plants of the Galapagos Islands Description A sample dataset in the correct sars format: contains the areas of a number of islands in the Gala- pagos, and the number of plant species recorded on each island. Usage data(galap) Format Adata frame with 2 columns and 16 rows. Each row contains the area of an island (km2) in the Galapagos (1st column) and the number of plants on that island (2nd column).Preston (1962) also includes the island of Albemarle, but we have excluded this as it is almost six times larger than the second largest island. Source Preston FW 1962. The Canonical Distribution of Commonness and Rarity: Part I. – Ecology 43:185-215. 
Examples data(galap) gdm Fit the General Dynamic Model of Island Biogeography Description Fit the general dynamic model (GDM) of island biogeography using a variety of non-linear and linear SAR models. Functions are provided to compare the GDM fitted using different SAR models, and also, for a given SAR model, to compare the GDM with alternative nested candidate models (e.g. S ~ Area + Time). Usage gdm(data, model = "linear", mod_sel = FALSE, AST = c(1, 2, 3), start_vals = NULL) Arguments data A dataframe or matrix with at least three columns, where one column should include island area values, one island richness values and one island age values. model Name of the SAR model to be used to fit the GDM. Can be any of ’loga’, ’linear’, ’power_area’, ’power_area_time’, ’all’, or ’ATT2’. mod_sel Logical argument specifying whether, for a given SAR model, a model compar- ison of the GDM with other nested candidate models should be undertaken. AST The column locations in data for the area, richness and time values (in that order). start_vals An optional dataframe with starting parameter values for the non-linear regres- sion models (same format as in nls). Default is set to NULL. Details The GDM models island species richness as a function of island area and island age, and takes the general form: S ~ A + T + T^2, where S = richness, A =area, and T = island age. The T^2 term is included as the GDM predicts a hump-shaped relationship between island richness and island age. However, a variety of different SAR models have been used to fit the GDM and five options are available here: four using non-linear regression and one using linear regression. Non-linear models Four SAR models can be used here to fit the GDM: the logarithmic (model = "loga"), linear (model = "linear") and power (model = "power_area") SAR models. Another variant of the GDM in- cludes power functions of both area and time (model = "power_area_time"). Model fitting follows the procedure in Cardoso et al. (2015). For example, when the linear SAR model is used, the GDM can be fitted using the expression: S ~ Int + A*Area + Ti*T + Ti2*T^2, where Int, A, Ti and Ti2 are free parameters to be estimated. When the power model is used just for area, the equivalent ex- pression is: S ~ exp(Int + A*log(Area) + Ti*T + Ti2*T^2). For all four models, the GDM is fitted using non-linear regression and the nls function. It should be noted that the two power models are fitted using S ~ exp(...) to ensure the same response variable (i.e. S and not log(S)) is used in all GDM models and thus AIC etc can be used to compare them. For each model fit, the residual standard error (RSE), R2 and AIC and AICc values are reported. However, as the model fit object is returned, it is possible to calculate or extract various other measures of goodness of fit (see nls). If mod_sel = TRUE, the GDM (using a particular SAR model) is fitted and compared with three other (nested) candidate models: area and time (i.e. no time^2 term), just area, and an intercept only model. The intercept only model is fitted using lm rather than nls. If model = "all", the GDM is fitted four times (using the power_area, power_area_time, loga and linear SAR models), and the fits compared using AIC and AICc. Non-linear regression models are sensitive to the starting parameter values selected. The defaults used here have been chosen as they provide a sensible general choice, but they will not work in all circumstances. 
As such, alternative starting values can be provided using the start_vals argument - this is done in the same way as for nls. The four parameter names are: Int (intercept), A (area), Ti (Time), Ti2 (Time^2) (see the example below). This only works for the full GDM non-linear models, and not for the nested models that are fitted when mod_sel = TRUE or for the linear models (where they are not needed). If used with model = "all", the same starting parameter values will be provided to each of the four GDM models (power_area, power_area_time, logarithmic and linear). Linear ATT2 Model As an alternative to fitting the GDM using non-linear regression, the model can be fitted in various ways using linear regression. This can also be useful if you are having problems with the non- linear regression algorithms not converging. If model = "ATT2" is used, the GDM is fitted using the semi-log logarithmic SAR model using linear regression (with untransformed richness and time, and log(area)); this is the original GDM model fitted by Whittaker et al. (2008) and we have used their chosen name (ATT2) to represent it. Steinbauer et al. (2013) fitted variants of this model using linear regression by log-transforming richness and / or time. While we do not provide functionality for fitting these variants, this is easily done by simply providing the log-transformed variable values to the function rather than the untransformed values. Using model = "ATT2" is basically a wrapper for the lm function. If mod_sel == TRUE, the GDM is fitted and compared with three other (nested) candidate models: log(area) and time (i.e. no time^2 term), just log(area), and an intercept only model. Value Different objects are returned depending on whether the non-linear or linear regression models are fitted. Non-linear models An object of class ’gdm’. If model is one of "loga", "linear", "power_area" or "power_area_time" the returned object is a nls model fit object. If model == "all", the returned object is a list with four elements; each element being a nls fit object. If mod_sel == TRUE and model != "all", a list with four elements is returned; each element being a lm or nls fit object. When model == "all", a list with four elements is returned; each element being a list of the four model fits for a particular SAR model. Linear ATT2 Model If model = "ATT2" is used, the returned object is of class ’gdm’ and ’lm’ and all of the method functions associated with standard ’lm’ objects (e.g. plot and summary) can be used. If mod_sel = TRUE a list with four elements is returned; each element being a lm object. Note The intercept (Int) parameter that is returned in the power models fits (model = "power_area" | "power_area_time") is on the log scale. References <NAME>., <NAME>., & <NAME>. (2008). A general dynamic theory of oceanic island biogeography. Journal of Biogeography, 35, 977-994. <NAME>. et al. (2017). Oceanic island biogeography through the lens of the general dynamic model: assessment and prospect. Biological Reviews, 92, 830-853. <NAME>., <NAME>., & <NAME>. (2015). BAT–Biodiversity Assessment Tools, an R package for the measurement and estimation of alpha and beta taxon, phylogenetic and functional diversity. Methods in Ecology and Evolution, 6, 232-236. <NAME>., <NAME>., <NAME>., <NAME>. & <NAME>. (2013) Re-evaluating the general dynamic theory of oceanic island biogeography. Frontiers of Biogeography, 5. <NAME>., <NAME>., <NAME>. & <NAME>. (2020) Towards an extended framework for the general dynamic theory of biogeography. 
Journal of Biogeography, 47, 2554-2566. Examples #create an example dataset and fit the GDM using the logarithmic SAR model data(galap) galap$t <- c(4, 1, 13, 16, 15, 2, 6, 4, 5, 11, 3, 9, 8, 10, 12, 7) g <- gdm(galap, model = "loga", mod_sel = FALSE) #Compare the GDM (using the logarithmic model) with other nested candidate #models g2 <- gdm(galap, model = "loga", mod_sel = TRUE) #compare the GDM fitted using the linear, logarithmic and both power models g3 <- gdm(galap, model = "all", mod_sel = FALSE) #fit the GDM using the original ATT2 model of Whittaker et al. 2008 using lm, #and compare it with other nested models g4 <- gdm(galap, model = "ATT2", mod_sel = TRUE) #provide different starting parameter values when fitting the non-linear #power model GDM g5 <- gdm(galap, model = "power_area", start_vals = data.frame("Int" = 0, "A" = 1, Ti = 1, Ti2 = 0)) get_coef Calculate the intercepts and slopes of the different segments Description Calculate the intercepts and slopes of the different segments in any of the fitted breakpoint regres- sion models available in the package. Usage get_coef(fit) Arguments fit An object of class ’thresholds’, generated using the sar_threshold function. Details The coefficients in the fitted breakpoint regression models do not all represent the intercepts and slopes of the different segments; to get these it is necessary to add different coefficients together. Value A dataframe with the intercepts (ci) and slopes (zi) of all segments in each fitted model. The numbers attached to c and z relate to the segment, e.g. c1 and z1 are the intercept and slope of the first segment. For the left-horizontal models, the slope of the first segment (i.e. the horizontal segment) is not returned. NA values represent cases where a given parameter is not present in a particular model. Examples data(aegean2) a2 <- aegean2[1:168,] fitT <- sar_threshold(data = a2, mod = c("ContOne", "DiscOne", "ZslopeOne"), interval = 0.1, non_th_models = TRUE, logAxes = "area", logT = log10) #get the slopes and intercepts for these three models coefs <- get_coef(fitT) coefs lin_pow Fit the log-log version of the power model Description Fit the log-log version of the power model to SAR data and return parameter values, summary statistics and the fitted values. Usage lin_pow(data, con = 1, logT = log, compare = FALSE, normaTest = "none", homoTest = "none", homoCor = "spearman") Arguments data A dataset in the form of a dataframe with two columns: the first with island/site areas, and the second with the species richness of each island/site. con The constant to add to the species richness values in cases where one of the islands has zero species. logT The log-transformation to apply to the area and richness values. Can be any of log(default), log2 or log10. compare Fit the standard (non-linear) power model and return the z-value for comparison (default: compare = FALSE). normaTest The test used to test the normality of the residuals of the model. Can be any of "lillie" (Lilliefors Kolmogorov-Smirnov test), "shapiro" (Shapiro-Wilk test of normality), "kolmo" (Kolmogorov-Smirnov test), or "none" (no residuals nor- mality test is undertaken; the default). homoTest The test used to check for homogeneity of the residuals of the model. Can be any of "cor.fitted" (a correlation of the residuals with the model fitted values), "cor.area" (a correlation of the residuals with the area values), or "none" (no residuals homogeneity test is undertaken; the default). 
lin_pow    Fit the log-log version of the power model

Description

Fit the log-log version of the power model to SAR data and return parameter values, summary statistics and the fitted values.

Usage

lin_pow(data, con = 1, logT = log, compare = FALSE, normaTest = "none",
  homoTest = "none", homoCor = "spearman")

Arguments

data    A dataset in the form of a dataframe with two columns: the first with island/site areas, and the second with the species richness of each island/site.
con    The constant to add to the species richness values in cases where one of the islands has zero species.
logT    The log-transformation to apply to the area and richness values. Can be any of log (the default), log2 or log10.
compare    Fit the standard (non-linear) power model and return the z-value for comparison (default: compare = FALSE).
normaTest    The test used to test the normality of the residuals of the model. Can be any of "lillie" (Lilliefors Kolmogorov-Smirnov test), "shapiro" (Shapiro-Wilk test of normality), "kolmo" (Kolmogorov-Smirnov test), or "none" (no residuals normality test is undertaken; the default).
homoTest    The test used to check for homogeneity of the residuals of the model. Can be any of "cor.fitted" (a correlation of the residuals with the model fitted values), "cor.area" (a correlation of the residuals with the area values), or "none" (no residuals homogeneity test is undertaken; the default).
homoCor    The correlation test to be used when homoTest != "none". Can be any of "spearman" (the default), "pearson", or "kendall".

Details

A check is made for any islands with zero species. If any zero species islands are found, a constant (default: con = 1) is added to each species richness value to enable log transformation. Natural logarithms are used as default, but log2 and log10 can be used instead via the logT argument. The compare argument can be used to compare the c and z values calculated using the log-log power model with those calculated using the non-linear power model. Note that the log-log function returns log(c) (see the note after the Examples).

Value

A list of class "sars" with up to seven elements. The first element is an object of class 'summary.lm'. This is the summary of the linear model fit using the lm function and the user's data. The second element is a numeric vector of the model's fitted values, and the third contains the log-transformed observed data. The remaining elements depend on the function arguments selected and can include the results of the non-linear power model fit, the log-transformation function used (i.e. logT) and the results of any residuals normality and homogeneity tests.

The summary.sars function returns a more useful summary of the model fit results, and the plot.sars function plots the model.

Examples

data(galap)
fit <- lin_pow(galap, con = 1)
summary(fit)
plot(fit)
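Because the log-log fit reports log(c), exponentiating the reported intercept recovers c on the arithmetic scale. A short sketch (the exact position of the intercept within the returned list may vary; inspect it with str(fit)):

data(galap)
fit <- lin_pow(galap, con = 1, compare = TRUE)
summary(fit) #reports log(c) and z, plus the non-linear power model z-value
#exponentiate the reported intercept to obtain c on the arithmetic scale,
#e.g. exp(intercept); use str(fit) to locate it in the returned object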
niering    A SAR dataset describing the plants of the Kapingamarangi Atoll

Description

A sample dataset in the correct sars format: contains the areas of a number of islands in the Kapingamarangi Atoll, and the number of plant species recorded on each island.

Usage

data(niering)

Format

A data frame with 2 columns and 32 rows. Each row contains the area of an island (km2) in the Kapingamarangi Atoll (1st column) and the number of plants on that island (2nd column).

Source

Niering, W.A. (1963). Terrestrial ecology of Kapingamarangi Atoll, Caroline Islands. Ecol. Monogr., 33, 131-160.

Examples

data(niering)

plot.coleman    Plot Model Fits for a 'coleman' Object

Description

S3 method for class 'coleman'. plot.coleman creates a plot for objects of class 'coleman', using the R base plotting framework.

Usage

## S3 method for class 'coleman'
plot(x, xlab = "Relative area (log transformed)", ylab = "Species richness",
  pch = 16, cex = 1.2, pcol = "black", cex.lab = 1.3, cex.axis = 1, lwd = 2,
  lcol1 = "black", lcol2 = "darkgrey", ModTitle = NULL, TiAdj = 0,
  TiLine = 0.5, cex.main = 1.5, ...)

Arguments

x    An object of class 'coleman'.
xlab    Title for the x-axis.
ylab    Title for the y-axis.
pch    Plotting character (for points).
cex    A numerical vector giving the amount by which plotting symbols (points) should be scaled relative to the default.
pcol    Colour of the points.
cex.lab    The amount by which the axis titles should be scaled relative to the default.
cex.axis    The amount by which the axis labels should be scaled relative to the default.
lwd    Line width.
lcol1    Line colour of the fitted model curve.
lcol2    Line colour of the model standard deviation curves.
ModTitle    Plot title (default is NULL, which equates to no main title).
TiAdj    Which way the plot title (if included) is justified.
TiLine    Places the plot title (if included) this many lines outwards from the plot edge.
cex.main    The amount by which the plot title (if included) should be scaled relative to the default.
...    Further graphical parameters (see par, plot.default, title, lines) may be supplied as arguments.

Details

The resultant plot contains the observed richness values with the model fit and confidence intervals. Following Wang et al. (2010), the model is rejected if more than a third of the observed data points fall beyond one standard deviation from the expected curve.

Examples

data(cole_sim)
fit <- coleman(cole_sim[[1]], cole_sim[[2]])
plot(fit, ModTitle = "Hetfield")

plot.multi    Plot Model Fits for a 'multi' Object

Description

S3 method for class 'multi'. plot.multi creates plots for objects of class 'multi', using the R base plotting framework. Plots of all model fits, the multimodel SAR curve (with confidence intervals) and a barplot of the information criterion weights of the different models can be constructed.

Usage

## S3 method for class 'multi'
plot(x, type = "multi", allCurves = TRUE, xlab = NULL, ylab = NULL, pch = 16,
  cex = 1.2, pcol = "dodgerblue2", ModTitle = NULL, TiAdj = 0, TiLine = 0.5,
  cex.main = 1.5, cex.lab = 1.3, cex.axis = 1, yRange = NULL, lwd = 2,
  lcol = "dodgerblue2", mmSep = FALSE, lwd.Sep = 6, col.Sep = "black",
  pLeg = TRUE, modNames = NULL, cex.names = 0.88, subset_weights = NULL,
  confInt = FALSE, ...)

Arguments

x    An object of class 'multi'.
type    The type of plot to be constructed: either type = "multi" for a plot of the multimodel SAR curve, or type = "bar" for a barplot of the information criterion weights of each model.
allCurves    A logical argument for use with type = "multi" that specifies whether all the model fits should be plotted with the multimodel SAR curve (allCurves = TRUE; the default) or only the multimodel SAR curve should be plotted (allCurves = FALSE).
xlab    Title for the x-axis. Only for use with type = "multi".
ylab    Title for the y-axis.
pch    Plotting character (for points). Only for use with type = "multi".
cex    A numerical vector giving the amount by which plotting symbols (points) should be scaled relative to the default.
pcol    Colour of the points. Only for use with type = "multi".
ModTitle    Plot title (default is ModTitle = NULL, which reverts to "Multimodel SAR" for type = "multi" and to "Model weights" for type = "bar"). For no title, use ModTitle = "".
TiAdj    Which way the plot title is justified.
TiLine    Places the plot title this many lines outwards from the plot edge.
cex.main    The amount by which the plot title should be scaled relative to the default.
cex.lab    The amount by which the axis titles should be scaled relative to the default.
cex.axis    The amount by which the axis labels should be scaled relative to the default.
yRange    The range of the y-axis. Only for use with type = "multi".
lwd    Line width. Only for use with type = "multi".
lcol    Line colour. Only for use with type = "multi".
mmSep    Logical argument of whether the multimodel curve should be plotted as a separate line (default = FALSE) on top of the others, giving the user more control over line width and colour. Only for use with type = "multi" and allCurves = TRUE.
lwd.Sep    If mmSep = TRUE, the line width of the multimodel curve.
col.Sep    If mmSep = TRUE, the colour of the multimodel curve.
pLeg    Logical argument specifying whether or not the legend should be plotted (when type = "multi" and allCurves = TRUE).
modNames    A vector of model names for the barplot of weights (when type = "bar"). The default (modNames = NULL) uses abbreviated versions (see below) of the names from the sar_average function.
cex.names    The amount by which the axis labels (model names) should be scaled relative to the default. Only for use with type = "bar".
subset_weights    Only create a barplot of the model weights for models with a weight value above a given threshold (subset_weights). Only for use with type = "bar".
confInt    A logical argument specifying whether confidence intervals should be plotted around the multimodel curve. Can only be used if confidence intervals have been generated in the sar_average function.
...    Further graphical parameters (see par, plot.default, title, lines) may be supplied as arguments.

Note

In some versions of R and RStudio, when plotting all model fits on the same plot with a legend it is necessary to manually extend your plotting window (height and width; e.g. the 'Plots' window of RStudio) before plotting to ensure the legend fits in the plot. Extending the plotting window after plotting sometimes just stretches the legend.

Occasionally a model fit will converge and pass the model fitting checks (e.g. residual normality) but the resulting fit is nonsensical (e.g. a horizontal line with intercept at zero). Thus, it can be useful to plot the resultant 'multi' object to check the individual model fits. To re-run the sar_average function without a particular model, simply remove it from the obj argument.

For visual interpretation of the model weights barplot it is necessary to abbreviate the model names when plotting the weights of several models. To plot fewer bars, use the subset_weights argument to filter out models with lower weights than a threshold value. To provide a different set of names use the modNames argument. The model abbreviations used as the default are:

• Pow = Power
• PowR = PowerR
• E1 = Extended_Power_model_1
• E2 = Extended_Power_model_2
• P1 = Persistence_function_1
• P2 = Persistence_function_2
• Loga = Logarithmic
• Kob = Kobayashi
• MMF = MMF
• Mon = Monod
• NegE = Negative_exponential
• CR = Chapman_Richards
• CW3 = Cumulative_Weibull_3_par.
• AR = Asymptotic_regression
• RF = Rational_function
• Gom = Gompertz
• CW4 = Cumulative_Weibull_4_par.
• BP = Beta-P_cumulative
• Logi = Logistic(Standard)
• Hel = Heleg(Logistic)
• Lin = Linear_model

Examples

data(galap)
#plot a multimodel SAR curve with all model fits included
fit <- sar_average(data = galap, grid_start = "none")
plot(fit)
#remove the legend
plot(fit, pLeg = FALSE)
#plot just the multimodel curve
plot(fit, allCurves = FALSE, ModTitle = "", lcol = "black")
#plot all model fits and the multimodel curve on top as a thicker line
plot(fit, allCurves = TRUE, mmSep = TRUE, lwd.Sep = 6, col.Sep = "orange")
#Plot a barplot of the model weights
plot(fit, type = "bar")
#subset to plot only models with weight > 0.05
plot(fit, type = "bar", subset_weights = 0.05)

plot.sars    Plot Model Fits for a 'sars' Object

Description

S3 method for class 'sars'. plot.sars creates plots for objects of class 'sars' (Type 'fit', 'lin_pow' and 'fit_collection'), using the R base plotting framework. The exact plot(s) constructed depends on the 'Type' attribute of the 'sars' object. For example, for a 'sars' object of Type 'fit', the plot.sars function returns a plot of the model fit (line) and the observed richness values (points). For a 'sars' object of Type 'fit_collection' the plot.sars function returns either a grid with n individual plots (corresponding to the n model fits in the fit_collection), or a single plot with all n model fits included. For plotting a 'sar_average' object, see plot.multi.
Usage

## S3 method for class 'sars'
plot(x, mfplot = FALSE, xlab = NULL, ylab = NULL, pch = 16, cex = 1.2,
  pcol = "dodgerblue2", ModTitle = NULL, TiAdj = 0, TiLine = 0.5,
  cex.main = 1.5, cex.lab = 1.3, cex.axis = 1, yRange = NULL, lwd = 2,
  lcol = "dodgerblue2", di = NULL, pLeg = FALSE, ...)

Arguments

x    An object of class 'sars'.
mfplot    Logical argument specifying whether the model fits in a fit_collection should be plotted on one single plot (mfplot = TRUE) or separate plots (mfplot = FALSE; the default).
xlab    Title for the x-axis (default depends on the Type attribute).
ylab    Title for the y-axis (default depends on the Type attribute).
pch    Plotting character (for points).
cex    A numerical vector giving the amount by which plotting symbols (points) should be scaled relative to the default.
pcol    Colour of the points.
ModTitle    Plot title (default is ModTitle = NULL, which reverts to a default name depending on the type of plot). For no title, use ModTitle = "". For a sars object of type fit_collection, a vector of names can be provided (e.g. letters[1:3]).
TiAdj    Which way the plot title is justified.
TiLine    Places the plot title this many lines outwards from the plot edge.
cex.main    The amount by which the plot title should be scaled relative to the default.
cex.lab    The amount by which the axis titles should be scaled relative to the default.
cex.axis    The amount by which the axis labels should be scaled relative to the default.
yRange    The range of the y-axis.
lwd    Line width.
lcol    Line colour.
di    Dimensions to be passed to par(mfrow = ...) to specify the size of the plotting window, when plotting multiple plots from a sars object of Type fit_collection. For example, di = c(1, 3) creates a plotting window with 1 row and 3 columns. The default (NULL) creates a square plotting window of the correct size.
pLeg    Logical argument specifying whether or not the legend should be plotted for fit_collection plots (when mfplot = TRUE). When a large number of model fits are plotted the legend takes up a lot of space, and thus the default is pLeg = FALSE.
...    Further graphical parameters (see par, plot.default, title, lines) may be supplied as arguments.

Examples

data(galap)
#fit and plot a sars object of Type fit.
fit <- sar_power(galap)
plot(fit, ModTitle = "A)", lcol = "blue")
#fit and plot a sars object of Type fit_collection.
fc <- sar_multi(data = galap, obj = c("power", "loga", "epm1"),
  grid_start = "none")
plot(fc, ModTitle = letters[1:3], xlab = "Size of island")

plot.threshold    Plot Model Fits for a 'threshold' Object

Description

S3 method for class 'threshold'. plot.threshold creates plots for objects of class 'threshold', using the R base plotting framework. Plots of single or multiple threshold models can be constructed.

Usage

## S3 method for class 'threshold'
plot(x, xlab = NULL, ylab = NULL, multPlot = TRUE, pch = 16, cex = 1.2,
  pcol = "black", ModTitle = NULL, TiAdj = 0, TiLine = 0.5, cex.main = 1.5,
  cex.lab = 1.3, cex.axis = 1, yRange = NULL, lwd = 2, lcol = "red",
  di = NULL, ...)

Arguments

x    An object of class 'threshold'.
xlab    Title for the x-axis. Defaults will depend on any axes log-transformations.
ylab    Title for the y-axis. Defaults will depend on any axes log-transformations.
multPlot    Whether separate plots should be built for each model fit (default = TRUE) or all model fits should be printed on the same plot (FALSE).
pch    Plotting character (for points).
cex    A numerical vector giving the amount by which plotting symbols (points) should be scaled relative to the default.
pcol    Colour of the points.
ModTitle    Plot title (default is ModTitle = NULL, which reverts to the model names). For no title, use ModTitle = "".
TiAdj    Which way the plot title is justified.
TiLine    Places the plot title this many lines outwards from the plot edge.
cex.main    The amount by which the plot title should be scaled relative to the default.
cex.lab    The amount by which the axis titles should be scaled relative to the default.
cex.axis    The amount by which the axis labels should be scaled relative to the default.
yRange    The range of the y-axis. The default is taken as the largest value across the observed and fitted values.
lwd    Line width.
lcol    Line colour. If multPlot = TRUE, just a single colour should be given. If multPlot = FALSE, either a single colour, or a vector of colours the same length as the number of model fits in x.
di    Dimensions to be passed to par(mfrow = ...) to specify the size of the plotting window, when plotting multiple plots. For example, di = c(1, 3) creates a plotting window with 1 row and 3 columns. The default (NULL) creates a plotting window large enough to fit all plots in.
...    Further graphical parameters (see par, plot.default, title, lines) may be supplied as arguments.

Note

The raw lm model fit objects are returned with the sar_threshold function if the user wishes to construct their own plots.

Use par(mai = c()) prior to calling plot, to set the graph margins, which can be useful when plotting multiple models in a single plot to ensure the space within the plot taken up by the individual model fit plots is maximised.

Examples

data(aegean)
#fit two threshold models (in logA-S space) and the linear and
#intercept only models
fct <- sar_threshold(aegean, mod = c("ContOne", "DiscOne"),
  non_th_models = TRUE, interval = 5, parallel = FALSE, logAxes = "area")
#plot using default settings
plot(fct)
#change various plotting settings, and set the graph margins prior to
#plotting
par(mai = c(0.7, 0.7, 0.4, 0.3))
plot(fct, pcol = "blue", pch = 18, lcol = "green",
  ModTitle = c("A", "B", "C", "D"), TiAdj = 0.5, xlab = "Yorke")
#Plot multiple model fits in the same plot, with different colour for each
#model fit
plot(fct, multPlot = FALSE, lcol = c("black", "red", "green", "purple"))

sars_models    Display the 21 SAR model names

Description

Display the 21 SAR model names as a vector. See sar_multi for further information.

Usage

sars_models()

Value

A vector of model names.

Note

sar_mmf is included here for now but has been deprecated (see News).

sar_asymp    Fit the Asymptotic regression model

Description

Fit the Asymptotic regression model to SAR data.

Usage

sar_asymp(data, start = NULL, grid_start = 'partial', grid_n = NULL,
  normaTest = 'none', homoTest = 'none', homoCor = 'spearman', verb = TRUE)

Arguments

data    A dataset in the form of a dataframe with two columns: the first with island/site areas, and the second with the species richness of each island/site.
start    NULL or custom parameter start values for the optimisation algorithm.
grid_start    Should a grid search procedure be implemented to test multiple starting parameter values. Can be one of 'none', 'partial' or 'exhaustive'. The default is set to 'partial'.
grid_n    If grid_start = 'exhaustive', the number of points sampled in the starting parameter space.
normaTest    The test used to test the normality of the residuals of the model. Can be any of 'lillie' (Lilliefors test), 'shapiro' (Shapiro-Wilk test of normality), 'kolmo' (Kolmogorov-Smirnov test), or 'none' (no residuals normality test is undertaken; the default).
homoTest    The test used to check for homogeneity of the residuals of the model. Can be any of 'cor.fitted' (a correlation of the residuals with the model fitted values), 'cor.area' (a correlation of the residuals with the area values), or 'none' (no residuals homogeneity test is undertaken; the default).
homoCor    The correlation test to be used when homoTest != 'none'. Can be any of 'spearman' (the default), 'pearson', or 'kendall'.
verb    Whether or not to print certain warnings (default = TRUE).

Details

The model is fitted using non-linear regression. The model parameters are estimated by minimizing the residual sum of squares with an unconstrained Nelder-Mead optimization algorithm and the optim function (a minimal sketch of this idea follows the References). To avoid numerical problems and speed up the convergence process, the starting values used to run the optimization algorithm are carefully chosen. However, if this does not work, custom values can be provided (using the start argument), or a more comprehensive search can be undertaken using the grid_start argument. See the vignette for more information.

The fitting process also determines the observed shape of the model fit, and whether or not the observed fit is asymptotic (see Triantis et al. 2012 for further details).

Model validation can be undertaken by assessing the normality (normaTest) and homogeneity (homoTest) of the residuals, and a warning is provided in summary.sars if either test is chosen and fails.

A selection of information criteria (e.g. AIC, BIC) are returned and can be used to compare models (see also sar_average).

As grid_start has a random component, when grid_start != 'none' in your model fitting, you can get slightly different results each time you fit a model.

The parameter confidence intervals returned in sigConf are just simple confidence intervals, calculated as 2 * standard error.

Value

A list of class 'sars' with the following components:
• par The model parameters
• value Residual sum of squares
• counts The number of iterations for the convergence of the fitting algorithm
• convergence Numeric code returned from optim indicating model convergence (0 = converged)
• message Any message from the model fit algorithm
• hessian A symmetric matrix giving an estimate of the Hessian at the solution found
• verge Logical code indicating that the optim model convergence value is zero
• startValues The start values for the model parameters used in the optimisation
• data Observed data
• model A list of model information (e.g. the model name and formula)
• calculated The fitted values of the model
• residuals The model residuals
• AIC The AIC value of the model
• AICc The AICc value of the model
• BIC The BIC value of the model
• R2 The R2 value of the model
• R2a The adjusted R2 value of the model
• sigConf The model coefficients table
• normaTest The results of the residuals normality test
• homoTest The results of the residuals homogeneity test
• observed_shape The observed shape of the model fit
• asymptote A logical value indicating whether the observed fit is asymptotic
• neg_check A logical value indicating whether negative fitted values have been returned

The summary.sars function returns a more useful summary of the model fit results, and the plot.sars function plots the model fit.

References

<NAME>., <NAME>. & <NAME>. (2012) The island species-area relationship: biology and statistics. Journal of Biogeography, 39, 215-231.
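As a minimal sketch of the fitting idea described in the Details (not the package's internal code), the residual sum of squares of an asymptotic-style model can be minimised directly with optim; the model form, simulated data and starting values below are illustrative assumptions.

#simulated data: richness approaching an asymptote with area
set.seed(1)
A <- c(1, 5, 10, 50, 100, 500, 1000)
S <- 150 - 120 * 0.995^A + rnorm(7, 0, 5)
#residual sum of squares for an asymptotic regression form: d - c * z^A
rss <- function(par, a, s) sum((s - (par[1] - par[2] * par[3]^a))^2)
#unconstrained Nelder-Mead minimisation, as in the Details above
opt <- optim(c(d = 150, c = 120, z = 0.99), rss, a = A, s = S)
opt$par #parameter estimates
opt$convergence #0 indicates convergence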
Examples

data(galap)
fit <- sar_asymp(galap)
summary(fit)
plot(fit)

sar_average    Fit a multimodel averaged SAR curve

Description

Construct a multimodel averaged species-area relationship curve using information criterion weights and up to twenty SAR models.

Usage

sar_average(obj = c("power", "powerR", "epm1", "epm2", "p1", "p2", "loga",
  "koba", "monod", "negexpo", "chapman", "weibull3", "asymp", "ratio",
  "gompertz", "weibull4", "betap", "logistic", "heleg", "linear"),
  data = NULL, crit = "Info", normaTest = "none", homoTest = "none",
  homoCor = "spearman", neg_check = FALSE, alpha_normtest = 0.05,
  alpha_homotest = 0.05, grid_start = "partial", grid_n = NULL,
  confInt = FALSE, ciN = 100, verb = TRUE, display = TRUE)

Arguments

obj    Either a vector of model names or a fit_collection object created using sar_multi. If a vector of names is provided, sar_average first calls sar_multi before generating the averaged multimodel curve.
data    A dataset in the form of a dataframe with two columns: the first with island/site areas, and the second with the species richness of each island/site. If obj is a fit_collection object, data should be NULL.
crit    The criterion used to compare models and compute the model weights. The default crit = "Info" switches to AIC or AICc depending on the number of data points in the dataset. AIC (crit = "AIC") or AICc (crit = "AICc") can be chosen regardless of the sample size. For BIC, use crit = "Bayes".
normaTest    The test used to test the normality of the residuals of each model. Can be any of "lillie" (Lilliefors Kolmogorov-Smirnov test), "shapiro" (Shapiro-Wilk test of normality), "kolmo" (Kolmogorov-Smirnov test), or "none" (no residuals normality test is undertaken; the default).
homoTest    The test used to check for homogeneity of the residuals of each model. Can be any of "cor.fitted" (a correlation of the squared residuals with the model fitted values), "cor.area" (a correlation of the squared residuals with the area values), or "none" (no residuals homogeneity test is undertaken; the default).
homoCor    The correlation test to be used when homoTest != "none". Can be any of "spearman" (the default), "pearson", or "kendall".
neg_check    Whether or not a check should be undertaken to flag any models that predict negative richness values.
alpha_normtest    The alpha value used in the residual normality test (default = 0.05, i.e. any test with a P value < 0.05 is flagged as failing the test).
alpha_homotest    The alpha value used in the residual homogeneity test (default = 0.05, i.e. any test with a P value < 0.05 is flagged as failing the test).
grid_start    Should a grid search procedure be implemented to test multiple starting parameter values. Can be one of "none", "partial" or "exhaustive". The default is set to "partial".
grid_n    If grid_start = "exhaustive", the number of points sampled in the starting parameter space (see Details).
confInt    A logical argument specifying whether confidence intervals should be calculated for the multimodel curve using bootstrapping.
ciN    The number of bootstrap samples to be drawn to calculate the confidence intervals (if confInt = TRUE).
verb    Verbose: whether or not to print certain warnings (default: verb = TRUE).
display    Show the model fitting output and related messages (default: display = TRUE).

Details

The multimodel SAR curve is constructed using information criterion weights (see Burnham & Anderson, 2002; Guilhaumon et al. 2010).
If obj is a vector of n model names, the function fits the n models to the dataset provided using the sar_multi function. A dataset must have four or more datapoints to fit the multimodel curve. If any models cannot be fitted they are removed from the multimodel SAR. If obj is a fit_collection object (created using the sar_multi function), any model fits in the collection which are NA are removed. In addition, if any other model checks have been selected (i.e. residual normality and homogeneity tests, and checks for negative predicted richness values), these are undertaken and any model that fails the selected test(s) is removed from the multimodel SAR. The order of the additional checks inside the function is (if all are turned on): normality of residuals, homogeneity of residuals, and a check for negative fitted values. Once a model fails one test it is removed and thus is not available for further tests. Thus, a model may fail multiple tests but the returned warning will only provide information on a single test. We have now changed the defaults so that no checks are undertaken, so it is up to the user to select any checks if appropriate.

The resultant models are then used to construct the multimodel SAR curve. For each model in turn, the model fitted values are multiplied by the information criterion weight of that model, and the resultant values are summed across all models (Burnham & Anderson, 2002). Confidence intervals can be calculated (using confInt) around the multimodel averaged curve using the bootstrap procedure outlined in Guilhaumon et al. (2010). The procedure transforms the residuals from the individual model fits and occasionally NA / Inf values can be produced - in these cases, the model is removed from the confidence interval calculation (but not from the multimodel curve itself). There is also a constraint within the procedure to remove any transformed residuals that result in negative richness values. When several SAR models are used, when grid_start is turned on, and when the number of bootstraps (ciN) is large, generating the confidence intervals can take a (very) long time. Parallel processing will be added to future versions.

Choosing starting parameter values for non-linear regression optimisation algorithms is not always straightforward, depending on the data at hand. In the package, we use various approaches to choose default starting parameters. However, we also use a grid search process which creates a large array of different possible starting parameter values (within certain bounds) and then randomly selects a proportion of these to test. There are three options for the grid_start argument to control this process. The default (grid_start = "partial") randomly samples 500 different sets of starting parameter values for each model, adds these to the model's default starting values and tests all of these. A more comprehensive set of starting parameter estimates can be used (grid_start = "exhaustive") - this option allows the user to choose the number of starting parameter sets to be tested (using the grid_n argument) and includes a range of additional starting parameter estimates, e.g. very small values and particular values we have found to be useful for individual models. Using grid_start = "exhaustive" in combination with a large grid_n can be very time consuming; however, we would recommend it as it makes it more likely that the optimal model fit will be found, particularly for the more complex models.
This is particularly true if any of the model fits does not converge, returns a singular gradient at parameter estimates, or the plot of the model fit does not look optimal. The grid start procedure can also be turned off (grid_start = "none"), meaning just the default starting parameter estimates are used. Note that grid_start has been disabled for a small number of models (e.g. Weibull 3 par.). See the vignette for more information. Remember that, as grid_start has a random component, when grid_start != "none", you can get slightly different results each time you fit a model or run sar_average.

Even with grid_start, occasionally a model will be fitted and pass the model fitting checks (e.g. residual normality) but the resulting fit is nonsensical (e.g. a horizontal line with intercept at zero). Thus, it can be useful to plot the resultant 'multi' object to check the individual model fits. To re-run the sar_average function without a particular model, simply remove it from the obj argument.

The sar_models() function can be used to bring up a list of the 20 model names. display_sars_models() generates a table of the 20 models with model information.

Value

A list of class "multi" and class "sars" with two elements. The first element ('mmi') contains the fitted values of the multimodel SAR curve. The second element ('details') is a list with the following components:
• mod_names Names of the models that were successfully fitted and passed any model check
• fits A fit_collection object containing the successful model fits
• ic The information criterion selected
• norm_test The residual normality test selected
• homo_test The residual homogeneity test selected
• alpha_norm_test The alpha value used in the residual normality test
• alpha_homo_test The alpha value used in the residual homogeneity test
• ics The information criterion values (e.g. AIC values) of the model fits
• delta_ics The delta information criterion values
• weights_ics The information criterion weights of each model fit
• n_points Number of data points
• n_mods The number of successfully fitted models
• no_fit Names of the models which could not be fitted or did not pass model checks
• convergence Logical value indicating whether the optim model convergence code = 0, for each model

The summary.sars function returns a more useful summary of the model fit results, and the plot.multi function plots the multimodel curve.

Note

There are different types of non-convergence and these are dealt with differently in the package. If the optimisation algorithm fails to return any solution, the model fit is defined as NA and is then removed, and so does not appear in the model summary table or multimodel curve etc. However, the optimisation algorithm (e.g. Nelder-Mead) can also return non-NA model fits where the solution is potentially non-optimal (e.g. degeneracy of the Nelder-Mead simplex) - these cases are identified by any optim convergence code that is not zero. We have decided not to remove these fits (i.e. they are kept in the model summary table and multimodel curve) - as arguably a non-optimal fit is still better than no fit - but any instances can be checked using the returned details$converged vector and the model fitting then re-run without these models, if preferred. Increasing the starting parameters grid search (see above) may also help avoid this issue.

The generation of confidence intervals around the multimodel curve (using confInt = TRUE) may throw up errors that we have yet to come across.
Please report any issues to the package maintainer.

There are different formulas for calculating the various information criteria (IC) used for model comparison (e.g. AIC, BIC). For example, some formulas use the residual sum of squares (rss) and others the log-likelihood (ll). Both are valid approaches and will give the same parameter estimates, but it is important to only compare IC values that have been calculated using the same approach. For example, the 'sars' package used to use formulas based on the rss, while the nls function in the stats package uses formulas based on the ll. To increase the compatibility between nls and sars, we have changed our formulas such that our IC formulas are now the same as those used in the nls function. See the "On the calculation of information criteria" section in the package vignette for more information; a short sketch of a like-for-like comparison follows the Examples below.

The mmf model was found to be equivalent to the He & Legendre logistic, and so the former has been deprecated (as of Feb 2021). We have removed it from the default models in sar_average, although it is still available to be used for the time being (using the obj argument). The standard logistic model has been added in its place, and is now used as the default within sar_average.

References

<NAME>., & <NAME>. (2002). Model selection and multi-model inference: a practical information-theoretic approach (2nd ed.). New York: Springer.

<NAME>., <NAME>., & <NAME>. (2010). mmSAR: an R-package for multimodel species-area relationship inference. Ecography, 33, 420-424.

<NAME>., <NAME>., <NAME>., & <NAME>. (2019). sars: an R package for fitting, evaluating and comparing species-area relationship models. Ecography, 42, 1446-55.

Examples

data(galap)
#attempt to construct a multimodel SAR curve using all twenty sar models
#using no grid_start just for speed here (not recommended generally)
fit <- sar_average(data = galap, grid_start = "none")
summary(fit)
plot(fit)

# construct a multimodel SAR curve using a fit_collection object
ff <- sar_multi(galap, obj = c("power", "loga", "monod", "weibull3"))
fit2 <- sar_average(obj = ff, data = NULL)
summary(fit2)

## Not run:
# construct a multimodel SAR curve using a more exhaustive set of starting
# parameter values
fit3 <- sar_average(data = galap, grid_start = "exhaustive", grid_n = 1000)
## End(Not run)
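Following the Note above on comparing like-for-like information criteria, here is a minimal sketch. It assumes the galap columns are named a (area) and s (richness) (check names(galap) first), and the nls starting values are illustrative and may need adjusting.

#compare IC values computed on the same (log-likelihood) basis
data(galap)
fit_sars <- sar_power(galap)
fit_sars$AIC #AIC as returned by sars
#equivalent power model fitted with nls; column names and starting values
#are assumptions
fit_nls <- nls(s ~ c * a^z, data = galap, start = list(c = 5, z = 0.25))
AIC(fit_nls) #AIC from the stats package for the nls fit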
sar_betap    Fit the Beta-P cumulative model

Description

Fit the Beta-P cumulative model to SAR data.

Usage

sar_betap(data, start = NULL, grid_start = 'partial', grid_n = NULL,
  normaTest = 'none', homoTest = 'none', homoCor = 'spearman', verb = TRUE)

Arguments

data    A dataset in the form of a dataframe with two columns: the first with island/site areas, and the second with the species richness of each island/site.
start    NULL or custom parameter start values for the optimisation algorithm.
grid_start    Should a grid search procedure be implemented to test multiple starting parameter values. Can be one of 'none', 'partial' or 'exhaustive'. The default is set to 'partial'.
grid_n    If grid_start = 'exhaustive', the number of points sampled in the starting parameter space.
normaTest    The test used to test the normality of the residuals of the model. Can be any of 'lillie' (Lilliefors test), 'shapiro' (Shapiro-Wilk test of normality), 'kolmo' (Kolmogorov-Smirnov test), or 'none' (no residuals normality test is undertaken; the default).
homoTest    The test used to check for homogeneity of the residuals of the model. Can be any of 'cor.fitted' (a correlation of the residuals with the model fitted values), 'cor.area' (a correlation of the residuals with the area values), or 'none' (no residuals homogeneity test is undertaken; the default).
homoCor    The correlation test to be used when homoTest != 'none'. Can be any of 'spearman' (the default), 'pearson', or 'kendall'.
verb    Whether or not to print certain warnings (default = TRUE).

Details

The model is fitted using non-linear regression. The model parameters are estimated by minimizing the residual sum of squares with an unconstrained Nelder-Mead optimization algorithm and the optim function. To avoid numerical problems and speed up the convergence process, the starting values used to run the optimization algorithm are carefully chosen. However, if this does not work, custom values can be provided (using the start argument), or a more comprehensive search can be undertaken using the grid_start argument. See the vignette for more information.

The fitting process also determines the observed shape of the model fit, and whether or not the observed fit is asymptotic (see Triantis et al. 2012 for further details).

Model validation can be undertaken by assessing the normality (normaTest) and homogeneity (homoTest) of the residuals, and a warning is provided in summary.sars if either test is chosen and fails.

A selection of information criteria (e.g. AIC, BIC) are returned and can be used to compare models (see also sar_average).

As grid_start has a random component, when grid_start != 'none' in your model fitting, you can get slightly different results each time you fit a model.

The parameter confidence intervals returned in sigConf are just simple confidence intervals, calculated as 2 * standard error.

Value

A list of class 'sars' with the following components:
• par The model parameters
• value Residual sum of squares
• counts The number of iterations for the convergence of the fitting algorithm
• convergence Numeric code returned from optim indicating model convergence (0 = converged)
• message Any message from the model fit algorithm
• hessian A symmetric matrix giving an estimate of the Hessian at the solution found
• verge Logical code indicating that the optim model convergence value is zero
• startValues The start values for the model parameters used in the optimisation
• data Observed data
• model A list of model information (e.g. the model name and formula)
• calculated The fitted values of the model
• residuals The model residuals
• AIC The AIC value of the model
• AICc The AICc value of the model
• BIC The BIC value of the model
• R2 The R2 value of the model
• R2a The adjusted R2 value of the model
• sigConf The model coefficients table
• normaTest The results of the residuals normality test
• homoTest The results of the residuals homogeneity test
• observed_shape The observed shape of the model fit
• asymptote A logical value indicating whether the observed fit is asymptotic
• neg_check A logical value indicating whether negative fitted values have been returned

The summary.sars function returns a more useful summary of the model fit results, and the plot.sars function plots the model fit.

References

<NAME>., <NAME>. & <NAME>. (2012) The island species-area relationship: biology and statistics. Journal of Biogeography, 39, 215-231.
Examples

#Grid_start turned off for speed (not recommended)
data(galap)
fit <- sar_betap(galap, grid_start = 'none')
summary(fit)
plot(fit)

sar_chapman    Fit the Chapman Richards model

Description

Fit the Chapman Richards model to SAR data.

Usage

sar_chapman(data, start = NULL, grid_start = 'partial', grid_n = NULL,
  normaTest = 'none', homoTest = 'none', homoCor = 'spearman', verb = TRUE)

Arguments

data    A dataset in the form of a dataframe with two columns: the first with island/site areas, and the second with the species richness of each island/site.
start    NULL or custom parameter start values for the optimisation algorithm.
grid_start    Should a grid search procedure be implemented to test multiple starting parameter values. Can be one of 'none', 'partial' or 'exhaustive'. The default is set to 'partial'.
grid_n    If grid_start = 'exhaustive', the number of points sampled in the starting parameter space.
normaTest    The test used to test the normality of the residuals of the model. Can be any of 'lillie' (Lilliefors test), 'shapiro' (Shapiro-Wilk test of normality), 'kolmo' (Kolmogorov-Smirnov test), or 'none' (no residuals normality test is undertaken; the default).
homoTest    The test used to check for homogeneity of the residuals of the model. Can be any of 'cor.fitted' (a correlation of the residuals with the model fitted values), 'cor.area' (a correlation of the residuals with the area values), or 'none' (no residuals homogeneity test is undertaken; the default).
homoCor    The correlation test to be used when homoTest != 'none'. Can be any of 'spearman' (the default), 'pearson', or 'kendall'.
verb    Whether or not to print certain warnings (default = TRUE).

Details

The model is fitted using non-linear regression. The model parameters are estimated by minimizing the residual sum of squares with an unconstrained Nelder-Mead optimization algorithm and the optim function. To avoid numerical problems and speed up the convergence process, the starting values used to run the optimization algorithm are carefully chosen. However, if this does not work, custom values can be provided (using the start argument), or a more comprehensive search can be undertaken using the grid_start argument. See the vignette for more information.

The fitting process also determines the observed shape of the model fit, and whether or not the observed fit is asymptotic (see Triantis et al. 2012 for further details).

Model validation can be undertaken by assessing the normality (normaTest) and homogeneity (homoTest) of the residuals, and a warning is provided in summary.sars if either test is chosen and fails.

A selection of information criteria (e.g. AIC, BIC) are returned and can be used to compare models (see also sar_average).

As grid_start has a random component, when grid_start != 'none' in your model fitting, you can get slightly different results each time you fit a model.

The parameter confidence intervals returned in sigConf are just simple confidence intervals, calculated as 2 * standard error.
Value

A list of class 'sars' with the following components:
• par The model parameters
• value Residual sum of squares
• counts The number of iterations for the convergence of the fitting algorithm
• convergence Numeric code returned from optim indicating model convergence (0 = converged)
• message Any message from the model fit algorithm
• hessian A symmetric matrix giving an estimate of the Hessian at the solution found
• verge Logical code indicating that the optim model convergence value is zero
• startValues The start values for the model parameters used in the optimisation
• data Observed data
• model A list of model information (e.g. the model name and formula)
• calculated The fitted values of the model
• residuals The model residuals
• AIC The AIC value of the model
• AICc The AICc value of the model
• BIC The BIC value of the model
• R2 The R2 value of the model
• R2a The adjusted R2 value of the model
• sigConf The model coefficients table
• normaTest The results of the residuals normality test
• homoTest The results of the residuals homogeneity test
• observed_shape The observed shape of the model fit
• asymptote A logical value indicating whether the observed fit is asymptotic
• neg_check A logical value indicating whether negative fitted values have been returned

The summary.sars function returns a more useful summary of the model fit results, and the plot.sars function plots the model fit.

References

<NAME>., <NAME>. & <NAME>. (2012) The island species-area relationship: biology and statistics. Journal of Biogeography, 39, 215-231.

Examples

data(galap)
fit <- sar_chapman(galap)
summary(fit)
plot(fit)

sar_epm1    Fit the Extended Power model 1 model

Description

Fit the Extended Power model 1 model to SAR data.

Usage

sar_epm1(data, start = NULL, grid_start = 'partial', grid_n = NULL,
  normaTest = 'none', homoTest = 'none', homoCor = 'spearman', verb = TRUE)

Arguments

data    A dataset in the form of a dataframe with two columns: the first with island/site areas, and the second with the species richness of each island/site.
start    NULL or custom parameter start values for the optimisation algorithm.
grid_start    Should a grid search procedure be implemented to test multiple starting parameter values. Can be one of 'none', 'partial' or 'exhaustive'. The default is set to 'partial'.
grid_n    If grid_start = 'exhaustive', the number of points sampled in the starting parameter space.
normaTest    The test used to test the normality of the residuals of the model. Can be any of 'lillie' (Lilliefors test), 'shapiro' (Shapiro-Wilk test of normality), 'kolmo' (Kolmogorov-Smirnov test), or 'none' (no residuals normality test is undertaken; the default).
homoTest    The test used to check for homogeneity of the residuals of the model. Can be any of 'cor.fitted' (a correlation of the residuals with the model fitted values), 'cor.area' (a correlation of the residuals with the area values), or 'none' (no residuals homogeneity test is undertaken; the default).
homoCor    The correlation test to be used when homoTest != 'none'. Can be any of 'spearman' (the default), 'pearson', or 'kendall'.
verb    Whether or not to print certain warnings (default = TRUE).

Details

The model is fitted using non-linear regression. The model parameters are estimated by minimizing the residual sum of squares with an unconstrained Nelder-Mead optimization algorithm and the optim function. To avoid numerical problems and speed up the convergence process, the starting values used to run the optimization algorithm are carefully chosen.
However, if this does not work, custom values can be provided (using the start argument), or a more comprehensive search can be undertaken using the grid_start argument. See the vignette for more information.

The fitting process also determines the observed shape of the model fit, and whether or not the observed fit is asymptotic (see Triantis et al. 2012 for further details).

Model validation can be undertaken by assessing the normality (normaTest) and homogeneity (homoTest) of the residuals, and a warning is provided in summary.sars if either test is chosen and fails.

A selection of information criteria (e.g. AIC, BIC) are returned and can be used to compare models (see also sar_average).

As grid_start has a random component, when grid_start != 'none' in your model fitting, you can get slightly different results each time you fit a model.

The parameter confidence intervals returned in sigConf are just simple confidence intervals, calculated as 2 * standard error.

Value

A list of class 'sars' with the following components:
• par The model parameters
• value Residual sum of squares
• counts The number of iterations for the convergence of the fitting algorithm
• convergence Numeric code returned from optim indicating model convergence (0 = converged)
• message Any message from the model fit algorithm
• hessian A symmetric matrix giving an estimate of the Hessian at the solution found
• verge Logical code indicating that the optim model convergence value is zero
• startValues The start values for the model parameters used in the optimisation
• data Observed data
• model A list of model information (e.g. the model name and formula)
• calculated The fitted values of the model
• residuals The model residuals
• AIC The AIC value of the model
• AICc The AICc value of the model
• BIC The BIC value of the model
• R2 The R2 value of the model
• R2a The adjusted R2 value of the model
• sigConf The model coefficients table
• normaTest The results of the residuals normality test
• homoTest The results of the residuals homogeneity test
• observed_shape The observed shape of the model fit
• asymptote A logical value indicating whether the observed fit is asymptotic
• neg_check A logical value indicating whether negative fitted values have been returned

The summary.sars function returns a more useful summary of the model fit results, and the plot.sars function plots the model fit.

References

<NAME>., <NAME>. & <NAME>. (2012) The island species-area relationship: biology and statistics. Journal of Biogeography, 39, 215-231.

Examples

data(galap)
fit <- sar_epm1(galap)
summary(fit)
plot(fit)

sar_epm2    Fit the Extended Power model 2 model

Description

Fit the Extended Power model 2 model to SAR data.

Usage

sar_epm2(data, start = NULL, grid_start = 'partial', grid_n = NULL,
  normaTest = 'none', homoTest = 'none', homoCor = 'spearman', verb = TRUE)

Arguments

data    A dataset in the form of a dataframe with two columns: the first with island/site areas, and the second with the species richness of each island/site.
start    NULL or custom parameter start values for the optimisation algorithm.
grid_start    Should a grid search procedure be implemented to test multiple starting parameter values. Can be one of 'none', 'partial' or 'exhaustive'. The default is set to 'partial'.
grid_n    If grid_start = 'exhaustive', the number of points sampled in the starting parameter space.
normaTest    The test used to test the normality of the residuals of the model. Can be any of 'lillie' (Lilliefors test), 'shapiro' (Shapiro-Wilk test of normality), 'kolmo' (Kolmogorov-Smirnov test), or 'none' (no residuals normality test is undertaken; the default).
homoTest    The test used to check for homogeneity of the residuals of the model. Can be any of 'cor.fitted' (a correlation of the residuals with the model fitted values), 'cor.area' (a correlation of the residuals with the area values), or 'none' (no residuals homogeneity test is undertaken; the default).
homoCor    The correlation test to be used when homoTest != 'none'. Can be any of 'spearman' (the default), 'pearson', or 'kendall'.
verb    Whether or not to print certain warnings (default = TRUE).

Details

The model is fitted using non-linear regression. The model parameters are estimated by minimizing the residual sum of squares with an unconstrained Nelder-Mead optimization algorithm and the optim function. To avoid numerical problems and speed up the convergence process, the starting values used to run the optimization algorithm are carefully chosen. However, if this does not work, custom values can be provided (using the start argument), or a more comprehensive search can be undertaken using the grid_start argument. See the vignette for more information.

The fitting process also determines the observed shape of the model fit, and whether or not the observed fit is asymptotic (see Triantis et al. 2012 for further details).

Model validation can be undertaken by assessing the normality (normaTest) and homogeneity (homoTest) of the residuals, and a warning is provided in summary.sars if either test is chosen and fails.

A selection of information criteria (e.g. AIC, BIC) are returned and can be used to compare models (see also sar_average).

As grid_start has a random component, when grid_start != 'none' in your model fitting, you can get slightly different results each time you fit a model.

The parameter confidence intervals returned in sigConf are just simple confidence intervals, calculated as 2 * standard error.

Value

A list of class 'sars' with the following components:
• par The model parameters
• value Residual sum of squares
• counts The number of iterations for the convergence of the fitting algorithm
• convergence Numeric code returned from optim indicating model convergence (0 = converged)
• message Any message from the model fit algorithm
• hessian A symmetric matrix giving an estimate of the Hessian at the solution found
• verge Logical code indicating that the optim model convergence value is zero
• startValues The start values for the model parameters used in the optimisation
• data Observed data
• model A list of model information (e.g. the model name and formula)
• calculated The fitted values of the model
• residuals The model residuals
• AIC The AIC value of the model
• AICc The AICc value of the model
• BIC The BIC value of the model
• R2 The R2 value of the model
• R2a The adjusted R2 value of the model
• sigConf The model coefficients table
• normaTest The results of the residuals normality test
• homoTest The results of the residuals homogeneity test
• observed_shape The observed shape of the model fit
• asymptote A logical value indicating whether the observed fit is asymptotic
• neg_check A logical value indicating whether negative fitted values have been returned

The summary.sars function returns a more useful summary of the model fit results, and the plot.sars function plots the model fit.

References

<NAME>., <NAME>. & <NAME>. (2012) The island species-area relationship: biology and statistics. Journal of Biogeography, 39, 215-231.
Examples

data(galap)
fit <- sar_epm2(galap)
summary(fit)
plot(fit)

sar_gompertz    Fit the Gompertz model

Description

Fit the Gompertz model to SAR data.

Usage

sar_gompertz(data, start = NULL, grid_start = 'partial', grid_n = NULL,
  normaTest = 'none', homoTest = 'none', homoCor = 'spearman', verb = TRUE)

Arguments

data    A dataset in the form of a dataframe with two columns: the first with island/site areas, and the second with the species richness of each island/site.
start    NULL or custom parameter start values for the optimisation algorithm.
grid_start    Should a grid search procedure be implemented to test multiple starting parameter values. Can be one of 'none', 'partial' or 'exhaustive'. The default is set to 'partial'.
grid_n    If grid_start = 'exhaustive', the number of points sampled in the starting parameter space.
normaTest    The test used to test the normality of the residuals of the model. Can be any of 'lillie' (Lilliefors test), 'shapiro' (Shapiro-Wilk test of normality), 'kolmo' (Kolmogorov-Smirnov test), or 'none' (no residuals normality test is undertaken; the default).
homoTest    The test used to check for homogeneity of the residuals of the model. Can be any of 'cor.fitted' (a correlation of the residuals with the model fitted values), 'cor.area' (a correlation of the residuals with the area values), or 'none' (no residuals homogeneity test is undertaken; the default).
homoCor    The correlation test to be used when homoTest != 'none'. Can be any of 'spearman' (the default), 'pearson', or 'kendall'.
verb    Whether or not to print certain warnings (default = TRUE).

Details

The model is fitted using non-linear regression. The model parameters are estimated by minimizing the residual sum of squares with an unconstrained Nelder-Mead optimization algorithm and the optim function. To avoid numerical problems and speed up the convergence process, the starting values used to run the optimization algorithm are carefully chosen. However, if this does not work, custom values can be provided (using the start argument), or a more comprehensive search can be undertaken using the grid_start argument. See the vignette for more information.

The fitting process also determines the observed shape of the model fit, and whether or not the observed fit is asymptotic (see Triantis et al. 2012 for further details).

Model validation can be undertaken by assessing the normality (normaTest) and homogeneity (homoTest) of the residuals, and a warning is provided in summary.sars if either test is chosen and fails.

A selection of information criteria (e.g. AIC, BIC) are returned and can be used to compare models (see also sar_average).

As grid_start has a random component, when grid_start != 'none' in your model fitting, you can get slightly different results each time you fit a model.

The parameter confidence intervals returned in sigConf are just simple confidence intervals, calculated as 2 * standard error.
Value

A list of class 'sars' with the following components:
• par The model parameters
• value Residual sum of squares
• counts The number of iterations for the convergence of the fitting algorithm
• convergence Numeric code returned from optim indicating model convergence (0 = converged)
• message Any message from the model fit algorithm
• hessian A symmetric matrix giving an estimate of the Hessian at the solution found
• verge Logical code indicating that the optim model convergence value is zero
• startValues The start values for the model parameters used in the optimisation
• data Observed data
• model A list of model information (e.g. the model name and formula)
• calculated The fitted values of the model
• residuals The model residuals
• AIC The AIC value of the model
• AICc The AICc value of the model
• BIC The BIC value of the model
• R2 The R2 value of the model
• R2a The adjusted R2 value of the model
• sigConf The model coefficients table
• normaTest The results of the residuals normality test
• homoTest The results of the residuals homogeneity test
• observed_shape The observed shape of the model fit
• asymptote A logical value indicating whether the observed fit is asymptotic
• neg_check A logical value indicating whether negative fitted values have been returned

The summary.sars function returns a more useful summary of the model fit results, and the plot.sars function plots the model fit.

References

<NAME>., <NAME>. & <NAME>. (2012) The island species-area relationship: biology and statistics. Journal of Biogeography, 39, 215-231.

Examples

data(galap)
fit <- sar_gompertz(galap)
summary(fit)
plot(fit)

sar_heleg    Fit the Heleg(Logistic) model

Description

Fit the Heleg(Logistic) model to SAR data.

Usage

sar_heleg(data, start = NULL, grid_start = 'partial', grid_n = NULL,
  normaTest = 'none', homoTest = 'none', homoCor = 'spearman', verb = TRUE)

Arguments

data    A dataset in the form of a dataframe with two columns: the first with island/site areas, and the second with the species richness of each island/site.
start    NULL or custom parameter start values for the optimisation algorithm.
grid_start    Should a grid search procedure be implemented to test multiple starting parameter values. Can be one of 'none', 'partial' or 'exhaustive'. The default is set to 'partial'.
grid_n    If grid_start = 'exhaustive', the number of points sampled in the starting parameter space.
normaTest    The test used to test the normality of the residuals of the model. Can be any of 'lillie' (Lilliefors test), 'shapiro' (Shapiro-Wilk test of normality), 'kolmo' (Kolmogorov-Smirnov test), or 'none' (no residuals normality test is undertaken; the default).
homoTest    The test used to check for homogeneity of the residuals of the model. Can be any of 'cor.fitted' (a correlation of the residuals with the model fitted values), 'cor.area' (a correlation of the residuals with the area values), or 'none' (no residuals homogeneity test is undertaken; the default).
homoCor    The correlation test to be used when homoTest != 'none'. Can be any of 'spearman' (the default), 'pearson', or 'kendall'.
verb    Whether or not to print certain warnings (default = TRUE).

Details

The model is fitted using non-linear regression. The model parameters are estimated by minimizing the residual sum of squares with an unconstrained Nelder-Mead optimization algorithm and the optim function. To avoid numerical problems and speed up the convergence process, the starting values used to run the optimization algorithm are carefully chosen.
However, if this does not work, custom values can be provided (using the start argument), or a more comprehensive search can be undertaken using the grid_start argument. See the vignette for more information. The fitting process also determines the observed shape of the model fit, and whether or not the observed fit is asymptotic (see Triantis et al. 2012 for further details). Model validation can be undertaken by assessing the normality (normaTest) and homogeneity (homoTest) of the residuals, and a warning is provided in summary.sars if either test is chosen and fails. A selection of information criteria (e.g. AIC, BIC) are returned and can be used to compare models (see also sar_average). As grid_start has a random component, when grid_start != 'none' in your model fitting, you can get slightly different results each time you fit a model. The parameter confidence intervals returned in sigConf are just simple confidence intervals, calculated as 2 * standard error.

Value

A list of class 'sars' with the following components:
• par  The model parameters
• value  Residual sum of squares
• counts  The number of iterations for the convergence of the fitting algorithm
• convergence  Numeric code returned from optim indicating model convergence (0 = converged)
• message  Any message from the model fit algorithm
• hessian  A symmetric matrix giving an estimate of the Hessian at the solution found
• verge  Logical code indicating that the optim model convergence value is zero
• startValues  The start values for the model parameters used in the optimisation
• data  Observed data
• model  A list of model information (e.g. the model name and formula)
• calculated  The fitted values of the model
• residuals  The model residuals
• AIC  The AIC value of the model
• AICc  The AICc value of the model
• BIC  The BIC value of the model
• R2  The R2 value of the model
• R2a  The adjusted R2 value of the model
• sigConf  The model coefficients table
• normaTest  The results of the residuals normality test
• homoTest  The results of the residuals homogeneity test
• observed_shape  The observed shape of the model fit
• asymptote  A logical value indicating whether the observed fit is asymptotic
• neg_check  A logical value indicating whether negative fitted values have been returned

The summary.sars function returns a more useful summary of the model fit results, and the plot.sars function plots the model fit.

References

<NAME>., <NAME>. & <NAME>. (2012) The island species-area relationship: biology and statistics. Journal of Biogeography, 39, 215-231.

Examples

data(galap)
fit <- sar_heleg(galap)
summary(fit)
plot(fit)

sar_koba                Fit the Kobayashi model

Description

Fit the Kobayashi model to SAR data.

Usage

sar_koba(data, start = NULL, grid_start = 'partial', grid_n = NULL,
  normaTest = 'none', homoTest = 'none', homoCor = 'spearman', verb = TRUE)

Arguments

data  A dataset in the form of a dataframe with two columns: the first with island/site areas, and the second with the species richness of each island/site.
start  NULL or custom parameter start values for the optimisation algorithm.
grid_start  Should a grid search procedure be implemented to test multiple starting parameter values. Can be one of 'none', 'partial' or 'exhaustive'. The default is set to 'partial'.
grid_n  If grid_start = exhaustive, the number of points sampled in the starting parameter space.
normaTest  The test used to test the normality of the residuals of the model. Can be any of 'lillie' (Lilliefors test), 'shapiro' (Shapiro-Wilk test of normality), 'kolmo' (Kolmogorov-Smirnov test), or 'none' (no residuals normality test is undertaken; the default).
homoTest  The test used to check for homogeneity of the residuals of the model. Can be any of 'cor.fitted' (a correlation of the residuals with the model fitted values), 'cor.area' (a correlation of the residuals with the area values), or 'none' (no residuals homogeneity test is undertaken; the default).
homoCor  The correlation test to be used when homoTest != 'none'. Can be any of 'spearman' (the default), 'pearson', or 'kendall'.
verb  Whether or not to print certain warnings (default = TRUE).

Details

The model is fitted using non-linear regression. The model parameters are estimated by minimizing the residual sum of squares with an unconstrained Nelder-Mead optimization algorithm and the optim function. To avoid numerical problems and speed up the convergence process, the starting values used to run the optimization algorithm are carefully chosen. However, if this does not work, custom values can be provided (using the start argument), or a more comprehensive search can be undertaken using the grid_start argument. See the vignette for more information. The fitting process also determines the observed shape of the model fit, and whether or not the observed fit is asymptotic (see Triantis et al. 2012 for further details). Model validation can be undertaken by assessing the normality (normaTest) and homogeneity (homoTest) of the residuals, and a warning is provided in summary.sars if either test is chosen and fails. A selection of information criteria (e.g. AIC, BIC) are returned and can be used to compare models (see also sar_average). As grid_start has a random component, when grid_start != 'none' in your model fitting, you can get slightly different results each time you fit a model. The parameter confidence intervals returned in sigConf are just simple confidence intervals, calculated as 2 * standard error.

Value

A list of class 'sars' with the following components:
• par  The model parameters
• value  Residual sum of squares
• counts  The number of iterations for the convergence of the fitting algorithm
• convergence  Numeric code returned from optim indicating model convergence (0 = converged)
• message  Any message from the model fit algorithm
• hessian  A symmetric matrix giving an estimate of the Hessian at the solution found
• verge  Logical code indicating that the optim model convergence value is zero
• startValues  The start values for the model parameters used in the optimisation
• data  Observed data
• model  A list of model information (e.g. the model name and formula)
• calculated  The fitted values of the model
• residuals  The model residuals
• AIC  The AIC value of the model
• AICc  The AICc value of the model
• BIC  The BIC value of the model
• R2  The R2 value of the model
• R2a  The adjusted R2 value of the model
• sigConf  The model coefficients table
• normaTest  The results of the residuals normality test
• homoTest  The results of the residuals homogeneity test
• observed_shape  The observed shape of the model fit
• asymptote  A logical value indicating whether the observed fit is asymptotic
• neg_check  A logical value indicating whether negative fitted values have been returned

The summary.sars function returns a more useful summary of the model fit results, and the plot.sars function plots the model fit.

References

<NAME>., <NAME>. & <NAME>. (2012) The island species-area relationship: biology and statistics. Journal of Biogeography, 39, 215-231.
Examples

data(galap)
fit <- sar_koba(galap)
summary(fit)
plot(fit)

sar_linear              Fit the linear model

Description

Fit the linear model to SAR data.

Usage

sar_linear(data, normaTest = 'none', homoTest = 'none', homoCor = 'spearman',
  verb = TRUE)

Arguments

data  A dataset in the form of a dataframe with two columns: the first with island/site areas, and the second with the species richness of each island/site.
normaTest  The test used to test the normality of the residuals of the model. Can be any of 'lillie' (Lilliefors Kolmogorov-Smirnov test), 'shapiro' (Shapiro-Wilk test of normality), 'kolmo' (Kolmogorov-Smirnov test), or 'none' (no residuals normality test is undertaken; the default).
homoTest  The test used to check for homogeneity of the residuals of the model. Can be any of 'cor.fitted' (a correlation of the residuals with the model fitted values), 'cor.area' (a correlation of the residuals with the area values), or 'none' (no residuals homogeneity test is undertaken; the default).
homoCor  The correlation test to be used when homoTest != "none". Can be any of "spearman" (the default), "pearson", or "kendall".
verb  Whether or not to print certain warnings (default = TRUE).

Details

The model is fitted using linear regression and the lm function. Model validation can be undertaken by assessing the normality (normaTest) and homogeneity (homoTest) of the residuals, and a warning is provided in summary.sars if either test is chosen and fails. A selection of information criteria (e.g. AIC, BIC) are returned and can be used to compare models (see also sar_average).

Value

A list of class 'sars' with the following components:
• par  The model parameters
• value  Residual sum of squares
• verge  Logical code indicating model convergence
• data  Observed data
• model  A list of model information (e.g. the model name and formula)
• calculated  The fitted values of the model
• residuals  The model residuals
• AIC  The AIC value of the model
• AICc  The AICc value of the model
• BIC  The BIC value of the model
• R2  The R2 value of the model
• R2a  The adjusted R2 value of the model
• sigConf  The model coefficients table
• observed_shape  The observed shape of the model fit
• asymptote  A logical value indicating whether the observed fit is asymptotic
• normaTest  The results of the residuals normality test
• homoTest  The results of the residuals homogeneity test
• neg_check  A logical value indicating whether negative fitted values have been returned

The summary.sars function returns a more useful summary of the model fit results, and the plot.sars function plots the model fit.

Examples

data(galap)
fit <- sar_linear(galap)
summary(fit)
plot(fit)

sar_loga                Fit the Logarithmic model

Description

Fit the Logarithmic model to SAR data.

Usage

sar_loga(data, start = NULL, grid_start = 'partial', grid_n = NULL,
  normaTest = 'none', homoTest = 'none', homoCor = 'spearman', verb = TRUE)

Arguments

data  A dataset in the form of a dataframe with two columns: the first with island/site areas, and the second with the species richness of each island/site.
start  NULL or custom parameter start values for the optimisation algorithm.
grid_start  Should a grid search procedure be implemented to test multiple starting parameter values. Can be one of 'none', 'partial' or 'exhaustive'. The default is set to 'partial'.
grid_n  If grid_start = exhaustive, the number of points sampled in the starting parameter space.
normaTest  The test used to test the normality of the residuals of the model. Can be any of 'lillie' (Lilliefors test), 'shapiro' (Shapiro-Wilk test of normality), 'kolmo' (Kolmogorov-Smirnov test), or 'none' (no residuals normality test is undertaken; the default).
homoTest  The test used to check for homogeneity of the residuals of the model. Can be any of 'cor.fitted' (a correlation of the residuals with the model fitted values), 'cor.area' (a correlation of the residuals with the area values), or 'none' (no residuals homogeneity test is undertaken; the default).
homoCor  The correlation test to be used when homoTest != 'none'. Can be any of 'spearman' (the default), 'pearson', or 'kendall'.
verb  Whether or not to print certain warnings (default = TRUE).

Details

The model is fitted using non-linear regression. The model parameters are estimated by minimizing the residual sum of squares with an unconstrained Nelder-Mead optimization algorithm and the optim function. To avoid numerical problems and speed up the convergence process, the starting values used to run the optimization algorithm are carefully chosen. However, if this does not work, custom values can be provided (using the start argument), or a more comprehensive search can be undertaken using the grid_start argument. See the vignette for more information. The fitting process also determines the observed shape of the model fit, and whether or not the observed fit is asymptotic (see Triantis et al. 2012 for further details). Model validation can be undertaken by assessing the normality (normaTest) and homogeneity (homoTest) of the residuals, and a warning is provided in summary.sars if either test is chosen and fails. A selection of information criteria (e.g. AIC, BIC) are returned and can be used to compare models (see also sar_average). As grid_start has a random component, when grid_start != 'none' in your model fitting, you can get slightly different results each time you fit a model. The parameter confidence intervals returned in sigConf are just simple confidence intervals, calculated as 2 * standard error.

Value

A list of class 'sars' with the following components:
• par  The model parameters
• value  Residual sum of squares
• counts  The number of iterations for the convergence of the fitting algorithm
• convergence  Numeric code returned from optim indicating model convergence (0 = converged)
• message  Any message from the model fit algorithm
• hessian  A symmetric matrix giving an estimate of the Hessian at the solution found
• verge  Logical code indicating that the optim model convergence value is zero
• startValues  The start values for the model parameters used in the optimisation
• data  Observed data
• model  A list of model information (e.g. the model name and formula)
• calculated  The fitted values of the model
• residuals  The model residuals
• AIC  The AIC value of the model
• AICc  The AICc value of the model
• BIC  The BIC value of the model
• R2  The R2 value of the model
• R2a  The adjusted R2 value of the model
• sigConf  The model coefficients table
• normaTest  The results of the residuals normality test
• homoTest  The results of the residuals homogeneity test
• observed_shape  The observed shape of the model fit
• asymptote  A logical value indicating whether the observed fit is asymptotic
• neg_check  A logical value indicating whether negative fitted values have been returned

The summary.sars function returns a more useful summary of the model fit results, and the plot.sars function plots the model fit.

References

<NAME>., <NAME>. & <NAME>. (2012) The island species-area relationship: biology and statistics. Journal of Biogeography, 39, 215-231.
Examples

data(galap)
fit <- sar_loga(galap)
summary(fit)
plot(fit)

sar_logistic            Fit the Logistic(Standard) model

Description

Fit the Logistic(Standard) model to SAR data.

Usage

sar_logistic(data, start = NULL, grid_start = 'partial', grid_n = NULL,
  normaTest = 'none', homoTest = 'none', homoCor = 'spearman', verb = TRUE)

Arguments

data  A dataset in the form of a dataframe with two columns: the first with island/site areas, and the second with the species richness of each island/site.
start  NULL or custom parameter start values for the optimisation algorithm.
grid_start  Should a grid search procedure be implemented to test multiple starting parameter values. Can be one of 'none', 'partial' or 'exhaustive'. The default is set to 'partial'.
grid_n  If grid_start = exhaustive, the number of points sampled in the starting parameter space.
normaTest  The test used to test the normality of the residuals of the model. Can be any of 'lillie' (Lilliefors test), 'shapiro' (Shapiro-Wilk test of normality), 'kolmo' (Kolmogorov-Smirnov test), or 'none' (no residuals normality test is undertaken; the default).
homoTest  The test used to check for homogeneity of the residuals of the model. Can be any of 'cor.fitted' (a correlation of the residuals with the model fitted values), 'cor.area' (a correlation of the residuals with the area values), or 'none' (no residuals homogeneity test is undertaken; the default).
homoCor  The correlation test to be used when homoTest != 'none'. Can be any of 'spearman' (the default), 'pearson', or 'kendall'.
verb  Whether or not to print certain warnings (default = TRUE).

Details

The model is fitted using non-linear regression. The model parameters are estimated by minimizing the residual sum of squares with an unconstrained Nelder-Mead optimization algorithm and the optim function. To avoid numerical problems and speed up the convergence process, the starting values used to run the optimization algorithm are carefully chosen. However, if this does not work, custom values can be provided (using the start argument), or a more comprehensive search can be undertaken using the grid_start argument. See the vignette for more information. The fitting process also determines the observed shape of the model fit, and whether or not the observed fit is asymptotic (see Triantis et al. 2012 for further details). Model validation can be undertaken by assessing the normality (normaTest) and homogeneity (homoTest) of the residuals, and a warning is provided in summary.sars if either test is chosen and fails. A selection of information criteria (e.g. AIC, BIC) are returned and can be used to compare models (see also sar_average). As grid_start has a random component, when grid_start != 'none' in your model fitting, you can get slightly different results each time you fit a model. The parameter confidence intervals returned in sigConf are just simple confidence intervals, calculated as 2 * standard error (see the sketch below).
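Because these are simple intervals of 2 * standard error on either side of the estimate, they can be reproduced by hand from the coefficients table; a minimal sketch, assuming sigConf exposes 'Estimate' and 'Std. Error' columns (these column names are an assumption, so inspect colnames(fit$sigConf) first):

data(galap)
fit <- sar_logistic(galap)
cf <- fit$sigConf
est <- cf[, "Estimate"]    # assumed column name
se <- cf[, "Std. Error"]   # assumed column name
cbind(lower = est - 2 * se, upper = est + 2 * se)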
Value

A list of class 'sars' with the following components:
• par  The model parameters
• value  Residual sum of squares
• counts  The number of iterations for the convergence of the fitting algorithm
• convergence  Numeric code returned from optim indicating model convergence (0 = converged)
• message  Any message from the model fit algorithm
• hessian  A symmetric matrix giving an estimate of the Hessian at the solution found
• verge  Logical code indicating that the optim model convergence value is zero
• startValues  The start values for the model parameters used in the optimisation
• data  Observed data
• model  A list of model information (e.g. the model name and formula)
• calculated  The fitted values of the model
• residuals  The model residuals
• AIC  The AIC value of the model
• AICc  The AICc value of the model
• BIC  The BIC value of the model
• R2  The R2 value of the model
• R2a  The adjusted R2 value of the model
• sigConf  The model coefficients table
• normaTest  The results of the residuals normality test
• homoTest  The results of the residuals homogeneity test
• observed_shape  The observed shape of the model fit
• asymptote  A logical value indicating whether the observed fit is asymptotic
• neg_check  A logical value indicating whether negative fitted values have been returned

The summary.sars function returns a more useful summary of the model fit results, and the plot.sars function plots the model fit.

References

<NAME>., <NAME>. & <NAME>. (2012) The island species-area relationship: biology and statistics. Journal of Biogeography, 39, 215-231.

Examples

data(galap)
fit <- sar_logistic(galap)
summary(fit)
plot(fit)

sar_mmf                 Fit the MMF model

Description

Fit the MMF model to SAR data. This function has been deprecated.

Usage

sar_mmf(data, start = NULL, grid_start = 'partial', grid_n = NULL,
  normaTest = 'none', homoTest = 'none', homoCor = 'spearman', verb = TRUE)

Arguments

data  A dataset in the form of a dataframe with two columns: the first with island/site areas, and the second with the species richness of each island/site.
start  NULL or custom parameter start values for the optimisation algorithm.
grid_start  Should a grid search procedure be implemented to test multiple starting parameter values. Can be one of 'none', 'partial' or 'exhaustive'. The default is set to 'partial'.
grid_n  If grid_start = exhaustive, the number of points sampled in the starting parameter space.
normaTest  The test used to test the normality of the residuals of the model. Can be any of 'lillie' (Lilliefors test), 'shapiro' (Shapiro-Wilk test of normality), 'kolmo' (Kolmogorov-Smirnov test), or 'none' (no residuals normality test is undertaken; the default).
homoTest  The test used to check for homogeneity of the residuals of the model. Can be any of 'cor.fitted' (a correlation of the residuals with the model fitted values), 'cor.area' (a correlation of the residuals with the area values), or 'none' (no residuals homogeneity test is undertaken; the default).
homoCor  The correlation test to be used when homoTest != 'none'. Can be any of 'spearman' (the default), 'pearson', or 'kendall'.
verb  Whether or not to print certain warnings (default = TRUE).

Details

The model is fitted using non-linear regression. The model parameters are estimated by minimizing the residual sum of squares with an unconstrained Nelder-Mead optimization algorithm and the optim function. To avoid numerical problems and speed up the convergence process, the starting values used to run the optimization algorithm are carefully chosen.
However, if this does not work, custom values can be provided (using the start argument), or a more comprehensive search can be undertaken using the grid_start argument. See the vignette for more information. The fitting process also determines the observed shape of the model fit, and whether or not the observed fit is asymptotic (see Triantis et al. 2012 for further details). Model validation can be undertaken by assessing the normality (normaTest) and homogeneity (homoTest) of the residuals, and a warning is provided in summary.sars if either test is chosen and fails. A selection of information criteria (e.g. AIC, BIC) are returned and can be used to compare models (see also sar_average). As grid_start has a random component, when grid_start != 'none' in your model fitting, you can get slightly different results each time you fit a model. The parameter confidence intervals returned in sigConf are just simple confidence intervals, calculated as 2 * standard error.

Value

A list of class 'sars' with the following components:
• par  The model parameters
• value  Residual sum of squares
• counts  The number of iterations for the convergence of the fitting algorithm
• convergence  Numeric code returned from optim indicating model convergence (0 = converged)
• message  Any message from the model fit algorithm
• hessian  A symmetric matrix giving an estimate of the Hessian at the solution found
• verge  Logical code indicating that the optim model convergence value is zero
• startValues  The start values for the model parameters used in the optimisation
• data  Observed data
• model  A list of model information (e.g. the model name and formula)
• calculated  The fitted values of the model
• residuals  The model residuals
• AIC  The AIC value of the model
• AICc  The AICc value of the model
• BIC  The BIC value of the model
• R2  The R2 value of the model
• R2a  The adjusted R2 value of the model
• sigConf  The model coefficients table
• normaTest  The results of the residuals normality test
• homoTest  The results of the residuals homogeneity test
• observed_shape  The observed shape of the model fit
• asymptote  A logical value indicating whether the observed fit is asymptotic
• neg_check  A logical value indicating whether negative fitted values have been returned

The summary.sars function returns a more useful summary of the model fit results, and the plot.sars function plots the model fit.

References

<NAME>., <NAME>. & <NAME>. (2012) The island species-area relationship: biology and statistics. Journal of Biogeography, 39, 215-231.

Examples

data(galap)
fit <- suppressWarnings(sar_mmf(galap))
summary(fit)
plot(fit)

sar_monod               Fit the Monod model

Description

Fit the Monod model to SAR data.

Usage

sar_monod(data, start = NULL, grid_start = 'partial', grid_n = NULL,
  normaTest = 'none', homoTest = 'none', homoCor = 'spearman', verb = TRUE)

Arguments

data  A dataset in the form of a dataframe with two columns: the first with island/site areas, and the second with the species richness of each island/site.
start  NULL or custom parameter start values for the optimisation algorithm.
grid_start  Should a grid search procedure be implemented to test multiple starting parameter values. Can be one of 'none', 'partial' or 'exhaustive'. The default is set to 'partial'.
grid_n  If grid_start = exhaustive, the number of points sampled in the starting parameter space.
normaTest  The test used to test the normality of the residuals of the model. Can be any of 'lillie' (Lilliefors test), 'shapiro' (Shapiro-Wilk test of normality), 'kolmo' (Kolmogorov-Smirnov test), or 'none' (no residuals normality test is undertaken; the default).
homoTest  The test used to check for homogeneity of the residuals of the model. Can be any of 'cor.fitted' (a correlation of the residuals with the model fitted values), 'cor.area' (a correlation of the residuals with the area values), or 'none' (no residuals homogeneity test is undertaken; the default).
homoCor  The correlation test to be used when homoTest != 'none'. Can be any of 'spearman' (the default), 'pearson', or 'kendall'.
verb  Whether or not to print certain warnings (default = TRUE).

Details

The model is fitted using non-linear regression. The model parameters are estimated by minimizing the residual sum of squares with an unconstrained Nelder-Mead optimization algorithm and the optim function. To avoid numerical problems and speed up the convergence process, the starting values used to run the optimization algorithm are carefully chosen. However, if this does not work, custom values can be provided (using the start argument), or a more comprehensive search can be undertaken using the grid_start argument. See the vignette for more information. The fitting process also determines the observed shape of the model fit, and whether or not the observed fit is asymptotic (see Triantis et al. 2012 for further details). Model validation can be undertaken by assessing the normality (normaTest) and homogeneity (homoTest) of the residuals, and a warning is provided in summary.sars if either test is chosen and fails. A selection of information criteria (e.g. AIC, BIC) are returned and can be used to compare models (see also sar_average). As grid_start has a random component, when grid_start != 'none' in your model fitting, you can get slightly different results each time you fit a model. The parameter confidence intervals returned in sigConf are just simple confidence intervals, calculated as 2 * standard error.

Value

A list of class 'sars' with the following components:
• par  The model parameters
• value  Residual sum of squares
• counts  The number of iterations for the convergence of the fitting algorithm
• convergence  Numeric code returned from optim indicating model convergence (0 = converged)
• message  Any message from the model fit algorithm
• hessian  A symmetric matrix giving an estimate of the Hessian at the solution found
• verge  Logical code indicating that the optim model convergence value is zero
• startValues  The start values for the model parameters used in the optimisation
• data  Observed data
• model  A list of model information (e.g. the model name and formula)
• calculated  The fitted values of the model
• residuals  The model residuals
• AIC  The AIC value of the model
• AICc  The AICc value of the model
• BIC  The BIC value of the model
• R2  The R2 value of the model
• R2a  The adjusted R2 value of the model
• sigConf  The model coefficients table
• normaTest  The results of the residuals normality test
• homoTest  The results of the residuals homogeneity test
• observed_shape  The observed shape of the model fit
• asymptote  A logical value indicating whether the observed fit is asymptotic
• neg_check  A logical value indicating whether negative fitted values have been returned

The summary.sars function returns a more useful summary of the model fit results, and the plot.sars function plots the model fit.

References

<NAME>., <NAME>. & <NAME>. (2012) The island species-area relationship: biology and statistics. Journal of Biogeography, 39, 215-231.
Examples

data(galap)
fit <- sar_monod(galap)
summary(fit)
plot(fit)

sar_multi               Create a Collection of SAR Model Fits

Description

Creates a fit collection of SAR model fits, which can then be plotted using plot.sars.

Usage

sar_multi(data, obj = c("power", "powerR", "epm1", "epm2", "p1", "p2", "loga",
  "koba", "monod", "negexpo", "chapman", "weibull3", "asymp", "ratio",
  "gompertz", "weibull4", "betap", "logistic", "heleg", "linear"),
  normaTest = "none", homoTest = "none", homoCor = "spearman",
  grid_start = "partial", grid_n = NULL, verb = TRUE, display = TRUE)

Arguments

data  A dataset in the form of a dataframe with two columns: the first with island/site areas, and the second with the species richness of each island/site.
obj  A vector of model names.
normaTest  The test used to test the normality of the residuals of each model. Can be any of "lillie" (Lilliefors Kolmogorov-Smirnov test), "shapiro" (Shapiro-Wilk test of normality), "kolmo" (Kolmogorov-Smirnov test), or "none" (no residuals normality test is undertaken; the default).
homoTest  The test used to check for homogeneity of the residuals of each model. Can be any of "cor.fitted" (a correlation of the squared residuals with the model fitted values), "cor.area" (a correlation of the squared residuals with the area values), or "none" (no residuals homogeneity test is undertaken; the default).
homoCor  The correlation test to be used when homoTest != "none". Can be any of "spearman" (the default), "pearson", or "kendall".
grid_start  Should a grid search procedure be implemented to test multiple starting parameter values. Can be one of 'none', 'partial' or 'exhaustive'. The default is set to 'partial'.
grid_n  If grid_start = exhaustive, the number of points sampled in the starting parameter space (see details).
verb  Verbose: whether or not to print certain warnings (default: verb == TRUE).
display  Show the model fitting output and related messages (default: display == TRUE).

Details

The sar_models() function can be used to bring up a list of the 20 model names. display_sars_models() generates a table of the 20 models with model information.

Value

A list of class 'sars' with n elements, corresponding to the n individual SAR model fits.

Examples

data(galap)
# construct a fit_collection object of 3 SAR model fits
fit2 <- sar_multi(galap, obj = c("power", "loga", "linear"))
plot(fit2)
# construct a fit_collection object of all 20 SAR model fits
# using no grid_start for speed
fit3 <- sar_multi(galap, grid_start = "none")

sar_negexpo             Fit the Negative exponential model

Description

Fit the Negative exponential model to SAR data.

Usage

sar_negexpo(data, start = NULL, grid_start = 'partial', grid_n = NULL,
  normaTest = 'none', homoTest = 'none', homoCor = 'spearman', verb = TRUE)

Arguments

data  A dataset in the form of a dataframe with two columns: the first with island/site areas, and the second with the species richness of each island/site.
start  NULL or custom parameter start values for the optimisation algorithm.
grid_start  Should a grid search procedure be implemented to test multiple starting parameter values. Can be one of 'none', 'partial' or 'exhaustive'. The default is set to 'partial'.
grid_n  If grid_start = exhaustive, the number of points sampled in the starting parameter space.
normaTest  The test used to test the normality of the residuals of the model. Can be any of 'lillie' (Lilliefors test), 'shapiro' (Shapiro-Wilk test of normality), 'kolmo' (Kolmogorov-Smirnov test), or 'none' (no residuals normality test is undertaken; the default).
homoTest  The test used to check for homogeneity of the residuals of the model. Can be any of 'cor.fitted' (a correlation of the residuals with the model fitted values), 'cor.area' (a correlation of the residuals with the area values), or 'none' (no residuals homogeneity test is undertaken; the default).
homoCor  The correlation test to be used when homoTest != 'none'. Can be any of 'spearman' (the default), 'pearson', or 'kendall'.
verb  Whether or not to print certain warnings (default = TRUE).

Details

The model is fitted using non-linear regression. The model parameters are estimated by minimizing the residual sum of squares with an unconstrained Nelder-Mead optimization algorithm and the optim function. To avoid numerical problems and speed up the convergence process, the starting values used to run the optimization algorithm are carefully chosen. However, if this does not work, custom values can be provided (using the start argument), or a more comprehensive search can be undertaken using the grid_start argument. See the vignette for more information. The fitting process also determines the observed shape of the model fit, and whether or not the observed fit is asymptotic (see Triantis et al. 2012 for further details). Model validation can be undertaken by assessing the normality (normaTest) and homogeneity (homoTest) of the residuals, and a warning is provided in summary.sars if either test is chosen and fails. A selection of information criteria (e.g. AIC, BIC) are returned and can be used to compare models (see also sar_average). As grid_start has a random component, when grid_start != 'none' in your model fitting, you can get slightly different results each time you fit a model. The parameter confidence intervals returned in sigConf are just simple confidence intervals, calculated as 2 * standard error.

Value

A list of class 'sars' with the following components:
• par  The model parameters
• value  Residual sum of squares
• counts  The number of iterations for the convergence of the fitting algorithm
• convergence  Numeric code returned from optim indicating model convergence (0 = converged)
• message  Any message from the model fit algorithm
• hessian  A symmetric matrix giving an estimate of the Hessian at the solution found
• verge  Logical code indicating that the optim model convergence value is zero
• startValues  The start values for the model parameters used in the optimisation
• data  Observed data
• model  A list of model information (e.g. the model name and formula)
• calculated  The fitted values of the model
• residuals  The model residuals
• AIC  The AIC value of the model
• AICc  The AICc value of the model
• BIC  The BIC value of the model
• R2  The R2 value of the model
• R2a  The adjusted R2 value of the model
• sigConf  The model coefficients table
• normaTest  The results of the residuals normality test
• homoTest  The results of the residuals homogeneity test
• observed_shape  The observed shape of the model fit
• asymptote  A logical value indicating whether the observed fit is asymptotic
• neg_check  A logical value indicating whether negative fitted values have been returned

The summary.sars function returns a more useful summary of the model fit results, and the plot.sars function plots the model fit.

References

<NAME>., <NAME>. & <NAME>. (2012) The island species-area relationship: biology and statistics. Journal of Biogeography, 39, 215-231.
Examples

data(galap)
fit <- sar_negexpo(galap)
summary(fit)
plot(fit)

sar_p1                  Fit the Persistence function 1 model

Description

Fit the Persistence function 1 model to SAR data.

Usage

sar_p1(data, start = NULL, grid_start = 'partial', grid_n = NULL,
  normaTest = 'none', homoTest = 'none', homoCor = 'spearman', verb = TRUE)

Arguments

data  A dataset in the form of a dataframe with two columns: the first with island/site areas, and the second with the species richness of each island/site.
start  NULL or custom parameter start values for the optimisation algorithm.
grid_start  Should a grid search procedure be implemented to test multiple starting parameter values. Can be one of 'none', 'partial' or 'exhaustive'. The default is set to 'partial'.
grid_n  If grid_start = exhaustive, the number of points sampled in the starting parameter space.
normaTest  The test used to test the normality of the residuals of the model. Can be any of 'lillie' (Lilliefors test), 'shapiro' (Shapiro-Wilk test of normality), 'kolmo' (Kolmogorov-Smirnov test), or 'none' (no residuals normality test is undertaken; the default).
homoTest  The test used to check for homogeneity of the residuals of the model. Can be any of 'cor.fitted' (a correlation of the residuals with the model fitted values), 'cor.area' (a correlation of the residuals with the area values), or 'none' (no residuals homogeneity test is undertaken; the default).
homoCor  The correlation test to be used when homoTest != 'none'. Can be any of 'spearman' (the default), 'pearson', or 'kendall'.
verb  Whether or not to print certain warnings (default = TRUE).

Details

The model is fitted using non-linear regression. The model parameters are estimated by minimizing the residual sum of squares with an unconstrained Nelder-Mead optimization algorithm and the optim function. To avoid numerical problems and speed up the convergence process, the starting values used to run the optimization algorithm are carefully chosen. However, if this does not work, custom values can be provided (using the start argument), or a more comprehensive search can be undertaken using the grid_start argument. See the vignette for more information. The fitting process also determines the observed shape of the model fit, and whether or not the observed fit is asymptotic (see Triantis et al. 2012 for further details). Model validation can be undertaken by assessing the normality (normaTest) and homogeneity (homoTest) of the residuals, and a warning is provided in summary.sars if either test is chosen and fails. A selection of information criteria (e.g. AIC, BIC) are returned and can be used to compare models (see also sar_average, and the sketch below). As grid_start has a random component, when grid_start != 'none' in your model fitting, you can get slightly different results each time you fit a model. The parameter confidence intervals returned in sigConf are just simple confidence intervals, calculated as 2 * standard error.
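Since the information criteria listed under Value below are stored as components of the returned 'sars' list, fits of different models to the same data can be compared directly; a brief illustration (the choice of comparison model is arbitrary):

data(galap)
fit_p1 <- sar_p1(galap)
fit_pow <- sar_power(galap)
# lower values indicate a better-supported model
c(p1 = fit_p1$AIC, power = fit_pow$AIC)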
Value

A list of class 'sars' with the following components:
• par  The model parameters
• value  Residual sum of squares
• counts  The number of iterations for the convergence of the fitting algorithm
• convergence  Numeric code returned from optim indicating model convergence (0 = converged)
• message  Any message from the model fit algorithm
• hessian  A symmetric matrix giving an estimate of the Hessian at the solution found
• verge  Logical code indicating that the optim model convergence value is zero
• startValues  The start values for the model parameters used in the optimisation
• data  Observed data
• model  A list of model information (e.g. the model name and formula)
• calculated  The fitted values of the model
• residuals  The model residuals
• AIC  The AIC value of the model
• AICc  The AICc value of the model
• BIC  The BIC value of the model
• R2  The R2 value of the model
• R2a  The adjusted R2 value of the model
• sigConf  The model coefficients table
• normaTest  The results of the residuals normality test
• homoTest  The results of the residuals homogeneity test
• observed_shape  The observed shape of the model fit
• asymptote  A logical value indicating whether the observed fit is asymptotic
• neg_check  A logical value indicating whether negative fitted values have been returned

The summary.sars function returns a more useful summary of the model fit results, and the plot.sars function plots the model fit.

References

<NAME>., <NAME>. & <NAME>. (2012) The island species-area relationship: biology and statistics. Journal of Biogeography, 39, 215-231.

Examples

data(galap)
fit <- sar_p1(galap)
summary(fit)
plot(fit)

sar_p2                  Fit the Persistence function 2 model

Description

Fit the Persistence function 2 model to SAR data.

Usage

sar_p2(data, start = NULL, grid_start = 'partial', grid_n = NULL,
  normaTest = 'none', homoTest = 'none', homoCor = 'spearman', verb = TRUE)

Arguments

data  A dataset in the form of a dataframe with two columns: the first with island/site areas, and the second with the species richness of each island/site.
start  NULL or custom parameter start values for the optimisation algorithm.
grid_start  Should a grid search procedure be implemented to test multiple starting parameter values. Can be one of 'none', 'partial' or 'exhaustive'. The default is set to 'partial'.
grid_n  If grid_start = exhaustive, the number of points sampled in the starting parameter space.
normaTest  The test used to test the normality of the residuals of the model. Can be any of 'lillie' (Lilliefors test), 'shapiro' (Shapiro-Wilk test of normality), 'kolmo' (Kolmogorov-Smirnov test), or 'none' (no residuals normality test is undertaken; the default).
homoTest  The test used to check for homogeneity of the residuals of the model. Can be any of 'cor.fitted' (a correlation of the residuals with the model fitted values), 'cor.area' (a correlation of the residuals with the area values), or 'none' (no residuals homogeneity test is undertaken; the default).
homoCor  The correlation test to be used when homoTest != 'none'. Can be any of 'spearman' (the default), 'pearson', or 'kendall'.
verb  Whether or not to print certain warnings (default = TRUE).

Details

The model is fitted using non-linear regression. The model parameters are estimated by minimizing the residual sum of squares with an unconstrained Nelder-Mead optimization algorithm and the optim function. To avoid numerical problems and speed up the convergence process, the starting values used to run the optimization algorithm are carefully chosen.
However, if this does not work, custom values can be provided (using the start argument), or a more comprehensive search can be undertaken using the grid_start argument. See the vignette for more information. The fitting process also determines the observed shape of the model fit, and whether or not the observed fit is asymptotic (see Triantis et al. 2012 for further details). Model validation can be undertaken by assessing the normality (normaTest) and homogeneity (homoTest) of the residuals, and a warning is provided in summary.sars if either test is chosen and fails. A selection of information criteria (e.g. AIC, BIC) are returned and can be used to compare models (see also sar_average). As grid_start has a random component, when grid_start != 'none' in your model fitting, you can get slightly different results each time you fit a model. The parameter confidence intervals returned in sigConf are just simple confidence intervals, calculated as 2 * standard error.

Value

A list of class 'sars' with the following components:
• par  The model parameters
• value  Residual sum of squares
• counts  The number of iterations for the convergence of the fitting algorithm
• convergence  Numeric code returned from optim indicating model convergence (0 = converged)
• message  Any message from the model fit algorithm
• hessian  A symmetric matrix giving an estimate of the Hessian at the solution found
• verge  Logical code indicating that the optim model convergence value is zero
• startValues  The start values for the model parameters used in the optimisation
• data  Observed data
• model  A list of model information (e.g. the model name and formula)
• calculated  The fitted values of the model
• residuals  The model residuals
• AIC  The AIC value of the model
• AICc  The AICc value of the model
• BIC  The BIC value of the model
• R2  The R2 value of the model
• R2a  The adjusted R2 value of the model
• sigConf  The model coefficients table
• normaTest  The results of the residuals normality test
• homoTest  The results of the residuals homogeneity test
• observed_shape  The observed shape of the model fit
• asymptote  A logical value indicating whether the observed fit is asymptotic
• neg_check  A logical value indicating whether negative fitted values have been returned

The summary.sars function returns a more useful summary of the model fit results, and the plot.sars function plots the model fit.

References

<NAME>., <NAME>. & <NAME>. (2012) The island species-area relationship: biology and statistics. Journal of Biogeography, 39, 215-231.

Examples

data(galap)
fit <- sar_p2(galap)
summary(fit)
plot(fit)

sar_power               Fit the Power model

Description

Fit the Power model to SAR data.

Usage

sar_power(data, start = NULL, grid_start = 'partial', grid_n = NULL,
  normaTest = 'none', homoTest = 'none', homoCor = 'spearman', verb = TRUE)

Arguments

data  A dataset in the form of a dataframe with two columns: the first with island/site areas, and the second with the species richness of each island/site.
start  NULL or custom parameter start values for the optimisation algorithm.
grid_start  Should a grid search procedure be implemented to test multiple starting parameter values. Can be one of 'none', 'partial' or 'exhaustive'. The default is set to 'partial'.
grid_n  If grid_start = exhaustive, the number of points sampled in the starting parameter space.
normaTest  The test used to test the normality of the residuals of the model. Can be any of 'lillie' (Lilliefors test), 'shapiro' (Shapiro-Wilk test of normality), 'kolmo' (Kolmogorov-Smirnov test), or 'none' (no residuals normality test is undertaken; the default).
homoTest  The test used to check for homogeneity of the residuals of the model. Can be any of 'cor.fitted' (a correlation of the residuals with the model fitted values), 'cor.area' (a correlation of the residuals with the area values), or 'none' (no residuals homogeneity test is undertaken; the default).
homoCor  The correlation test to be used when homoTest != 'none'. Can be any of 'spearman' (the default), 'pearson', or 'kendall'.
verb  Whether or not to print certain warnings (default = TRUE).

Details

The model is fitted using non-linear regression. The model parameters are estimated by minimizing the residual sum of squares with an unconstrained Nelder-Mead optimization algorithm and the optim function. To avoid numerical problems and speed up the convergence process, the starting values used to run the optimization algorithm are carefully chosen. However, if this does not work, custom values can be provided (using the start argument), or a more comprehensive search can be undertaken using the grid_start argument. See the vignette for more information. The fitting process also determines the observed shape of the model fit, and whether or not the observed fit is asymptotic (see Triantis et al. 2012 for further details). Model validation can be undertaken by assessing the normality (normaTest) and homogeneity (homoTest) of the residuals, and a warning is provided in summary.sars if either test is chosen and fails. A selection of information criteria (e.g. AIC, BIC) are returned and can be used to compare models (see also sar_average). As grid_start has a random component, when grid_start != 'none' in your model fitting, you can get slightly different results each time you fit a model. The parameter confidence intervals returned in sigConf are just simple confidence intervals, calculated as 2 * standard error.

For the power model (and only this model) the returned object (sigConf) and model summary also include the parameter estimates generated from fitting the model using nls, with the parameter values from our model fitting used as starting parameter estimates. This also returns the confidence intervals generated with confint (which calls MASS:::confint.nls), which should be more accurate than the default sars CIs.

Value

A list of class 'sars' with the following components:
• par  The model parameters
• value  Residual sum of squares
• counts  The number of iterations for the convergence of the fitting algorithm
• convergence  Numeric code returned from optim indicating model convergence (0 = converged)
• message  Any message from the model fit algorithm
• hessian  A symmetric matrix giving an estimate of the Hessian at the solution found
• verge  Logical code indicating that the optim model convergence value is zero
• startValues  The start values for the model parameters used in the optimisation
• data  Observed data
• model  A list of model information (e.g. the model name and formula)
• calculated  The fitted values of the model
• residuals  The model residuals
• AIC  The AIC value of the model
• AICc  The AICc value of the model
• BIC  The BIC value of the model
• R2  The R2 value of the model
• R2a  The adjusted R2 value of the model
• sigConf  The model coefficients table
• normaTest  The results of the residuals normality test
• homoTest  The results of the residuals homogeneity test
• observed_shape  The observed shape of the model fit
• asymptote  A logical value indicating whether the observed fit is asymptotic
• neg_check  A logical value indicating whether negative fitted values have been returned

The summary.sars function returns a more useful summary of the model fit results, and the plot.sars function plots the model fit.

References

<NAME>., <NAME>. & <NAME>. (2012) The island species-area relationship: biology and statistics. Journal of Biogeography, 39, 215-231.

Examples

data(galap)
fit <- sar_power(galap)
summary(fit)
plot(fit)

sar_powerR              Fit the PowerR model

Description

Fit the PowerR model to SAR data.

Usage

sar_powerR(data, start = NULL, grid_start = 'partial', grid_n = NULL,
  normaTest = 'none', homoTest = 'none', homoCor = 'spearman', verb = TRUE)

Arguments

data  A dataset in the form of a dataframe with two columns: the first with island/site areas, and the second with the species richness of each island/site.
start  NULL or custom parameter start values for the optimisation algorithm.
grid_start  Should a grid search procedure be implemented to test multiple starting parameter values. Can be one of 'none', 'partial' or 'exhaustive'. The default is set to 'partial'.
grid_n  If grid_start = exhaustive, the number of points sampled in the starting parameter space.
normaTest  The test used to test the normality of the residuals of the model. Can be any of 'lillie' (Lilliefors test), 'shapiro' (Shapiro-Wilk test of normality), 'kolmo' (Kolmogorov-Smirnov test), or 'none' (no residuals normality test is undertaken; the default).
homoTest  The test used to check for homogeneity of the residuals of the model. Can be any of 'cor.fitted' (a correlation of the residuals with the model fitted values), 'cor.area' (a correlation of the residuals with the area values), or 'none' (no residuals homogeneity test is undertaken; the default).
homoCor  The correlation test to be used when homoTest != 'none'. Can be any of 'spearman' (the default), 'pearson', or 'kendall'.
verb  Whether or not to print certain warnings (default = TRUE).

Details

The model is fitted using non-linear regression. The model parameters are estimated by minimizing the residual sum of squares with an unconstrained Nelder-Mead optimization algorithm and the optim function. To avoid numerical problems and speed up the convergence process, the starting values used to run the optimization algorithm are carefully chosen. However, if this does not work, custom values can be provided (using the start argument), or a more comprehensive search can be undertaken using the grid_start argument. See the vignette for more information. The fitting process also determines the observed shape of the model fit, and whether or not the observed fit is asymptotic (see Triantis et al. 2012 for further details). Model validation can be undertaken by assessing the normality (normaTest) and homogeneity (homoTest) of the residuals, and a warning is provided in summary.sars if either test is chosen and fails. A selection of information criteria (e.g. AIC, BIC) are returned and can be used to compare models (see also sar_average).
As grid_start has a random component, when grid_start != 'none' in your model fitting, you can get slightly different results each time you fit a model. The parameter confidence intervals returned in sigConf are just simple confidence intervals, calculated as 2 * standard error.

Value

A list of class 'sars' with the following components:
• par  The model parameters
• value  Residual sum of squares
• counts  The number of iterations for the convergence of the fitting algorithm
• convergence  Numeric code returned from optim indicating model convergence (0 = converged)
• message  Any message from the model fit algorithm
• hessian  A symmetric matrix giving an estimate of the Hessian at the solution found
• verge  Logical code indicating that the optim model convergence value is zero
• startValues  The start values for the model parameters used in the optimisation
• data  Observed data
• model  A list of model information (e.g. the model name and formula)
• calculated  The fitted values of the model
• residuals  The model residuals
• AIC  The AIC value of the model
• AICc  The AICc value of the model
• BIC  The BIC value of the model
• R2  The R2 value of the model
• R2a  The adjusted R2 value of the model
• sigConf  The model coefficients table
• normaTest  The results of the residuals normality test
• homoTest  The results of the residuals homogeneity test
• observed_shape  The observed shape of the model fit
• asymptote  A logical value indicating whether the observed fit is asymptotic
• neg_check  A logical value indicating whether negative fitted values have been returned

The summary.sars function returns a more useful summary of the model fit results, and the plot.sars function plots the model fit.

References

<NAME>., <NAME>. & <NAME>. (2012) The island species-area relationship: biology and statistics. Journal of Biogeography, 39, 215-231.

Examples

data(galap)
fit <- sar_powerR(galap)
summary(fit)
plot(fit)

sar_pred                Use SAR model fits to predict richness on islands of a given size

Description

Predict the richness on an island of a given size using either individual SAR model fits, a fit_collection of model fits, or a multi-model SAR curve.

Usage

sar_pred(fit, area)

Arguments

fit  Either a model fit object, a fit_collection object (generated using sar_multi), or a sar_multi object (generated using sar_average).
area  A numeric vector of area values (length >= 1).

Details

Extrapolation (e.g. predicting the richness of areas too large to be sampled) is one of the primary uses of the SAR. The sar_pred function provides an easy method for undertaking such an exercise. The function works by taking an already fitted SAR model, extracting the parameter values and then using these values and the model function to predict the richness for any value of area provided. If a multi-model SAR curve is used for prediction (i.e. using sar_average), the model information criterion weights (i.e. the conditional probabilities for each of the n models) for each of the individual model fits that were used to generate the curve are stored. The n models are then each used to predict the richness of a larger area and these predictions are multiplied by the respective model weights and summed to provide a multi-model averaged prediction (see the sketch below).

Value

A data.frame of class 'sars' with three columns: 1) the name of the model, 2) the area value for which a prediction has been generated, and 3) the prediction from the model extrapolation.
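The weighted averaging described above reduces to a simple sum of products; a purely illustrative sketch with made-up predictions and weights (these numbers are not output from the package):

# single-model predictions for one area value, and the corresponding
# information criterion weights (which sum to 1)
preds <- c(power = 250, loga = 240, koba = 245)
weights <- c(power = 0.5, loga = 0.3, koba = 0.2)
# multi-model averaged prediction
sum(preds * weights)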
Note

This function is used in the ISAR extrapolation paper of Matthews & Aspin (2019). Code to calculate confidence intervals around the predictions using bootstrapping will be added in a later version of the package. As grid_start has a random component, when grid_start != "none" in your model fitting, you can get slightly different results each time you fit a model or run sar_average and then run sar_pred on it. We would recommend using grid_start = "exhaustive" as this is more likely to find the optimum fit for a given model.

References

Matthews, T.J. & Aspin, T.W.H. (2019) Model averaging fails to improve the extrapolation capability of the island species–area relationship. Journal of Biogeography, 46, 1558-1568.

Examples

data(galap)
# fit the power model and predict richness on an island of area = 5000
fit <- sar_power(data = galap)
p <- sar_pred(fit, area = 5000)
# fit three SAR models and predict richness on islands of area = 5000 & 10000
# using no grid_start for speed
fit2 <- sar_multi(galap, obj = c("power", "loga", "koba"), grid_start = "none")
p2 <- sar_pred(fit2, area = c(5000, 10000))
# calculate a multi-model curve and predict richness on islands of area = 5000 & 10000
# using no grid_start for speed
fit3 <- sar_average(data = galap, grid_start = "none")
p3 <- sar_pred(fit3, area = c(5000, 10000))

sar_ratio               Fit the Rational function model

Description

Fit the Rational function model to SAR data.

Usage

sar_ratio(data, start = NULL, grid_start = 'partial', grid_n = NULL,
  normaTest = 'none', homoTest = 'none', homoCor = 'spearman', verb = TRUE)

Arguments

data  A dataset in the form of a dataframe with two columns: the first with island/site areas, and the second with the species richness of each island/site.
start  NULL or custom parameter start values for the optimisation algorithm.
grid_start  Should a grid search procedure be implemented to test multiple starting parameter values. Can be one of 'none', 'partial' or 'exhaustive'. The default is set to 'partial'.
grid_n  If grid_start = exhaustive, the number of points sampled in the starting parameter space.
normaTest  The test used to test the normality of the residuals of the model. Can be any of 'lillie' (Lilliefors test), 'shapiro' (Shapiro-Wilk test of normality), 'kolmo' (Kolmogorov-Smirnov test), or 'none' (no residuals normality test is undertaken; the default).
homoTest  The test used to check for homogeneity of the residuals of the model. Can be any of 'cor.fitted' (a correlation of the residuals with the model fitted values), 'cor.area' (a correlation of the residuals with the area values), or 'none' (no residuals homogeneity test is undertaken; the default).
homoCor  The correlation test to be used when homoTest != 'none'. Can be any of 'spearman' (the default), 'pearson', or 'kendall'.
verb  Whether or not to print certain warnings (default = TRUE).

Details

The model is fitted using non-linear regression. The model parameters are estimated by minimizing the residual sum of squares with an unconstrained Nelder-Mead optimization algorithm and the optim function. To avoid numerical problems and speed up the convergence process, the starting values used to run the optimization algorithm are carefully chosen. However, if this does not work, custom values can be provided (using the start argument), or a more comprehensive search can be undertaken using the grid_start argument. See the vignette for more information.
The fitting process also determines the observed shape of the model fit, and whether or not the observed fit is asymptotic (see Triantis et al. 2012 for further details). Model validation can be undertaken by assessing the normality (normaTest) and homogeneity (homoTest) of the residuals, and a warning is provided in summary.sars if either test is chosen and fails.

A selection of information criteria (e.g. AIC, BIC) are returned and can be used to compare models (see also sar_average). As grid_start has a random component, when grid_start != 'none' in your model fitting, you can get slightly different results each time you fit a model.

The parameter confidence intervals returned in sigConf are just simple confidence intervals, calculated as 2 * standard error.

Value

A list of class 'sars' with the following components:

• par The model parameters
• value Residual sum of squares
• counts The number of iterations for the convergence of the fitting algorithm
• convergence Numeric code returned from optim indicating model convergence (0 = converged)
• message Any message from the model fit algorithm
• hessian A symmetric matrix giving an estimate of the Hessian at the solution found
• verge Logical code indicating that the optim model convergence value is zero
• startValues The start values for the model parameters used in the optimisation
• data Observed data
• model A list of model information (e.g. the model name and formula)
• calculated The fitted values of the model
• residuals The model residuals
• AIC The AIC value of the model
• AICc The AICc value of the model
• BIC The BIC value of the model
• R2 The R2 value of the model
• R2a The adjusted R2 value of the model
• sigConf The model coefficients table
• normaTest The results of the residuals normality test
• homoTest The results of the residuals homogeneity test
• observed_shape The observed shape of the model fit
• asymptote A logical value indicating whether the observed fit is asymptotic
• neg_check A logical value indicating whether negative fitted values have been returned

The summary.sars function returns a more useful summary of the model fit results, and plot.sars plots the model fit.

References

<NAME>., <NAME>. & <NAME>. (2012) The island species-area relationship: biology and statistics. Journal of Biogeography, 39, 215-231.

Examples

data(galap)
fit <- sar_ratio(galap)
summary(fit)
plot(fit)

sar_threshold Fit threshold SAR models

Description

Fit up to six piecewise (threshold) regression models to SAR data.

Usage

sar_threshold(data, mod = "All", interval = NULL, nisl = NULL,
  non_th_models = TRUE, logAxes = "area", con = 1, logT = log,
  parallel = FALSE, cores = NULL)

Arguments

data A dataset in the form of a dataframe with at least two columns: the first with island/site areas, and the second with the species richness of each island/site.
mod A vector of model names: an individual model, a set of models, or all models. Can be any of 'All' (fit all models), 'ContOne' (continuous one-threshold), 'ZslopeOne' (left-horizontal one-threshold), 'DiscOne' (discontinuous one-threshold), 'ContTwo' (continuous two-threshold), 'ZslopeTwo' (left-horizontal two-threshold), or 'DiscTwo' (discontinuous two-threshold).
interval The amount to increment the threshold value by in the iterative model fitting process (not applicable for the discontinuous models). The default for non-transformed area reverts to 1, while for log-transformed area it is 0.01.
However, these values may not be suitable depending on the range of area values in a dataset, and thus users are advised to set this argument manually.
nisl Set the minimum number of islands to be contained within each of the two segments (in the case of one-threshold models), or the first and last segments (in the case of two-threshold models). It needs to be less than half of the total number of islands in the dataset. Default = NULL.
non_th_models Logical argument (default = TRUE) of whether two non-threshold models (i.e. a simple linear regression: y ~ x; and an intercept-only model: y ~ 1) should also be fitted.
logAxes What log-transformation (if any) should be applied to the area and richness values. Should be one of "none" (no transformation), "area" (only area is log-transformed; default) or "both" (both area and richness log-transformed).
con The constant to add to the species richness values in cases where one of the islands has zero species.
logT The log-transformation to apply to the area and richness values. Can be any of log (default), log2 or log10.
parallel Logical argument for whether parallel processing should be used. Only applicable when the continuous two-threshold and left-horizontal two-threshold models are being fitted.
cores Number of cores to use. Only applicable when parallel = TRUE.

Details

This function is described in more detail in the accompanying paper (Matthews & Rigal, 2020). Fitting the continuous and left-horizontal piecewise models (particularly the two-threshold models) can be time consuming if the range in area is large and/or the interval argument is small. For the two-threshold continuous slope and left-horizontal models, the use of parallel processing (using the parallel argument) is recommended. The number of cores (cores) must be provided. Note that the interval argument is not used to fit the discontinuous models, as, in these cases, the breakpoint must be at a datapoint.

There has been considerable debate regarding the number of parameters that are included in different piecewise models. Here (and thus in our calculation of AIC, AICc, BIC etc.) we consider ContOne to have five parameters, ZslopeOne - 4, DiscOne - 6, ContTwo - 7, ZslopeTwo - 6, DiscTwo - 8. The standard linear model and the intercept model are considered to have 3 and 2 parameters, respectively. The raw lm model fits are provided in the output, in case users want to calculate information criteria using different numbers of parameters. The raw lm model fits can also be used to explore classic diagnostic plots for linear regression analysis in R using the function plot, or other diagnostic tests such as outlierTest, leveragePlots or influencePlot, available in the car package. This is advised, as currently there are no model validation checks undertaken automatically, unlike elsewhere in the sars package.

Confidence intervals around the breakpoints in the one-threshold continuous and left-horizontal models can be calculated using the threshold_ci function. The intercepts and slopes of the different segments in the fitted breakpoint models can be calculated using the get_coef function.

Rarely, multiple breakpoint values can return the same minimum rss (for a given model fit). In these cases, we just randomly choose and return one, and also produce a warning. If this occurs it is worth checking the data and model fits carefully. The nisl argument can be useful to avoid situations where a segment contains only one island, for example.
However, setting strict criteria on the number of data points to be included in segments could be seen as "forcing" the fit of the model, and arguably if a model fit is not interpretable, it is simply that the model does not provide a good representation of the data. Thus, it should not be used without careful thought.

Value

A list of class "threshold" and "sars" with five elements. The first element contains the different model fits (lm objects). The second element contains the names of the fitted models, the third contains the threshold values, the fourth element the dataset (i.e. a dataframe with area and richness values), and the fifth contains details of any axes log-transformations undertaken. summary.sars provides a more user-friendly output (including a model summary table) and plot.threshold plots the model fits.

Note

Due to the increased number of parameters, fitting piecewise regression models to datasets with few islands is not recommended. In particular, we would advise against fitting the two-threshold models to small SAR datasets (e.g. fewer than 10 islands for the one-threshold models, and 20 islands for the two-threshold models).

Author(s)

<NAME> and <NAME>

References

<NAME>. & <NAME>. (2001) Towards a more general species-area relationship: diversity on all islands, great and small. Journal of Biogeography, 28, 431-445.
<NAME>., <NAME>., <NAME>. & <NAME>. (2019) On piecewise models and species-area patterns. Ecology and Evolution, 9, 8351-8361.
<NAME>. et al. (2020) Unravelling the small-island effect through phylogenetic community ecology. Journal of Biogeography.
<NAME>. & <NAME>. (In Review) Thresholds and the species–area relationship: a set of functions for fitting, evaluating and plotting a range of commonly used piecewise models. Frontiers of Biogeography.

Examples

data(aegean2)
a2 <- aegean2[1:168,]
fitT <- sar_threshold(data = a2, mod = c("ContOne", "DiscOne"), interval = 0.1,
  non_th_models = TRUE, logAxes = "area", logT = log10)
summary(fitT)
plot(fitT)
#diagnostic plots for the ContOne model
par(mfrow=c(2, 2))
plot(fitT[[1]][[1]])

sar_weibull3 Fit the Cumulative Weibull 3 par. model

Description

Fit the Cumulative Weibull 3 par. model to SAR data.

Usage

sar_weibull3(data, start = NULL, grid_start = 'partial', grid_n = NULL,
  normaTest = 'none', homoTest = 'none', homoCor = 'spearman', verb = TRUE)

Arguments

data A dataset in the form of a dataframe with two columns: the first with island/site areas, and the second with the species richness of each island/site.
start NULL or custom parameter start values for the optimisation algorithm.
grid_start Should a grid search procedure be implemented to test multiple starting parameter values. Can be one of 'none', 'partial' or 'exhaustive'. The default is set to 'partial'.
grid_n If grid_start = 'exhaustive', the number of points sampled in the starting parameter space.
normaTest The test used to test the normality of the residuals of the model. Can be any of 'lillie' (Lilliefors test), 'shapiro' (Shapiro-Wilk test of normality), 'kolmo' (Kolmogorov-Smirnov test), or 'none' (no residuals normality test is undertaken; the default).
homoTest The test used to check for homogeneity of the residuals of the model. Can be any of 'cor.fitted' (a correlation of the residuals with the model fitted values), 'cor.area' (a correlation of the residuals with the area values), or 'none' (no residuals homogeneity test is undertaken; the default).
homoCor The correlation test to be used when homoTest != 'none'. Can be any of 'spearman' (the default), 'pearson', or 'kendall'.
verb Whether or not to print certain warnings (default = TRUE).

Details

The model is fitted using non-linear regression. The model parameters are estimated by minimizing the residual sum of squares with an unconstrained Nelder-Mead optimization algorithm and the optim function. To avoid numerical problems and speed up the convergence process, the starting values used to run the optimization algorithm are carefully chosen. However, if this does not work, custom values can be provided (using the start argument), or a more comprehensive search can be undertaken using the grid_start argument. See the vignette for more information.

The fitting process also determines the observed shape of the model fit, and whether or not the observed fit is asymptotic (see Triantis et al. 2012 for further details). Model validation can be undertaken by assessing the normality (normaTest) and homogeneity (homoTest) of the residuals, and a warning is provided in summary.sars if either test is chosen and fails.

A selection of information criteria (e.g. AIC, BIC) are returned and can be used to compare models (see also sar_average). As grid_start has a random component, when grid_start != 'none' in your model fitting, you can get slightly different results each time you fit a model.

The parameter confidence intervals returned in sigConf are just simple confidence intervals, calculated as 2 * standard error.

Value

A list of class 'sars' with the following components:

• par The model parameters
• value Residual sum of squares
• counts The number of iterations for the convergence of the fitting algorithm
• convergence Numeric code returned from optim indicating model convergence (0 = converged)
• message Any message from the model fit algorithm
• hessian A symmetric matrix giving an estimate of the Hessian at the solution found
• verge Logical code indicating that the optim model convergence value is zero
• startValues The start values for the model parameters used in the optimisation
• data Observed data
• model A list of model information (e.g. the model name and formula)
• calculated The fitted values of the model
• residuals The model residuals
• AIC The AIC value of the model
• AICc The AICc value of the model
• BIC The BIC value of the model
• R2 The R2 value of the model
• R2a The adjusted R2 value of the model
• sigConf The model coefficients table
• normaTest The results of the residuals normality test
• homoTest The results of the residuals homogeneity test
• observed_shape The observed shape of the model fit
• asymptote A logical value indicating whether the observed fit is asymptotic
• neg_check A logical value indicating whether negative fitted values have been returned

The summary.sars function returns a more useful summary of the model fit results, and plot.sars plots the model fit.

References

<NAME>., <NAME>. & <NAME>. (2012) The island species-area relationship: biology and statistics. Journal of Biogeography, 39, 215-231.

Examples

data(galap)
fit <- sar_weibull3(galap)
summary(fit)
plot(fit)

sar_weibull4 Fit the Cumulative Weibull 4 par. model

Description

Fit the Cumulative Weibull 4 par. model to SAR data.

Usage

sar_weibull4(data, start = NULL, grid_start = 'partial', grid_n = NULL,
  normaTest = 'none', homoTest = 'none', homoCor = 'spearman', verb = TRUE)

Arguments

data A dataset in the form of a dataframe with two columns: the first with island/site areas, and the second with the species richness of each island/site.
start NULL or custom parameter start values for the optimisation algorithm.
grid_start Should a grid search procedure be implemented to test multiple starting parameter values. Can be one of 'none', 'partial' or 'exhaustive'. The default is set to 'partial'.
grid_n If grid_start = 'exhaustive', the number of points sampled in the starting parameter space.
normaTest The test used to test the normality of the residuals of the model. Can be any of 'lillie' (Lilliefors test), 'shapiro' (Shapiro-Wilk test of normality), 'kolmo' (Kolmogorov-Smirnov test), or 'none' (no residuals normality test is undertaken; the default).
homoTest The test used to check for homogeneity of the residuals of the model. Can be any of 'cor.fitted' (a correlation of the residuals with the model fitted values), 'cor.area' (a correlation of the residuals with the area values), or 'none' (no residuals homogeneity test is undertaken; the default).
homoCor The correlation test to be used when homoTest != 'none'. Can be any of 'spearman' (the default), 'pearson', or 'kendall'.
verb Whether or not to print certain warnings (default = TRUE).

Details

The model is fitted using non-linear regression. The model parameters are estimated by minimizing the residual sum of squares with an unconstrained Nelder-Mead optimization algorithm and the optim function. To avoid numerical problems and speed up the convergence process, the starting values used to run the optimization algorithm are carefully chosen. However, if this does not work, custom values can be provided (using the start argument), or a more comprehensive search can be undertaken using the grid_start argument. See the vignette for more information.

The fitting process also determines the observed shape of the model fit, and whether or not the observed fit is asymptotic (see Triantis et al. 2012 for further details). Model validation can be undertaken by assessing the normality (normaTest) and homogeneity (homoTest) of the residuals, and a warning is provided in summary.sars if either test is chosen and fails.

A selection of information criteria (e.g. AIC, BIC) are returned and can be used to compare models (see also sar_average). As grid_start has a random component, when grid_start != 'none' in your model fitting, you can get slightly different results each time you fit a model.

The parameter confidence intervals returned in sigConf are just simple confidence intervals, calculated as 2 * standard error.

Value

A list of class 'sars' with the following components:

• par The model parameters
• value Residual sum of squares
• counts The number of iterations for the convergence of the fitting algorithm
• convergence Numeric code returned from optim indicating model convergence (0 = converged)
• message Any message from the model fit algorithm
• hessian A symmetric matrix giving an estimate of the Hessian at the solution found
• verge Logical code indicating that the optim model convergence value is zero
• startValues The start values for the model parameters used in the optimisation
• data Observed data
• model A list of model information (e.g. the model name and formula)
• calculated The fitted values of the model
• residuals The model residuals
• AIC The AIC value of the model
• AICc The AICc value of the model
• BIC The BIC value of the model
• R2 The R2 value of the model
• R2a The adjusted R2 value of the model
• sigConf The model coefficients table
• normaTest The results of the residuals normality test
• homoTest The results of the residuals homogeneity test
• observed_shape The observed shape of the model fit
• asymptote A logical value indicating whether the observed fit is asymptotic
• neg_check A logical value indicating whether negative fitted values have been returned

The summary.sars function returns a more useful summary of the model fit results, and plot.sars plots the model fit.

References

<NAME>., <NAME>. & <NAME>. (2012) The island species-area relationship: biology and statistics. Journal of Biogeography, 39, 215-231.

Examples

data(galap)
fit <- sar_weibull4(galap)
summary(fit)
plot(fit)

summary.sars Summarising the results of the model fitting functions

Description

S3 method for class 'sars'. summary.sars creates summary statistics for objects of class 'sars'. The exact summary statistics computed depend on the 'Type' attribute (e.g. 'multi') of the 'sars' object. The summary method generates more useful information for the user than the standard model fitting functions. Another S3 method (print.summary.sars; not documented) is used to print the output.

Usage

## S3 method for class 'sars'
summary(object, ...)

Arguments

object An object of class 'sars'.
... Further arguments.

Value

The summary.sars function returns an object of class "summary.sars". A print function is used to obtain and print a summary of the model fit results.

For a 'sars' object of Type 'fit', a list with 16 elements is returned that contains useful information from the model fit, including the model parameter table (with t-values, p-values and confidence intervals), model fit statistics (e.g. R2, AIC), the observed shape of the model and whether or not the fit is asymptotic, and the results of any additional model checks undertaken (e.g. normality of the residuals).

For a 'sars' object of Type 'multi', a list with 5 elements is returned: (i) a vector of the names of the models that were successfully fitted and passed any additional checks, (ii) a character string containing the name of the criterion used to rank models, (iii) a data frame of the ranked models, (iv) a vector of the names of any models that were not fitted or did not pass any additional checks, and (v) a logical vector specifying whether the optim convergence code for each model that passed all the checks is zero. In regards to (iii; Model_table), the dataframe contains the fit summaries for each successfully fitted model (including the value of the model criterion used to compare models, the R2 and adjusted R2, and the observed shape of the fit); the models are ranked in decreasing order of information criterion weight.

For a 'sars' object of Type 'lin_pow', a list with up to 7 elements is returned: (i) the model fit output from the lm function, (ii) the fitted values of the model, (iii) the observed data, (iv and v) the results of the residuals normality and heterogeneity tests, and (vi) the log-transformation function used. If the argument compare = TRUE is used in lin_pow, a 7th element is returned that contains the parameter values from the non-linear power model.
For a 'sars' object of Type 'threshold', a list with three elements is returned: (i) the information criterion used to order the ranked model summary table (currently just BIC), (ii) a model summary table (models are ranked using BIC), and (iii) details of any axes log-transformations undertaken. Note that in the model summary table, if log-area is used as the predictor, the threshold values will be on the log scale used. Thus it may be preferable to back-transform them (e.g. using exp(th) if natural logarithms are used) so that they are on the scale of untransformed area. Th1 and Th2 in the table are the threshold value(s), and seg1, seg2, seg3 provide the number of datapoints within each segment (for the threshold models); one-threshold models have two segments, and two-threshold models have three segments.

Examples

data(galap)
#fit a multimodel SAR and get the model table
mf <- sar_average(data = galap, grid_start = "none")
summary(mf)
summary(mf)$Model_table
#Get a summary of the fit of the linear power model
fit <- lin_pow(galap, con = 1, compare = TRUE)
summary(fit)

threshold_ci Calculate confidence intervals around breakpoints

Description

Generate confidence intervals around the breakpoints of the one-threshold continuous and left-horizontal models. Two types of confidence interval can be implemented: a confidence interval derived from an inverted F test and an empirical bootstrap confidence interval.

Usage

threshold_ci(object, cl = 0.95, method = "boot", interval = NULL,
  Nboot = 100, verb = TRUE)

Arguments

object An object of class 'thresholds', generated using the sar_threshold function. The object must contain fits of either (or both) of the one-threshold continuous or the one-threshold left-horizontal model.
cl The confidence level. Default value is 0.95 (95 percent).
method Either bootstrapping (boot) or an inverted F test (F).
interval The amount to increment the threshold value by in the iterative model fitting process used in both the F and boot methods. The default for non-transformed area reverts to 1, while for log-transformed area it is 0.01. It is advised that the same interval value used when running sar_threshold is used here.
Nboot Number of bootstrap samples (for use with method = "boot").
verb Should progress be reported. If TRUE, every 50th bootstrap sample is reported (for use with method = "boot").

Details

Full details of the two approaches can be found in Toms and Lesperance (2003). If the number of bootstrap samples is large, the function can take a while to run. Following Toms and Lesperance (2003), we therefore recommend the use of the inverted F test confidence interval when sample size is large, and bootstrapped confidence intervals when sample size is smaller.

Currently only available for the one-threshold continuous and left-horizontal threshold models.

Value

A list of class "sars" with two elements. If method "F" is used, the list contains only the confidence interval values. If method "boot" is used, the list contains two elements. The first element is the full set of bootstrapped breakpoint estimates for each model and the second contains the confidence interval values.

Author(s)

<NAME> and <NAME>

References

Toms, J.D. & Lesperance, M.L. (2003) Piecewise regression: a tool for identifying ecological thresholds. Ecology, 84, 2034-2041.
Examples

data(aegean2)
a2 <- aegean2[1:168,]
fitT <- sar_threshold(data = a2, mod = "ContOne", interval = 0.1,
  non_th_models = TRUE, logAxes = "area", logT = log10)
#calculate confidence intervals using bootstrapping
#(very low Nboot just as an example)
CI <- threshold_ci(fitT, method = "boot", interval = NULL, Nboot = 3)
CI
#Use the F method instead, with 90% confidence interval
CI2 <- threshold_ci(fitT, cl = 0.90, method = "F", interval = NULL)
CI2
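Because logT = log10 was used above, the fitted breakpoints are reported on the log10(area) scale. A short sketch of two follow-up steps (the threshold value th below is a hypothetical number read off the summary table):

#back-transform a breakpoint to the original area scale
th <- 1.5        #hypothetical Th1 value from summary(fitT)
10^th            #area at the breakpoint on the untransformed scale
#segment intercepts and slopes (see get_coef under sar_threshold)
get_coef(fitT)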
Crate zenoh_result
===

⚠️ WARNING ⚠️ This crate is intended for Zenoh's internal use. See Zenoh's documentation for the user-facing API.

Macros
---
* anyhow: Construct an ad-hoc error from a string or existing non-`anyhow` error value.
* bail
* to_zerror
* zerror

Structs
---
* NegativeI8
* ShmError
* ZError

Traits
---
* ErrNo
* IError: `Error` is a trait representing the basic expectations for error values, i.e., values of type `E` in `Result<T, E>`.

Functions
---
* cold
* likely
* unlikely

Type Aliases
---
* Error
* ZResult

Macro zenoh_result::anyhow
===

```
macro_rules! anyhow {
    ($msg:literal $(,)?) => { ... };
    ($err:expr $(,)?) => { ... };
    ($fmt:expr, $($arg:tt)*) => { ... };
}
```

Construct an ad-hoc error from a string or existing non-`anyhow` error value.

This evaluates to an `Error`. It can take either just a string, or a format string with arguments. It also can take any custom type which implements `Debug` and `Display`.

If called with a single argument whose type implements `std::error::Error` (in addition to `Debug` and `Display`, which are always required), then that Error impl's `source` is preserved as the `source` of the resulting `anyhow::Error`.

Example
---

```
use anyhow::{anyhow, Result};

fn lookup(key: &str) -> Result<V> {
    if key.len() != 16 {
        return Err(anyhow!("key length must be 16 characters, got {:?}", key));
    }
    // ...
}
```

Struct zenoh_result::NegativeI8
===

```
#[repr(transparent)]
pub struct NegativeI8(/* private fields */);
```

Implementations
---

### impl NegativeI8

* pub const fn new(v: i8) -> Self
* pub const fn get(self) -> i8
* pub const MIN: Self

Trait Implementations
---

* Clone: fn clone(&self) -> NegativeI8 returns a copy of the value; fn clone_from(&mut self, source: &Self) performs copy-assignment from `source`.
* Debug: fn fmt(&self, f: &mut Formatter<'_>) -> Result formats the value using the given formatter.
* Hash: fn hash<__H: Hasher>(&self, state: &mut __H) feeds this value into the given `Hasher`; fn hash_slice<H>(data: &[Self], state: &mut H) feeds a slice of this type into the given `Hasher`.
* Ord: fn cmp(&self, other: &NegativeI8) -> Ordering returns an `Ordering` between `self` and `other`; max, min and clamp are provided as usual.
* PartialEq: fn eq(&self, other: &NegativeI8) -> bool tests for equality and is used by `==`; fn ne tests for `!=`.
* PartialOrd: fn partial_cmp(&self, other: &NegativeI8) -> Option<Ordering> returns an ordering between `self` and `other` values if one exists; lt, le, gt and ge back the `<`, `<=`, `>` and `>=` operators.
* Eq, StructuralEq, StructuralPartialEq

Auto Trait Implementations
---

RefUnwindSafe, Send, Sync, Unpin, UnwindSafe

Blanket Implementations
---

Any, Borrow<T>, BorrowMut<T>, From<T>, Into<U>, ToOwned, TryFrom<U>, TryInto<U>

Struct zenoh_result::ShmError
===

```
pub struct ShmError(pub ZError);
```

Tuple Fields
---

`0: ZError`

Trait Implementations
---

* Debug: fn fmt(&self, f: &mut Formatter<'_>) -> Result formats the value using the given formatter.
* Display: fn fmt(&self, f: &mut Formatter<'_>) -> Result formats the value using the given formatter.
* ErrNo: fn errno(&self) -> NegativeI8
* Error: fn source(&self) -> Option<&(dyn Error + 'static)> gives the lower-level source of this error, if any. The deprecated description() (since 1.42.0: use the Display impl or to_string()) and cause() (since 1.33.0: replaced by Error::source) methods are also provided, as is the nightly-only experimental provide() (`error_generic_member_access`), which provides type-based access to context intended for error reports.
Auto Trait Implementations
---

!RefUnwindSafe, Send, Sync, Unpin, !UnwindSafe

Blanket Implementations
---

Any, Borrow<T>, BorrowMut<T>, From<T>, Into<U>, ToString, TryFrom<U>, TryInto<U>

Struct zenoh_result::ZError
===

```
pub struct ZError { /* private fields */ }
```

Implementations
---

### impl ZError

* pub fn new<E: Into<AnyError>>(error: E, file: &'static str, line: u32, errno: NegativeI8) -> ZError
* pub fn set_source<S: Into<Error>>(self, source: S) -> Self

Trait Implementations
---

* Debug: fn fmt(&self, f: &mut Formatter<'_>) -> Result formats the value using the given formatter.
* Display: fn fmt(&self, f: &mut Formatter<'_>) -> Result formats the value using the given formatter.
* ErrNo: fn errno(&self) -> NegativeI8
* Error: fn source(&self) -> Option<&(dyn Error + 'static)> gives the lower-level source of this error, if any. The deprecated description() and cause() methods and the nightly-only provide() method are also available.
* Sync

Auto Trait Implementations
---

!RefUnwindSafe, Unpin, !UnwindSafe

Blanket Implementations
---

Any, Borrow<T>, BorrowMut<T>, From<T>, Into<U>, ToString
TryFrom<U>, TryInto<U>

Trait zenoh_result::IError
===

```
pub trait IError: Debug + Display {
    // Provided methods
    fn source(&self) -> Option<&(dyn Error + 'static)> { ... }
    fn description(&self) -> &str { ... }
    fn cause(&self) -> Option<&dyn Error> { ... }
    fn provide<'a>(&'a self, request: &mut Request<'a>) { ... }
}
```

`Error` is a trait representing the basic expectations for error values, i.e., values of type `E` in `Result<T, E>`.

Errors must describe themselves through the `Display` and `Debug` traits. Error messages are typically concise lowercase sentences without trailing punctuation:

```
let err = "NaN".parse::<u32>().unwrap_err();
assert_eq!(err.to_string(), "invalid digit found in string");
```

Errors may provide cause information. `Error::source()` is generally used when errors cross "abstraction boundaries". If one module must report an error that is caused by an error from a lower-level module, it can allow accessing that error via `Error::source()`. This makes it possible for the high-level module to provide its own errors while also revealing some of the implementation for debugging.

Provided Methods
---

#### fn source(&self) -> Option<&(dyn Error + 'static)>

The lower-level source of this error, if any.

Examples

```
use std::error::Error;
use std::fmt;

#[derive(Debug)]
struct SuperError {
    source: SuperErrorSideKick,
}

impl fmt::Display for SuperError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "SuperError is here!")
    }
}

impl Error for SuperError {
    fn source(&self) -> Option<&(dyn Error + 'static)> {
        Some(&self.source)
    }
}

#[derive(Debug)]
struct SuperErrorSideKick;

impl fmt::Display for SuperErrorSideKick {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "SuperErrorSideKick is here!")
    }
}

impl Error for SuperErrorSideKick {}

fn get_super_error() -> Result<(), SuperError> {
    Err(SuperError { source: SuperErrorSideKick })
}

fn main() {
    match get_super_error() {
        Err(e) => {
            println!("Error: {e}");
            println!("Caused by: {}", e.source().unwrap());
        }
        _ => println!("No error"),
    }
}
```

#### fn description(&self) -> &str

👎 Deprecated since 1.42.0: use the Display impl or to_string()

```
if let Err(e) = "xc".parse::<u32>() {
    // Print `e` itself, no need for description().
    eprintln!("Error: {e}");
}
```

#### fn cause(&self) -> Option<&dyn Error>

👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting

#### fn provide<'a>(&'a self, request: &mut Request<'a>)

🔬 This is a nightly-only experimental API (`error_generic_member_access`). Provides type-based access to context intended for error reports. Used in conjunction with `Request::provide_value` and `Request::provide_ref` to extract references to member variables from `dyn Error` trait objects.

Example

```
#![feature(error_generic_member_access)]
#![feature(error_in_core)]
use core::fmt;
use core::error::{request_ref, Request};

#[derive(Debug)]
enum MyLittleTeaPot {
    Empty,
}

#[derive(Debug)]
struct MyBacktrace {
    // ...
}

impl MyBacktrace {
    fn new() -> MyBacktrace {
        // ...
    }
}

#[derive(Debug)]
struct Error {
    backtrace: MyBacktrace,
}

impl fmt::Display for Error {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "Example Error")
    }
}

impl std::error::Error for Error {
    fn provide<'a>(&'a self, request: &mut Request<'a>) {
        request.provide_ref::<MyBacktrace>(&self.backtrace);
    }
}

fn main() {
    let backtrace = MyBacktrace::new();
    let error = Error { backtrace };
    let dyn_error = &error as &dyn std::error::Error;
    let backtrace_ref = request_ref::<MyBacktrace>(dyn_error).unwrap();
    assert!(core::ptr::eq(&error.backtrace, backtrace_ref));
    assert!(request_ref::<MyLittleTeaPot>(dyn_error).is_none());
}
```

Implementations
---

### impl dyn Error

* pub fn is<T>(&self) -> bool where T: Error + 'static: returns `true` if the inner type is the same as `T`.
* pub fn downcast_ref<T>(&self) -> Option<&T> where T: Error + 'static: returns some reference to the inner value if it is of type `T`, or `None` if it isn't.
* pub fn downcast_mut<T>(&mut self) -> Option<&mut T> where T: Error + 'static: returns some mutable reference to the inner value if it is of type `T`, or `None` if it isn't.

### impl dyn Error + Send and impl dyn Error + Send + Sync

* is, downcast_ref and downcast_mut forward to the methods defined on the type `dyn Error`.

### impl dyn Error

* pub fn sources(&self) -> Source<'_> 🔬 This is a nightly-only experimental API (`error_iter`). Returns an iterator starting with the current error and continuing with recursively calling `Error::source`. If you want to omit the current error and only use its sources, use `skip(1)`.

Examples

```
#![feature(error_iter)]
use std::error::Error;
use std::fmt;

#[derive(Debug)]
struct A;

#[derive(Debug)]
struct B(Option<Box<dyn Error + 'static>>);

impl fmt::Display for A {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "A")
    }
}

impl fmt::Display for B {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "B")
    }
}

impl Error for A {}

impl Error for B {
    fn source(&self) -> Option<&(dyn Error + 'static)> {
        self.0.as_ref().map(|e| e.as_ref())
    }
}

let b = B(Some(Box::new(A)));

// let err : Box<Error> = b.into(); // or
let err = &b as &(dyn Error);

let mut iter = err.sources();

assert_eq!("B".to_string(), iter.next().unwrap().to_string());
assert_eq!("A".to_string(), iter.next().unwrap().to_string());
assert!(iter.next().is_none());
assert!(iter.next().is_none());
```

### impl dyn Error + Send

* pub fn downcast<T>(self: Box<dyn Error + Send, Global>) -> Result<Box<T, Global>, Box<dyn Error + Send, Global>> where T: Error + 'static: attempts to downcast the box to a concrete type.
### impl dyn Error + Send + Sync

* pub fn downcast<T>(self: Box<dyn Error + Send + Sync, Global>) -> Result<Box<T, Global>, Box<dyn Error + Send + Sync, Global>> where T: Error + 'static: attempts to downcast the box to a concrete type.

### impl dyn Error

* pub fn downcast<T>(self: Box<dyn Error, Global>) -> Result<Box<T, Global>, Box<dyn Error, Global>> where T: Error + 'static: attempts to downcast the box to a concrete type.

Trait Implementations
---

* ErrNo for dyn Error, dyn Error + Send and dyn Error + Send + Sync: fn errno(&self) -> NegativeI8

Implementors
---

The standard library provides implementations for, among others: Infallible, VarError, RecvTimeoutError, TryRecvError, !, TryReserveError, FromVecWithNulError, IntoStringError, NulError, FromUtf8Error, FromUtf16Error, LayoutError, AllocError, TryFromSliceError, BorrowError, BorrowMutError, CharTryFromError, ParseCharError, DecodeUtf16Error, TryFromCharError, FromBytesUntilNulError, FromBytesWithNulError, core::fmt::Error, AddrParseError, ParseFloatError, ParseIntError, TryFromIntError, ParseBoolError, Utf8Error, TryFromFloatSecsError, JoinPathsError, WriterPanicked, std::io::error::Error, StripPrefixError, ExitStatusError, RecvError, AccessError, SystemTimeError, OccupiedError, &'a T where T: Error + ?Sized, TrySendError<T>, TryLockError<T>, Box<T> where T: Error, ThinBox<T>, Arc<T>, SendError<T>, PoisonError<T>, IntoInnerError<W> and GetManyMutError<N>. This crate adds ShmError and ZError.

Type Alias zenoh_result::Error
===

```
pub type Error = Box<dyn IError + Send + Sync + 'static>;
```

Aliased Type
---

```
struct Error(/* private fields */);
```

Trait Implementations
---

* Deref for Box<T, A> where A: Allocator, T: ?Sized: type Target = T; fn deref(&self) -> &T dereferences the value.
* Error for Box<T, Global> where T: Error: forwards source(), the deprecated description() and cause(), and the nightly-only provide() to the boxed error.

Type Alias zenoh_result::ZResult
===

```
pub type ZResult<T> = Result<T, Error>;
```

Aliased Type
---

```
enum ZResult<T> {
    Ok(T),
    Err(Box<dyn Error + Send + Sync, Global>),
}
```

Variants
---

* Ok(T): contains the success value
* Err(Box<dyn Error + Send + Sync, Global>): contains the error value
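As a rough usage illustration (not taken from the crate's documentation, and assuming IError is a re-export of the standard library's Error trait, as the method listing above suggests), a function returning ZResult can propagate any std error with `?`:

```
use zenoh_result::ZResult;

// `?` converts the ParseIntError into the boxed error type behind
// ZResult, via the std From<E> impl for Box<dyn Error + Send + Sync>.
fn parse_port(s: &str) -> ZResult<u16> {
    let port: u16 = s.parse()?;
    Ok(port)
}

fn main() {
    assert!(parse_port("7447").is_ok());
    assert!(parse_port("not-a-port").is_err());
}
```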
Graphite-API Documentation
Release 1.1.3

<NAME>

Oct 25, 2017

Contents

2.1 Installation
2.2 Configuration
2.3 Deployment
2.4 HTTP API
2.5 Built-in functions
2.6 Storage finders
2.7 Custom functions
2.8 Graphite-API releases

Graphite-API is an alternative to Graphite-web, without any built-in dashboard. Its role is solely to fetch metrics from a time-series database (whisper, cyanite, etc.) and render graphs or JSON data out of these time series. It is meant to be consumed by any of the numerous Graphite dashboard applications.

Graphite-API is a fork of Graphite-web and couldn't have existed without the fantastic prior work done by the Graphite team.

CHAPTER 1

Why should I use it?

Graphite-API offers a number of improvements over Graphite-web that you might find useful. Namely:

• The Graphite-API application is completely stateless and doesn't need a SQL database. It only needs to talk to a time series database.
• Python 2 and 3 are both supported.
• The HTTP API accepts JSON data in addition to form data and querystring parameters.
• The application is extremely simple to install and configure.
• The architecture has been drastically simplified and there are many fewer moving parts than in graphite-web:
  – No memcache integration – rendering is live.
  – No support for the Pickle format when rendering.
  – Plugin architecture for integrating with time series databases or adding more analysis functions.
• The codebase has been thoroughly updated with a focus on test coverage and code quality.

Note: Graphite-API does not provide any web/graphical interface. If you currently rely on the built-in Graphite composer, Graphite-API might not be for you. However, if you're using a third-party dashboard interface, Graphite-API will do just fine.

CHAPTER 2

Contents

Installation

Debian / Ubuntu: native package

If you run Debian 8 or Ubuntu 14.04 LTS, you can use one of the available packages which provides a self-contained build of graphite-api. Builds are available on the releases page.

Once installed, Graphite-API should be running as a service and available on port 8888. The package contains all the optional dependencies.

Python package

Prerequisites

Installing Graphite-API requires:

• Python 2 (2.6 and above) or 3 (3.3 and above), with development files. On debian/ubuntu, you'll want to install python-dev.
• gcc. On debian/ubuntu, install build-essential.
• Cairo, including development files. On debian/ubuntu, install the libcairo2-dev package.
• libffi with development files, libffi-dev on debian/ubuntu.
• Pip, the Python package manager. On debian/ubuntu, install python-pip.

Global installation

To install Graphite-API globally on your system, run as root:

$ pip install graphite-api

Isolated installation (virtualenv)

If you want to isolate Graphite-API from the system-wide python environment, you can install it in a virtualenv:

$ virtualenv /usr/share/python/graphite
$ /usr/share/python/graphite/bin/pip install graphite-api

Extra dependencies

When you install graphite-api, all the dependencies for running a Graphite server that uses Whisper as a storage backend are installed. You can specify extra dependencies:

• For Sentry integration: pip install graphite-api[sentry].
• For Cyanite integration: pip install graphite-api[cyanite].
• For Cache support: pip install graphite-api[cache]. You'll also need the driver for the type of caching you want to use (Redis, Memcache, etc.). See the Flask-Cache docs for supported cache types.

You can also combine several extra dependencies:

$ pip install graphite-api[sentry,cyanite]

Configuration

/etc/graphite-api.yaml

The configuration file for Graphite-API lives at /etc/graphite-api.yaml and uses the YAML format. Creating the configuration file is optional: if Graphite-API doesn't find the file, sane defaults are used. They are described below.

Default values

search_index: /srv/graphite/index
finders:
  - graphite_api.finders.whisper.WhisperFinder
functions:
  - graphite_api.functions.SeriesFunctions
  - graphite_api.functions.PieFunctions
whisper:
  directories:
    - /srv/graphite/whisper
time_zone: <system timezone> or UTC

Config sections

Default sections

search_index The location of the search index used for searching metrics. Note that it needs to be a file that is writable by the Graphite-API process.

finders A list of python paths to the storage finders you want to use when fetching metrics.

functions A list of python paths to function definitions for transforming / analyzing time series data.

whisper The configuration information for whisper. Only relevant when using WhisperFinder. Simply holds a directories key listing all directories containing whisper data.

time_zone The time zone to use when generating graphs. By default, Graphite-API tries to detect your system timezone. If detection fails it falls back to UTC. You can also manually override it if you want another value than your system's timezone.

Extra sections

carbon Configuration information for reading data from carbon's cache. Items:

hosts List of carbon-cache hosts, in the format hostname:port[:instance].
timeout Socket timeout for carbon connections, in seconds.
retry_delay Time to wait before trying to re-establish a failed carbon connection, in seconds.
hashing_keyfunc Python path to a hashing function for metrics. If you use Carbon with consistent hashing and a custom function, you need to point to the same hashing function.
hashing_type Type of metric hashing function. The default carbon_ch is Graphite's traditional consistent-hashing implementation. Alternatively, you can use fnv1a_ch, which supports the Fowler-Noll-Vo (FNV-1a) hash implementation offered by the carbon-c-relay project. Default: carbon_ch.
carbon_prefix Prefix for carbon's internal metrics. When querying metrics starting with this prefix, requests are made to all carbon-cache instances instead of one instance selected by the key function. Default: carbon.
replication_factor The replication factor of your carbon setup. Default: 1.

Example:

carbon:
  hosts:
    - 127.0.0.1:7002
  timeout: 1
  retry_delay: 15
  carbon_prefix: carbon
  replication_factor: 1

sentry_dsn This is useful if you want to send Graphite-API's exceptions to a Sentry instance for easier debugging. Example:

sentry_dsn: https://key:secret@app.getsentry.com/12345

Note: Sentry integration requires Graphite-API to be installed with the corresponding extra dependency:

$ pip install graphite-api[sentry]

allowed_origins Allows you to do cross-domain (CORS) requests to the Graphite API. Say you have a dashboard at dashboard.example.com that makes AJAX requests to graphite.example.com, just set the value accordingly:

allowed_origins:
  - dashboard.example.com

You can specify as many origins as you want.
A wildcard can be used to allow all origins:

allowed_origins:
  - *

cache Lets you configure a cache for graph rendering. This is done via Flask-Cache, which supports a number of backends including memcache, Redis, filesystem or in-memory caching.

Cache configuration maps directly to Flask-Cache's config values. For each CACHE_* config value, set the lowercased name in the cache section, without the prefix. Example:

cache:
  type: redis
  redis_host: localhost

This would configure Flask-Cache with CACHE_TYPE = 'redis' and CACHE_REDIS_HOST = 'localhost'.

Some cache options have default values defined by Graphite-API:

• default_timeout: 60
• key_prefix: 'graphite-api:'

Note: Caching functionality requires you to install the cache extra dependency but also the underlying driver. E.g. for redis, you'll need:

$ pip install graphite-api[cache] redis

statsd Attaches a statsd object to the application, which can be used for instrumentation. Currently Graphite-API itself doesn't use this, but some backends do, like Graphite-Influxdb. Example:

statsd:
  host: 'statsd_host'
  port: 8125  # not needed if default

Note: This requires the statsd module:

$ pip install statsd

render_errors If True (default), full tracebacks are returned in the HTTP response in case of application errors.

Custom location

If you need the Graphite-API config file to be stored in another place than /etc/graphite-api.yaml, you can set a custom location using the GRAPHITE_API_CONFIG environment variable:

export GRAPHITE_API_CONFIG=/var/lib/graphite/config.yaml

Deployment

There are several options available, depending on your setup.

Gunicorn + nginx

First, you need to install Gunicorn. The easiest way is to use pip:

$ pip install gunicorn

If you have installed Graphite-API in a virtualenv, install Gunicorn in the same virtualenv:

$ /usr/share/python/graphite/bin/pip install gunicorn

Next, create the script that will run Graphite-API using your process watcher of choice.

Upstart

description "Graphite-API server"
start on runlevel [2345]
stop on runlevel [!2345]

respawn

exec gunicorn -w2 graphite_api.app:app -b 127.0.0.1:8888

Supervisor

[program:graphite-api]
command = gunicorn -w2 graphite_api.app:app -b 127.0.0.1:8888
autostart = true
autorestart = true

systemd

# This is /etc/systemd/system/graphite-api.socket
[Unit]
Description=graphite-api socket

[Socket]
ListenStream=/run/graphite-api.sock
ListenStream=127.0.0.1:8888

[Install]
WantedBy=sockets.target

# This is /etc/systemd/system/graphite-api.service
[Unit]
Description=Graphite-API service
Requires=graphite-api.socket

[Service]
ExecStart=/usr/bin/gunicorn -w2 graphite_api.app:app
Restart=on-failure
#User=graphite
#Group=graphite
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/bin/kill -s TERM $MAINPID
PrivateTmp=true

[Install]
WantedBy=multi-user.target

Note: If you have installed Graphite-API and Gunicorn in a virtualenv, you need to use the full path to Gunicorn. Instead of gunicorn, use /usr/share/python/graphite/bin/gunicorn (assuming your virtualenv is at /usr/share/python/graphite).

See the Gunicorn docs for configuration options and command-line flags.
Finally, configure the nginx vhost:

# /etc/nginx/sites-available/graphite.conf
upstream graphite {
    server 127.0.0.1:8888 fail_timeout=0;
}

server {
    server_name graph;
    listen 80 default;
    root /srv/www/graphite;

    location / {
        try_files $uri @graphite;
    }

    location @graphite {
        proxy_pass http://graphite;
    }
}

Enable the vhost and restart nginx:

$ ln -s /etc/nginx/sites-available/graphite.conf /etc/nginx/sites-enabled
$ service nginx restart

Apache + mod_wsgi

First, you need to install mod_wsgi. See the mod_wsgi InstallationInstructions for installation instructions.

Then create the graphite-api.wsgi:

# /var/www/wsgi-scripts/graphite-api.wsgi
from graphite_api.app import app as application

Finally, configure the apache vhost:

# /etc/httpd/conf.d/graphite.conf
LoadModule wsgi_module modules/mod_wsgi.so
WSGISocketPrefix /var/run/wsgi
Listen 8013
<VirtualHost *:8013>
    WSGIDaemonProcess graphite-api processes=5 threads=5 display-name='%{GROUP}' inactivity-timeout=120
    WSGIProcessGroup graphite-api
    WSGIApplicationGroup %{GLOBAL}
    WSGIImportScript /var/www/wsgi-scripts/graphite-api.wsgi process-group=graphite-api application-group=%{GLOBAL}
    WSGIScriptAlias / /var/www/wsgi-scripts/graphite-api.wsgi
    <Directory /var/www/wsgi-scripts/>
        Order deny,allow
        Allow from all
    </Directory>
</VirtualHost>

Adapt the mod_wsgi configuration to your requirements. See the mod_wsgi QuickConfigurationGuide for an overview of configurations and mod_wsgi ConfigurationDirectives to see all configuration directives.

Restart apache:

$ service httpd restart

Docker

Create a graphite-api.yaml configuration file with your desired config. Create a Dockerfile:

FROM brutasse/graphite-api

Build your container:

docker build -t graphite-api .

Run it:

docker run -t -i -p 8888:8888 graphite-api

/srv/graphite is a docker VOLUME. You can use that to provide whisper data from the host (or from another docker container) to the graphite-api container:

docker run -t -i -v /path/to/graphite:/srv/graphite -p 8888:8888 graphite-api

This container has all the extra packages included. Cyanite backend and Sentry integration are available.

Nginx + uWSGI

First, you need to install uWSGI with Python support. On Debian, install uwsgi-plugin-python.

Then create the uWSGI file for Graphite-API in /etc/uwsgi/apps-available/graphite-api.ini:

[uwsgi]
processes = 2
socket = localhost:8080
plugins = python27
module = graphite_api.app:app

If you installed Graphite-API in a virtualenv, specify the virtualenv path:

home = /var/www/wsgi-scripts/env

If you need a custom location for Graphite-API's config file, set the environment variable like this:

env = GRAPHITE_API_CONFIG=/var/www/wsgi-scripts/config.yml

Enable graphite-api.ini and restart uWSGI:

$ ln -s /etc/uwsgi/apps-available/graphite-api.ini /etc/uwsgi/apps-enabled
$ service uwsgi restart

Finally, configure the nginx vhost:

# /etc/nginx/sites-available/graphite.conf
server {
    listen 80;

    location / {
        include uwsgi_params;
        uwsgi_pass localhost:8080;
    }
}

Enable the vhost and restart nginx:

$ ln -s /etc/nginx/sites-available/graphite.conf /etc/nginx/sites-enabled
$ service nginx restart

Other deployment methods

They currently aren't described here, but there are several other ways to serve Graphite-API:

• nginx + circus + chaussette

If you feel like contributing some documentation, feel free to open a pull request on the Graphite-API repository.
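Whichever deployment method you pick, a quick way to check that the application is serving requests (assuming a local deployment listening on port 8888, as in the examples above) is to request the test target described in the HTTP API section below:

$ curl -i 'http://127.0.0.1:8888/render?target=test'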
HTTP API

Here is the general behavior of the API:

• When parameters are missing or wrong, an HTTP 400 response is returned with the detailed errors in the response body.
• Request parameters can be passed via:
  - JSON data in the request body (application/json content-type).
  - Form data in the request body (application/x-www-form-urlencoded content-type).
  - Querystring parameters.
  You can pass some parameters by querystring and others by json/form data if you want to. Parameters are looked up in the order above, meaning that if a parameter is present in both the form data and the querystring, only the one from the querystring is taken into account.
• URLs are given without a trailing slash, but adding a trailing slash is fine for all API calls.
• Parameters are case-sensitive.

The Metrics API

These API endpoints are useful for finding and listing metrics available in the system.

/metrics/find

Finds metrics under a given path. Other alias: /metrics.

Example:

GET /metrics/find?query=collectd.*

{"metrics": [{
  "is_leaf": 0,
  "name": "db01",
  "path": "collectd.db01."
}, {
  "is_leaf": 1,
  "name": "foo",
  "path": "collectd.foo"
}]}

Parameters:

query (mandatory)
    The query to search for.
format
    The output format to use. Can be completer or treejson (default).
wildcards (0 or 1)
    Whether to add a wildcard result at the end or not. Default: 0.
from
    Epoch timestamp from which to consider metrics.
until
    Epoch timestamp until which to consider metrics.
jsonp (optional)
    Wraps the response in a JSONP callback.

/metrics/expand

Expands the given query with matching paths.

Parameters:

query (mandatory)
    The metrics query. Can be specified multiple times.
groupByExpr (0 or 1)
    Whether to return a flat list of results or group them by query. Default: 0.
leavesOnly (0 or 1)
    Whether to only return leaves or both branches and leaves. Default: 0.
jsonp (optional)
    Wraps the response in a JSONP callback.

/metrics/index.json

Walks the metrics tree and returns every metric found as a sorted JSON array.

Parameters:

jsonp (optional)
    Wraps the response in a JSONP callback.

Example:

GET /metrics/index.json

[
  "collectd.host1.load.longterm",
  "collectd.host1.load.midterm",
  "collectd.host1.load.shortterm"
]

The Render API – /render

Graphite-API provides a /render endpoint for generating graphs and retrieving raw data. This endpoint accepts various arguments via query string parameters, form data or JSON data.

To verify that the api is running and able to generate images, open http://<api-host>:<port>/render?target=test in a browser. The api should return a simple 600x300 image with the text "No Data".

Once the api is running and you've begun feeding data into the storage backend, use the parameters below to customize your graphs and pull out raw data. For example:

# single server load on large graph
http://graphite/render?target=server.web1.load&height=800&width=600

# average load across web machines over last 12 hours
http://graphite/render?target=averageSeries(server.web*.load)&from=-12hours

# number of registered users over past day as raw json data
http://graphite/render?target=app.numUsers&format=json

# rate of new signups per minute
http://graphite/render?target=summarize(derivative(app.numUsers),"1min")&title=New_Users_Per_Minute

Note: Most of the functions and parameters are case-sensitive. For example, &linewidth=2 will fail silently; the correct parameter in this case is &lineWidth=2.
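Before moving on to graphing, here is a minimal client sketch for the endpoints above, using only the Python standard library. The host, port and query are assumptions for illustration:

import json
from urllib.parse import urlencode
from urllib.request import urlopen

BASE = "http://localhost:8888"  # assumed Graphite-API address

# Query /metrics/find for everything under collectd.* (see the example above).
params = urlencode({"query": "collectd.*"})
with urlopen(BASE + "/metrics/find?" + params) as resp:
    found = json.load(resp)

for metric in found["metrics"]:
    # is_leaf distinguishes concrete metrics (1) from branches (0).
    print(metric["path"], "leaf" if metric["is_leaf"] else "branch")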
Graphing Metrics

To begin graphing specific metrics, pass one or more target parameters and specify a time window for the graph via from / until.

target

The target parameter specifies a path identifying one or several metrics, optionally with functions acting on those metrics. Paths are documented below, while functions are listed on the functions page.

Paths and Wildcards

Metric paths show the "." separated path from the root of the metrics tree (often starting with servers) to a metric, for example servers.ix02ehssvc04v.cpu.total.user.

Paths also support the following wildcards, which allow you to identify more than one metric in a single path.

Asterisk

The asterisk (*) matches zero or more characters. It is non-greedy, so you can have more than one within a single path element.

Example: servers.ix*ehssvc*v.cpu.total.* will return all total CPU metrics for all servers matching the given name pattern.

Character list or range

Characters in square brackets ([...]) specify a single character position in the path string, and match if the character in that position matches one of the characters in the list or range.

A character range is indicated by 2 characters separated by a dash (-), and means that any character between those 2 characters (inclusive) will match. More than one range can be included within the square brackets, e.g. foo[a-z0-9]bar will match foopbar, foo7bar etc.

If the characters cannot be read as a range, they are treated as a list: any character in the list will match, e.g. foo[bc]ar will match foobar and foocar. If you want to include a dash (-) in your list, put it at the beginning or end, so it's not interpreted as a range.

Value list

Comma-separated values within curly braces ({foo,bar,...}) are treated as value lists, and match if any of the values matches the current point in the path. For example, servers.ix01ehssvc04v.cpu.total.{user,system,iowait} will match the user, system and I/O wait total CPU metrics for the specified server.

Note: All wildcards apply only within a single path element. In other words, they do not include or cross dots (.). Therefore, servers.* will not match servers.ix02ehssvc04v.cpu.total.user, while servers.*.*.*.* will.

Examples

This will draw one or more metrics.

Example:

&target=company.server05.applicationInstance04.requestsHandled

(draws one metric)

Let's say there are 4 identical application instances running on each server:

&target=company.server05.applicationInstance*.requestsHandled

(draws 4 metrics / lines)

Now let's say you have 10 servers:

&target=company.server*.applicationInstance*.requestsHandled

(draws 40 metrics / lines)

You can also run any number of functions on the various metrics before graphing:

&target=averageSeries(company.server*.applicationInstance.requestsHandled)

(draws 1 aggregate line)

The target param can also be repeated to graph multiple related metrics:

&target=company.server1.loadAvg&target=company.server1.memUsage

Note: If more than 10 metrics are drawn the legend is no longer displayed. See the hideLegend parameter for details.

from / until

These are optional parameters that specify the relative or absolute time period to graph. from specifies the beginning, until specifies the end. If from is omitted, it defaults to 24 hours ago. If until is omitted, it defaults to the current time (now).
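As a sketch of how target, from and format combine in practice, the following fetches half a day of values as JSON using only the Python standard library (the metric path and address are placeholders):

import json
from urllib.parse import urlencode
from urllib.request import urlopen

params = urlencode({
    "target": "server.web1.load",  # placeholder metric path
    "from": "-12hours",            # relative start; until defaults to now
    "format": "json",              # raw data instead of a PNG
})
with urlopen("http://localhost:8888/render?" + params) as resp:
    for series in json.load(resp):
        print(series["target"], len(series["datapoints"]), "datapoints")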
There are multiple possible formats for these functions:

&from=-RELATIVE_TIME
&from=ABSOLUTE_TIME

RELATIVE_TIME is a length of time since the current time. It is always preceded by a minus sign (-) and followed by a unit of time. Valid units of time:

Abbreviation  Unit
s             Seconds
min           Minutes
h             Hours
d             Days
w             Weeks
mon           30 Days (month)
y             365 Days (year)

ABSOLUTE_TIME is in the format HH:MM_YYMMDD, YYYYMMDD, MM/DD/YY, or any other at(1)-compatible time format.

Abbreviation  Meaning
HH            Hours, in 24h clock format. Times before 12PM must include leading zeroes.
MM            Minutes
YYYY          4 Digit Year
MM            Numeric month representation with leading zero
DD            Day of month with leading zero

&from and &until can mix absolute and relative time if desired.

Examples:

&from=-8d&until=-7d (shows same day last week)
&from=04:00_20110501&until=16:00_20110501 (shows 4AM-4PM on May 1st, 2011)
&from=20091201&until=20091231 (shows December 2009)
&from=noon+yesterday (shows data since 12:00pm on the previous day)
&from=6pm+today (shows data since 6:00pm on the same day)
&from=january+1 (shows data since the beginning of the current year)
&from=monday (shows data since the previous Monday)

template

The target metrics can use a special template function which allows the metric paths to contain variables. Values for these variables can be provided via the template query parameter. Example:

&target=template(hosts.$hostname.cpu)&template[hostname]=worker1

Default values for the template variables can also be provided:

&target=template(hosts.$hostname.cpu, hostname="worker1")

Positional arguments can be used instead of named ones:

&target=template(hosts.$1.cpu, "worker1")
&target=template(hosts.$1.cpu, "worker1")&template[1]=worker*

In addition to path substitution, variables can be used for numeric and string literals:

&target=template(constantLine($number))&template[number]=123
&target=template(sinFunction($name))&template[name]=nameOfMySineWaveMetric

Data Display Formats

Along with rendering an image, the api can also generate SVG with embedded metadata, PDF, or return the raw data in various formats for external graphing, analysis or monitoring.

format

Controls the format of data returned. Affects all &targets passed in the URL.

Examples:

&format=png
&format=raw
&format=csv
&format=json
&format=svg
&format=pdf
&format=dygraph
&format=rickshaw

png

Renders the graph as a PNG image of size determined by width and height.

raw

Renders the data in a custom line-delimited format. Targets are output one per line and are of the format <target name>,<start timestamp>,<end timestamp>,<series step>|[data]*.

Example:

entries,1311836008,1311836013,1|1.0,2.0,3.0,5.0,6.0

csv

Renders the data in a CSV format suitable for import into a spreadsheet or for processing in a script.

Example:

entries,2011-07-28 01:53:28,1.0
entries,2011-07-28 01:53:29,2.0
entries,2011-07-28 01:53:30,3.0
entries,2011-07-28 01:53:31,5.0
entries,2011-07-28 01:53:32,6.0

json

Renders the data as a json object. The jsonp option can be used to wrap this data in a named call for cross-domain access.

[{
  "target": "entries",
  "datapoints": [
    [1.0, 1311836008],
    [2.0, 1311836009],
    [3.0, 1311836010],
    [5.0, 1311836011],
    [6.0, 1311836012]
  ]
}]

svg

Renders the graph as SVG markup of size determined by width and height. Metadata about the drawn graph is saved as an embedded script with the variable metadata being set to an object describing the graph.
<script>
<![CDATA[
metadata = {
  "area": {
    "xmin": 39.195507812499997,
    "ymin": 33.96875,
    "ymax": 623.794921875,
    "xmax": 1122
  },
  "series": [
    {
      "start": 1335398400,
      "step": 1800,
      "end": 1335425400,
      "name": "summarize(test.data, \"30min\", \"sum\")",
      "color": "#859900",
      "data": [null, null, 1.0, null, 1.0, null, 1.0, null, 1.0, null, 1.0, null, null, null, null],
      "options": {},
      "valuesPerPoint": 1
    }
  ],
  "y": {
    "labelValues": [0, 0.25, 0.5, 0.75, 1.0],
    "top": 1.0,
    "labels": ["0 ", "0.25 ", "0.50 ", "0.75 ", "1.00 "],
    "step": 0.25,
    "bottom": 0
  },
  "x": {
    "start": 1335398400,
    "end": 1335423600
  },
  "font": {
    "bold": false,
    "name": "Sans",
    "italic": false,
    "size": 10
  },
  "options": {
    "lineWidth": 1.2
  }
}
]]>
</script>

pdf

Renders the graph as a PDF of size determined by width and height.

dygraph

Renders the data as a json object suitable for passing to a Dygraph object.

{
  "labels": [
    "Time",
    "entries"
  ],
  "data": [
    [1468791890000, 0.0],
    [1468791900000, 0.0]
  ]
}

rickshaw

Renders the data as a json object suitable for passing to a Rickshaw object.

[{
  "target": "entries",
  "datapoints": [{
    "y": 0.0,
    "x": 1468791890
  }, {
    "y": 0.0,
    "x": 1468791900
  }]
}]

rawData

Deprecated since version 0.9.9: This option is deprecated in favor of format.

Used to get numerical data out of the webapp instead of an image. Can be set to true, false, csv. Affects all &targets passed in the URL.

Example:

&target=carbon.agents.graphiteServer01.cpuUsage&from=-5min&rawData=true

Returns the following text:

carbon.agents.graphiteServer01.cpuUsage,1306217160,1306217460,60|0.0,0.00666666520965,0.00666666624282,0.0,0.0133345399694

Graph Parameters

areaAlpha

Default: 1.0

Takes a floating point number between 0.0 and 1.0. Sets the alpha (transparency) value of filled areas when using an areaMode.

areaMode

Default: none

Enables filling of the area below the graphed lines. Fill area is the same color as the line color associated with it. See areaAlpha to make this area transparent. Takes one of the following parameters, which determines the fill mode to use:

none
    Disables areaMode.
first
    Fills the area under the first target and no other.
all
    Fills the areas under each target.
stacked
    Creates a graph where the filled area of each target is stacked on one another. Each target line is displayed as the sum of all previous lines plus the value of the current line.

bgcolor

Default: white

Sets the background color of the graph.

Color Name   RGB Value
black        0,0,0
white        255,255,255
blue         100,100,255
green        0,200,0
red          200,0,50
yellow       255,255,0
orange       255,165,0
purple       200,100,255
brown        150,100,50
aqua         0,150,150
gray         175,175,175
grey         175,175,175
magenta      255,0,255
pink         255,100,100
gold         200,200,0
rose         200,150,200
darkblue     0,0,255
darkgreen    0,255,0
darkred      255,0,0
darkgray     111,111,111
darkgrey     111,111,111

RGB can be passed directly in the format #RRGGBB where RR, GG, and BB are 2-digit hex values for red, green and blue, respectively.

Examples:

&bgcolor=blue
&bgcolor=#2222FF

cacheTimeout

Default: the value of cache.default_timeout in your configuration file. By default, 60 seconds.

colorList

Default: blue,green,red,purple,brown,yellow,aqua,grey,magenta,pink,gold,rose

Takes one or more comma-separated color names or RGB values (see bgcolor for a list of color names) and uses that list in order as the colors of the lines. If more lines / metrics are drawn than colors passed, the list is reused in order.
Example:

&colorList=green,yellow,orange,red,purple,#DECAFF

drawNullAsZero

Default: false

Converts any None (null) values in the displayed metrics to zero at render time.

fgcolor

Default: black

Sets the foreground color. This only affects the title, legend text, and axis labels. See majorGridLineColor and minorGridLineColor for further control of colors. See bgcolor for a list of color names and details on formatting this parameter.

fontBold

Default: false

If set to true, makes the font bold.

Example:

&fontBold=true

fontItalic

Default: false

If set to true, makes the font italic / oblique.

Example:

&fontItalic=true

fontName

Default: 'Sans'

Changes the font used to render text on the graph. The font must be installed on the Graphite-API server.

Example:

&fontName=FreeMono

fontSize

Default: 10

Changes the font size. Must be passed a positive floating point number or integer equal to or greater than 1.

Example:

&fontSize=8

format

See: Data Display Formats

from

See: from / until

graphOnly

Default: false

Display only the graph area with no grid lines, axes, or legend.

graphType

Default: line

Sets the type of graph to be rendered. Currently there are only two graph types:

line
    A line graph displaying metrics as lines over time.
pie
    A pie graph with each slice displaying an aggregate of each metric calculated using the function specified by pieMode.

hideLegend

Default: <unset>

If set to true, the legend is not drawn. If set to false, the legend is drawn. If unset, the legend is displayed if there are fewer than 10 items.

Hint: If set to false, the &height parameter may need to be increased to accommodate the additional text.

Example:

&hideLegend=false

hideNullFromLegend

Default: false

If set to true, series with all null values will not be reported in the legend.

Example:

&hideNullFromLegend=true

hideAxes

Default: false

If set to true, the X and Y axes will not be rendered.

Example:

&hideAxes=true

hideXAxis

Default: false

If set to true, the X axis will not be rendered.

hideYAxis

Default: false

If set to true, the Y axis will not be rendered.

hideGrid

Default: false

If set to true, the grid lines will not be rendered.

Example:

&hideGrid=true

height

Default: 300

Sets the height of the generated graph image in pixels. See also: width.

Example:

&width=650&height=250

jsonp

Default: <unset>

If set and combined with format=json, wraps the JSON response in a function call named by the parameter specified.

leftColor

Default: color chosen from colorList.

In dual Y-axis mode, sets the color of all metrics associated with the left Y-axis.

leftDashed

Default: false

In dual Y-axis mode, draws all metrics associated with the left Y-axis using dashed lines.

leftWidth

Default: value of the parameter lineWidth

In dual Y-axis mode, sets the line width of all metrics associated with the left Y-axis.

lineMode

Default: slope

Sets the line drawing behavior. Takes one of the following parameters:

slope
    Slope line mode draws a line from each point to the next. Periods with Null values will not be drawn.
staircase
    Staircase draws a flat line for the duration of a time period and then a vertical line up or down to the next value.
connected
    Like a slope line, but values are always connected with a slope line, regardless of whether or not there are Null values between them.
Example:

&lineMode=staircase

lineWidth

Default: 1.2

Takes any floating point or integer (negative numbers do not error but will cause no line to be drawn). Changes the width of the line in pixels.

Example:

&lineWidth=2

logBase

Default: <unset>

If set, draws the graph with a logarithmic scale of the specified base (e.g. 10 for common logarithm).

majorGridLineColor

Default: rose

Sets the color of the major grid lines. See bgcolor for valid color names and formats.

Example:

&majorGridLineColor=#FF22FF

margin

Default: 10

Sets the margin around a graph image in pixels on all sides.

Example:

&margin=20

max

Deprecated since version 0.9.0: See yMax.

maxDataPoints

Sets the maximum number of datapoints returned when using json content. If the number of datapoints in a selected range exceeds the maxDataPoints value, then the datapoints over the whole period are consolidated.

minorGridLineColor

Default: grey

Sets the color of the minor grid lines. See bgcolor for valid color names and formats.

Example:

&minorGridLineColor=darkgrey

minorY

Default: 1

Sets the number of minor grid lines per major line on the y-axis.

Example:

&minorY=3

min

Deprecated since version 0.9.0: See yMin.

minXStep

Default: 1

Sets the minimum pixel-step to use between datapoints drawn. Any value below this will trigger a point consolidation of the series at render time. The default value of 1 combined with the default lineWidth of 1.2 will cause a minimal amount of line overlap between close-together points.

To disable render-time point consolidation entirely, set this to 0, though note that series with more points than there are pixels in the graph area (e.g. a few months' worth of per-minute data) will look very 'smooshed' as there will be a good deal of line overlap. In response, one may use lineWidth to compensate for this.

noCache

Default: false

Set it to disable caching in rendered graphs.

noNullPoints

Default: false

If set and combined with format=json, removes all null datapoints from the series returned.

pieLabels

Default: horizontal

Orientation to use for slice labels inside of a pie chart.

horizontal
    Labels are oriented horizontally within each slice.
rotated
    Labels are oriented radially within each slice.

pieMode

Default: average

The type of aggregation to use to calculate slices of a pie when graphType=pie. One of:

average
    The average of non-null points in the series.
maximum
    The maximum of non-null points in the series.
minimum
    The minimum of non-null points in the series.

rightColor

Default: color chosen from colorList

In dual Y-axis mode, sets the color of all metrics associated with the right Y-axis.

rightDashed

Default: false

In dual Y-axis mode, draws all metrics associated with the right Y-axis using dashed lines.

rightWidth

Default: value of the parameter lineWidth

In dual Y-axis mode, sets the line width of all metrics associated with the right Y-axis.

template

Default: default

Used to specify a template from graphTemplates.conf to use for default colors and graph styles.

Example:

&template=plain

thickness

Deprecated since version 0.9.0: See: lineWidth

title

Default: <unset>

Puts a title at the top of the graph, center aligned. If unset, no title is displayed.

Example:

&title=Apache Busy Threads, All Servers, Past 24h

tz

Default: The timezone specified in the graphite-api configuration

Time zone to convert all times into.
Examples:

&tz=America/Los_Angeles
&tz=UTC

uniqueLegend

Default: false

Display only unique legend items, removing any duplicates.

until

See: from / until

valueLabels

Default: percent

Determines how slice labels are rendered within a pie chart.

none
    Slice labels are not shown.
numbers
    Slice labels are reported with the original values.
percent
    Slice labels are reported as a percent of the whole.

valueLabelsColor

Default: black

Color used to draw slice labels within a pie chart.

valueLabelsMin

Default: 5

Slice values below this minimum will not have their labels rendered.

vtitle

Default: <unset>

Labels the y-axis with vertical text. If unset, no y-axis label is displayed.

Example:

&vtitle=Threads

vtitleRight

Default: <unset>

In dual Y-axis mode, sets the title of the right Y-axis (see: vtitle).

width

Default: 330

Sets the width of the generated graph image in pixels. See also: height.

Example:

&width=650&height=250

xFormat

Default: Determined automatically based on the time-width of the X axis

Sets the time format used when displaying the X-axis. See datetime.date.strftime() for format specification details.

yAxisSide

Default: left

Sets the side of the graph on which to render the Y-axis. Accepts values of left or right.

yDivisors

Default: 4,5,6

Sets the preferred number of intermediate values to display on the Y-axis (Y values between the minimum and maximum). Note that Graphite will ultimately choose what values (and how many) to display based on a 'pretty' factor, which tries to maintain a sensible scale (e.g. preferring intermediary values like 25%,50%,75% over 33.3%,66.6%). To explicitly set the Y-axis values, see yStep.

yLimit

Reserved for future use. See: yMax

yLimitLeft

Reserved for future use. See: yMaxLeft

yLimitRight

Reserved for future use. See: yMaxRight

yMin

Default: The lowest value of any of the series displayed

Manually sets the lower bound of the graph. Can be passed any integer or floating point number.

Example:

&yMin=0

yMax

Default: The highest value of any of the series displayed

Manually sets the upper bound of the graph. Can be passed any integer or floating point number.

Example:

&yMax=0.2345

yMaxLeft

In dual Y-axis mode, sets the upper bound of the left Y-axis (see: yMax).

yMaxRight

In dual Y-axis mode, sets the upper bound of the right Y-axis (see: yMax).

yMinLeft

In dual Y-axis mode, sets the lower bound of the left Y-axis (see: yMin).

yMinRight

In dual Y-axis mode, sets the lower bound of the right Y-axis (see: yMin).

yStep

Default: Calculated automatically

Manually sets the value step between Y-axis labels and grid lines.

yStepLeft

In dual Y-axis mode, manually sets the value step between the left Y-axis labels and grid lines (see: yStep).

yStepRight

In dual Y-axis mode, manually sets the value step between the right Y-axis labels and grid lines (see: yStep).

yUnitSystem

Default: si

Sets the unit system for compacting Y-axis values (e.g. 23,000,000 becomes 23M). Value can be one of:

si
    Use si units (powers of 1000) - K, M, G, T, P.
binary
    Use binary units (powers of 1024) - Ki, Mi, Gi, Ti, Pi.
sec
    Use time units (seconds) - m, H, D, M, Y.
msec
    Use time units (milliseconds) - s, m, H, D, M, Y.
none
    Don't compact values, display the raw number.

Built-in functions

Functions are used to transform, combine, and perform computations on series data. They are applied by manipulating the target parameters in the Render API.
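Functions compose by nesting inside the target expression. The sketch below simply builds such a nested target string in Python and URL-encodes it; the metric name is a placeholder, and alias() and movingAverage() are documented further down this page:

from urllib.parse import urlencode

# A 10-point moving average, relabelled for the legend, nested exactly
# as it would appear in a &target= parameter.
target = 'alias(movingAverage(server.web1.load, 10), "load (10pt avg)")'
print("/render?" + urlencode({"target": target, "format": "json"}))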
Usage

Most functions are applied to one series list. Functions with the parameter *seriesLists can take an arbitrary number of series lists. To pass multiple series lists to a function which only takes one, use the group() function.

List of functions

absolute(seriesList)
    Takes one metric or a wildcard seriesList and applies the mathematical abs function to each datapoint, transforming it to its absolute value.

    Example:

    &target=absolute(Server.instance01.threads.busy)
    &target=absolute(Server.instance*.threads.busy)

aggregateLine(seriesList, func='avg')
    Takes a metric or wildcard seriesList and draws a horizontal line based on the function applied to each series.

    Note: By default, the graphite renderer consolidates data points by averaging data points over time. If you are using the 'min' or 'max' function for aggregateLine, this can cause an unusual gap in the line drawn by this function and the data itself. To fix this, you should use the consolidateBy() function with the same function argument you are using for aggregateLine. This will ensure that the proper data points are retained and the graph should line up correctly.

    Example:

    &target=aggregateLine(server01.connections.total, 'avg')
    &target=aggregateLine(server*.connections.total, 'avg')

alias(seriesList, newName)
    Takes one metric or a wildcard seriesList and a string in quotes. Prints the string instead of the metric name in the legend.

    Example:

    &target=alias(Sales.widgets.largeBlue,"Large Blue Widgets")

aliasByMetric(seriesList)
    Takes a seriesList and applies an alias derived from the base metric name.

    Example:

    &target=aliasByMetric(carbon.agents.graphite.creates)

aliasByNode(seriesList, *nodes)
    Takes a seriesList and applies an alias derived from one or more "node" portion/s of the target name. Node indices are 0 indexed.

    Example:

    &target=aliasByNode(ganglia.*.cpu.load5,1)

aliasSub(seriesList, search, replace)
    Runs series names through a regex search/replace.

    Example:

    &target=aliasSub(ip.*TCP*,"^.*TCP(\d+)","\1")

alpha(seriesList, alpha)
    Assigns the given alpha transparency setting to the series. Takes a float value between 0 and 1.

applyByNode(seriesList, nodeNum, templateFunction, newName=None)
    Takes a seriesList and applies some complicated function (described by a string), replacing templates with unique prefixes of keys from the seriesList (the key is all nodes up to the index given as nodeNum). If the newName parameter is provided, the name of the resulting series will be given by that parameter, with any "%" characters replaced by the unique prefix.

    Example:

    &target=applyByNode(servers.*.disk.bytes_free,1,
      "divideSeries(%.disk.bytes_free,sumSeries(%.disk.bytes_*))")

    Would find all series which match servers.*.disk.bytes_free, then trim them down to unique series up to the node given by nodeNum, then fill them into the template function provided (replacing % by the prefixes).

areaBetween(*seriesLists)
    Draws the vertical area in between the two series in seriesList. Useful for visualizing a range such as the minimum and maximum latency for a service.

    areaBetween expects exactly one argument that results in exactly two series (see example below). The order of the lower and higher values series does not matter. The visualization only works when used in conjunction with areaMode=stacked.

    Most likely use case is to provide a band within which another metric should move.
    In such case, applying an alpha(), as in the second example, gives best visual results.

    Example:

    &target=areaBetween(service.latency.{min,max})&areaMode=stacked
    &target=alpha(areaBetween(service.latency.{min,max}),0.3)&areaMode=stacked

    If for instance, you need to build a seriesList, you should use the group function, like so:

    &target=areaBetween(group(minSeries(a.*.min),maxSeries(a.*.max)))

asPercent(seriesList, total=None)
    Calculates a percentage of the total of a wildcard series. If total is specified, each series will be calculated as a percentage of that total. If total is not specified, the sum of all points in the wildcard series will be used instead.

    The total parameter may be a single series, reference the same number of series as seriesList, or a numeric value.

    Example:

    &target=asPercent(Server01.connections.{failed,succeeded}, Server01.connections.attempted)
    &target=asPercent(Server*.connections.{failed,succeeded}, Server*.connections.attempted)
    &target=asPercent(apache01.threads.busy,1500)
    &target=asPercent(Server01.cpu.*.jiffies)

averageAbove(seriesList, n)
    Takes one metric or a wildcard seriesList followed by an integer N. Out of all metrics passed, draws only the metrics with an average value above N for the time period specified.

    Example:

    &target=averageAbove(server*.instance*.threads.busy,25)

    Draws the servers with average values above 25.

averageBelow(seriesList, n)
    Takes one metric or a wildcard seriesList followed by an integer N. Out of all metrics passed, draws only the metrics with an average value below N for the time period specified.

    Example:

    &target=averageBelow(server*.instance*.threads.busy,25)

    Draws the servers with average values below 25.

averageOutsidePercentile(seriesList, n)
    Removes series lying inside an average percentile interval.

averageSeries(*seriesLists)
    Short Alias: avg()

    Takes one metric or a wildcard seriesList. Draws the average value of all metrics passed at each time.

    Example:

    &target=averageSeries(company.server.*.threads.busy)

averageSeriesWithWildcards(seriesList, *positions)
    Calls averageSeries after inserting wildcards at the given position(s).

    Example:

    &target=averageSeriesWithWildcards(host.cpu-[0-7].cpu-{user,system}.value, 1)

    This would be the equivalent of:

    &target=averageSeries(host.*.cpu-user.value)&target=averageSeries(host.*.cpu-system.value)

cactiStyle(seriesList, system=None, units=None)
    Takes a series list and modifies the aliases to provide column-aligned output with Current, Max, and Min values in the style of cacti. Optionally takes a "system" value to apply unit formatting in the same style as the Y-axis, or a "unit" string to append an arbitrary unit suffix.

    NOTE: column alignment only works with monospace fonts such as terminus.

    Example:

    &target=cactiStyle(ganglia.*.net.bytes_out,"si")
    &target=cactiStyle(ganglia.*.net.bytes_out,"si","b")

changed(seriesList)
    Takes one metric or a wildcard seriesList. Outputs 1 when the value changed, 0 when null or the same.

    Example:

    &target=changed(Server01.connections.handled)

color(seriesList, theColor)
    Assigns the given color to the seriesList.

    Example:

    &target=color(collectd.hostname.cpu.0.user, 'green')
    &target=color(collectd.hostname.cpu.0.system, 'ff0000')
    &target=color(collectd.hostname.cpu.0.idle, 'gray')
    &target=color(collectd.hostname.cpu.0.idle, '6464ffaa')

consolidateBy(seriesList, consolidationFunc)
    Takes one metric or a wildcard seriesList and a consolidation function name.
    Valid function names are 'sum', 'average', 'min', and 'max'.

    When a graph is drawn where the width of the graph in pixels is smaller than the number of datapoints to be graphed, Graphite consolidates the values to prevent line overlap. The consolidateBy() function changes the consolidation function from the default of 'average' to one of 'sum', 'max', or 'min'. This is especially useful in sales graphs, where fractional values make no sense and a 'sum' of consolidated values is appropriate.

    Example:

    &target=consolidateBy(Sales.widgets.largeBlue, 'sum')
    &target=consolidateBy(Servers.web01.sda1.free_space, 'max')

constantLine(value)
    Takes a float F. Draws a horizontal line at value F across the graph.

    Example:

    &target=constantLine(123.456)

countSeries(*seriesLists)
    Draws a horizontal line representing the number of nodes found in the seriesList.

    Example:

    &target=countSeries(carbon.agents.*.*)

cumulative(seriesList)
    Takes one metric or a wildcard seriesList.

    When a graph is drawn where the width of the graph in pixels is smaller than the number of datapoints to be graphed, Graphite consolidates the values to prevent line overlap. The cumulative() function changes the consolidation function from the default of 'average' to 'sum'. This is especially useful in sales graphs, where fractional values make no sense and a 'sum' of consolidated values is appropriate.

    Alias for consolidateBy(series, 'sum')

    Example:

    &target=cumulative(Sales.widgets.largeBlue)

currentAbove(seriesList, n)
    Takes one metric or a wildcard seriesList followed by an integer N. Out of all metrics passed, draws only the metrics whose value is above N at the end of the time period specified.

    Example:

    &target=currentAbove(server*.instance*.threads.busy,50)

    Draws the servers with more than 50 busy threads.

currentBelow(seriesList, n)
    Takes one metric or a wildcard seriesList followed by an integer N. Out of all metrics passed, draws only the metrics whose value is below N at the end of the time period specified.

    Example:

    &target=currentBelow(server*.instance*.threads.busy,3)

    Draws the servers with less than 3 busy threads.

dashed(seriesList, dashLength=5)
    Takes one metric or a wildcard seriesList, followed by a float F. Draws the selected metrics with a dotted line with segments of length F. If omitted, the default length of the segments is 5.0.

    Example:

    &target=dashed(server01.instance01.memory.free,2.5)

delay(seriesList, steps)
    This shifts all samples later by an integer number of steps. This can be used for custom derivative calculations, among other things. Note: this will pad the early end of the data with None for every step shifted.

    This complements other time-displacement functions such as timeShift and timeSlice, in that this function is indifferent about the step intervals being shifted.

    Example:

    &target=divideSeries(server.FreeSpace,delay(server.FreeSpace,1))

    This computes the change in server free space as a percentage of the previous free space.

derivative(seriesList)
    This is the opposite of the integral function. This is useful for taking a running total metric and calculating the delta between subsequent data points.

    This function does not normalize for periods of time, as a true derivative would. Instead see the perSecond() function to calculate a rate of change over time.

    Example:

    &target=derivative(company.server.application01.ifconfig.TXPackets)

    Each time you run ifconfig, the RX and TXPackets are higher (assuming there is network traffic). By applying the derivative function, you can get an idea of the packets per minute sent or received, even though you're only recording the total.
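As a rough illustration of what derivative() computes (not Graphite-API's internal code), pairwise differences with None preserved wherever a delta is undefined:

def derivative(datapoints):
    """Sketch: delta between subsequent values; None where undefined."""
    out, prev = [], None
    for value in datapoints:
        if value is None or prev is None:
            out.append(None)  # no previous point to diff against
        else:
            out.append(value - prev)
        prev = value
    return out

print(derivative([100, 103, 103, None, 110]))  # [None, 3, 0, None, None]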
diffSeries(*seriesLists)
    Subtracts series 2 through n from series 1.

    Example:

    &target=diffSeries(service.connections.total, service.connections.failed)

    To diff a series and a constant, one should use offset instead of (or in addition to) diffSeries.

    Example:

    &target=offset(service.connections.total, -5)
    &target=offset(diffSeries(service.connections.total, service.connections.failed), -4)

divideSeries(dividendSeriesList, divisorSeriesList)
    Takes a dividend metric and a divisor metric and draws the division result. A constant may not be passed. To divide by a constant, use the scale() function (which is essentially a multiplication operation) and use the inverse of the dividend. (Division by 8 = multiplication by 1/8 or 0.125.)

    Example:

    &target=divideSeries(Series.dividends,Series.divisors)

divideSeriesLists(dividendSeriesList, divisorSeriesList)
    Iterates over two lists and divides list1[0] by list2[0], list1[1] by list2[1] and so on. The lists need to be the same length.

drawAsInfinite(seriesList)
    Takes one metric or a wildcard seriesList. If the value is zero, draw the line at 0. If the value is above zero, draw the line at infinity. If the value is null or less than zero, do not draw the line.

    Useful for displaying on/off metrics, such as exit codes. (0 = success, anything else = failure.)

    Example:

    drawAsInfinite(Testing.script.exitCode)

exclude(seriesList, pattern)
    Takes a metric or a wildcard seriesList, followed by a regular expression in double quotes. Excludes metrics that match the regular expression.

    Example:

    &target=exclude(servers*.instance*.threads.busy,"server02")

exponentialMovingAverage(seriesList, windowSize)
    Takes a series of values and a window size and produces an exponential moving average utilizing the following formula:

    ema(current) = constant * (Current Value) + (1 - constant) * ema(previous)

    The constant is calculated as:

    constant = 2 / (windowSize + 1)

    For example, windowSize=9 gives constant = 0.2. The first period EMA uses a simple moving average for its value.

    Example:

    &target=exponentialMovingAverage(*.transactions.count, 10)
    &target=exponentialMovingAverage(*.transactions.count, '-10s')

fallbackSeries(seriesList, fallback)
    Takes a wildcard seriesList, and a second fallback metric. If the wildcard does not match any series, draws the fallback metric.

    Example:

    &target=fallbackSeries(server*.requests_per_second, constantLine(0))

    Draws a 0 line when the server metric does not exist.

formatPathExpressions(seriesList)
    Returns a comma-separated list of unique path expressions.

grep(seriesList, pattern)
    Takes a metric or a wildcard seriesList, followed by a regular expression in double quotes. Excludes metrics that don't match the regular expression.

    Example:

    &target=grep(servers*.instance*.threads.busy,"server02")

group(*seriesLists)
    Takes an arbitrary number of seriesLists and adds them to a single seriesList. This is used to pass multiple seriesLists to a function which only takes one.

groupByNode(seriesList, nodeNum, callback)
    Takes a seriesList and maps a callback to subgroups within, as defined by a common node.
    Example:

    &target=groupByNode(ganglia.by-function.*.*.cpu.load5,2,"sumSeries")

    Would return multiple series which are each the result of applying the "sumSeries" function to groups joined on the second node (0 indexed), resulting in a list of targets like:

    sumSeries(ganglia.by-function.server1.*.cpu.load5),
    sumSeries(ganglia.by-function.server2.*.cpu.load5), ...

groupByNodes(seriesList, callback, *nodes)
    Takes a seriesList and maps a callback to subgroups within, as defined by multiple nodes.

    Example:

    &target=groupByNodes(ganglia.server*.*.cpu.load*,"sumSeries",1,4)

    Would return multiple series which are each the result of applying the "sumSeries" function to groups joined on the nodes' list (0 indexed), resulting in a list of targets like:

    sumSeries(ganglia.server1.*.cpu.load5),
    sumSeries(ganglia.server1.*.cpu.load10),
    sumSeries(ganglia.server1.*.cpu.load15),
    sumSeries(ganglia.server2.*.cpu.load5),
    sumSeries(ganglia.server2.*.cpu.load10),
    sumSeries(ganglia.server2.*.cpu.load15), ...

highestAverage(seriesList, n=1)
    Takes one metric or a wildcard seriesList followed by an integer N. Out of all metrics passed, draws only the top N metrics with the highest average value for the time period specified.

    Example:

    &target=highestAverage(server*.instance*.threads.busy,5)

    Draws the top 5 servers with the highest average value.

highestCurrent(seriesList, n=1)
    Takes one metric or a wildcard seriesList followed by an integer N. Out of all metrics passed, draws only the N metrics with the highest value at the end of the time period specified.

    Example:

    &target=highestCurrent(server*.instance*.threads.busy,5)

    Draws the 5 servers with the highest busy threads.

highestMax(seriesList, n=1)
    Takes one metric or a wildcard seriesList followed by an integer N. Out of all metrics passed, draws only the N metrics with the highest maximum value in the time period specified.

    Example:

    &target=highestMax(server*.instance*.threads.busy,5)

    Draws the top 5 servers who have had the most busy threads during the time period specified.

hitcount(seriesList, intervalString, alignToInterval=False)
    Estimates hit counts from a list of time series. This function assumes the values in each time series represent hits per second. It calculates hits per some larger interval, such as per day or per hour.

    This function is like summarize(), except that it compensates automatically for different time scales (so that a similar graph results from using either fine-grained or coarse-grained records) and handles rarely-occurring events gracefully.

holtWintersAberration(seriesList, delta=3)
    Performs a Holt-Winters forecast using the series as input data and plots the positive or negative deviation of the series data from the forecast.

holtWintersConfidenceArea(seriesList, delta=3)
    Performs a Holt-Winters forecast using the series as input data and plots the area between the upper and lower bands of the predicted forecast deviations.

holtWintersConfidenceBands(seriesList, delta=3)
    Performs a Holt-Winters forecast using the series as input data and plots upper and lower bands with the predicted forecast deviations.

holtWintersForecast(seriesList)
    Performs a Holt-Winters forecast using the series as input data. Data from one week previous to the series is used to bootstrap the initial forecast.

identity(name, step=60)
    Identity function: returns datapoints where the value equals the timestamp of the datapoint.
    Useful when you have another series where the value is a timestamp, and you want to compare it to the time of the datapoint, to render an age.

    Example:

    &target=identity("The.time.series")

    This would create a series named "The.time.series" that contains points where x(t) == t. Accepts an optional second argument as the 'step' parameter (default step is 60 sec).

integral(seriesList)
    This will show the sum over time, sort of like a continuous addition function. Useful for finding totals or trends in metrics that are collected per minute.

    Example:

    &target=integral(company.sales.perMinute)

    This would start at zero on the left side of the graph, adding the sales each minute, and show the total sales for the time period selected at the right side (time now, or the time specified by '&until=').

integralByInterval(seriesList, intervalUnit)
    This will do the same as the integral() function, except resetting the total to 0 at the given time in the parameter "from". Useful for finding totals per hour/day/week/...

    Example:

    &target=integralByInterval(company.sales.perMinute, "1d")&from=midnight-10days

    This would start at zero on the left side of the graph, adding the sales each minute, and show the evolution of sales per day during the last 10 days.

interpolate(seriesList, limit=inf)
    Takes one metric or a wildcard seriesList, and optionally a limit to the number of 'None' values to skip over. Continues the line with the last received value when gaps ('None' values) appear in your data, rather than breaking your line.

    Example:

    &target=interpolate(Server01.connections.handled)
    &target=interpolate(Server01.connections.handled, 10)

invert(seriesList)
    Takes one metric or a wildcard seriesList, and inverts each datapoint (i.e. 1/x).

    Example:

    &target=invert(Server.instance01.threads.busy)

isNonNull(seriesList)
    Takes a metric or wildcard seriesList and counts up how many non-null values are specified. This is useful for understanding which metrics have data at a given point in time (i.e., to count which servers are alive).

    Example:

    &target=isNonNull(webapp.pages.*.views)

    Returns a seriesList where 1 is specified for non-null values, and 0 is specified for null values.

keepLastValue(seriesList, limit=inf)
    Takes one metric or a wildcard seriesList, and optionally a limit to the number of 'None' values to skip over. Continues the line with the last received value when gaps ('None' values) appear in your data, rather than breaking your line.

    Example:

    &target=keepLastValue(Server01.connections.handled)
    &target=keepLastValue(Server01.connections.handled, 10)

legendValue(seriesList, *valueTypes)
    Takes one metric or a wildcard seriesList and a string in quotes. Appends a value to the metric name in the legend. Currently one or several of: last, avg, total, min, max. The last argument can be si (default) or binary; in that case, values will be formatted in the corresponding system.

    Example:

    &target=legendValue(Sales.widgets.largeBlue, 'avg', 'max', 'si')

limit(seriesList, n)
    Takes one metric or a wildcard seriesList followed by an integer N. Only draws the first N metrics. Useful when testing a wildcard in a metric.

    Example:

    &target=limit(server*.instance*.memory.free,5)

    Draws only the first 5 instances' memory free.

lineWidth(seriesList, width)
    Takes one metric or a wildcard seriesList, followed by a float F. Draws the selected metrics with a line width of F, overriding the default value of 1, or the &lineWidth=X.X parameter.
    Useful for highlighting a single metric out of many, or having multiple line widths in one graph.

    Example:

    &target=lineWidth(server01.instance01.memory.free,5)

linearRegression(seriesList, startSourceAt=None, endSourceAt=None)
    Graphs the linear regression function by the least squares method.

    Takes one metric or a wildcard seriesList, followed by a quoted string with the time to start the line and another quoted string with the time to end the line. The start and end times are inclusive (default range is from "from" to "until"). See from / until in the Render API for examples of time formats. Datapoints within the range are used in the regression.

    Example:

    &target=linearRegression(Server.instance01.threads.busy,'-1d')
    &target=linearRegression(Server.instance*.threads.busy, "00:00 20140101","11:59 20140630")

linearRegressionAnalysis(series)
    Returns the factor and offset of the linear regression function by the least squares method.

logarithm(seriesList, base=10)
    Takes one metric or a wildcard seriesList, a base, and draws the y-axis in logarithmic format. If base is omitted, the function defaults to base 10.

    Example:

    &target=log(carbon.agents.hostname.avgUpdateTime,2)

lowestAverage(seriesList, n=1)
    Takes one metric or a wildcard seriesList followed by an integer N. Out of all metrics passed, draws only the bottom N metrics with the lowest average value for the time period specified.

    Example:

    &target=lowestAverage(server*.instance*.threads.busy,5)

    Draws the bottom 5 servers with the lowest average value.

lowestCurrent(seriesList, n=1)
    Takes one metric or a wildcard seriesList followed by an integer N. Out of all metrics passed, draws only the N metrics with the lowest value at the end of the time period specified.

    Example:

    &target=lowestCurrent(server*.instance*.threads.busy,5)

    Draws the 5 servers with the least busy threads right now.

mapSeries(seriesList, mapNode)
    Short form: map()

    Takes a seriesList and maps it to a list of sub-seriesLists. Each sub-seriesList has the given mapNode in common.

    Example (note: this function is not very useful alone; it should be used with reduceSeries()):

    mapSeries(servers.*.cpu.*,1) =>
      [
        servers.server1.cpu.*,
        servers.server2.cpu.*,
        ...
        servers.serverN.cpu.*
      ]

maxSeries(*seriesLists)
    Takes one metric or a wildcard seriesList. For each datapoint from each metric passed in, picks the maximum value and graphs it.

    Example:

    &target=maxSeries(Server*.connections.total)

maximumAbove(seriesList, n)
    Takes one metric or a wildcard seriesList followed by a constant n. Draws only the metrics with a maximum value above n.

    Example:

    &target=maximumAbove(system.interface.eth*.packetsSent,1000)

    This would only display interfaces which at one point sent more than 1000 packets/min.

maximumBelow(seriesList, n)
    Takes one metric or a wildcard seriesList followed by a constant n. Draws only the metrics with a maximum value below n.

    Example:

    &target=maximumBelow(system.interface.eth*.packetsSent,1000)

    This would only display interfaces which always sent less than 1000 packets/min.

minSeries(*seriesLists)
    Takes one metric or a wildcard seriesList. For each datapoint from each metric passed in, picks the minimum value and graphs it.

    Example:

    &target=minSeries(Server*.connections.total)

minimumAbove(seriesList, n)
    Takes one metric or a wildcard seriesList followed by a constant n. Draws only the metrics with a minimum value above n.
    Example:

    &target=minimumAbove(system.interface.eth*.packetsSent,1000)

    This would only display interfaces which always sent more than 1000 packets/min.

minimumBelow(seriesList, n)
    Takes one metric or a wildcard seriesList followed by a constant n. Draws only the metrics with a minimum value below n.

    Example:

    &target=minimumBelow(system.interface.eth*.packetsSent,1000)

    This would only display interfaces which at one point sent less than 1000 packets/min.

mostDeviant(seriesList, n)
    Takes one metric or a wildcard seriesList followed by an integer N. Draws the N most deviant metrics. To find the deviants, the standard deviation (sigma) of each series is taken and ranked. The top N standard deviations are returned.

    Example:

    &target=mostDeviant(server*.instance*.memory.free, 5)

    Draws the 5 instances furthest from the average memory free.

movingAverage(seriesList, windowSize)
    Graphs the moving average of a metric (or metrics) over a fixed number of past points, or a time interval.

    Takes one metric or a wildcard seriesList followed by a number N of datapoints or a quoted string with a length of time like '1hour' or '5min' (see from / until in the Render API for examples of time formats). Graphs the average of the preceding datapoints for each point on the graph.

    Example:

    &target=movingAverage(Server.instance01.threads.busy,10)
    &target=movingAverage(Server.instance*.threads.idle,'5min')

movingMax(seriesList, windowSize)
    Graphs the moving maximum of a metric (or metrics) over a fixed number of past points, or a time interval.

    Takes one metric or a wildcard seriesList followed by a number N of datapoints or a quoted string with a length of time like '1hour' or '5min' (see from / until in the Render API for examples of time formats). Graphs the maximum of the preceding datapoints for each point on the graph.

    Example:

    &target=movingMax(Server.instance01.requests,10)
    &target=movingMax(Server.instance*.errors,'5min')

movingMedian(seriesList, windowSize)
    Graphs the moving median of a metric (or metrics) over a fixed number of past points, or a time interval.

    Takes one metric or a wildcard seriesList followed by a number N of datapoints or a quoted string with a length of time like '1hour' or '5min' (see from / until in the Render API for examples of time formats). Graphs the median of the preceding datapoints for each point on the graph.

    Example:

    &target=movingMedian(Server.instance01.threads.busy,10)
    &target=movingMedian(Server.instance*.threads.idle,'5min')

movingMin(seriesList, windowSize)
    Graphs the moving minimum of a metric (or metrics) over a fixed number of past points, or a time interval.

    Takes one metric or a wildcard seriesList followed by a number N of datapoints or a quoted string with a length of time like '1hour' or '5min' (see from / until in the Render API for examples of time formats). Graphs the minimum of the preceding datapoints for each point on the graph.

    Example:

    &target=movingMin(Server.instance01.requests,10)
    &target=movingMin(Server.instance*.errors,'5min')

movingSum(seriesList, windowSize)
    Graphs the moving sum of a metric (or metrics) over a fixed number of past points, or a time interval.

    Takes one metric or a wildcard seriesList followed by a number N of datapoints or a quoted string with a length of time like '1hour' or '5min' (see from / until in the Render API for examples of time formats). Graphs the sum of the preceding datapoints for each point on the graph.
    Example:

    &target=movingSum(Server.instance01.requests,10)
    &target=movingSum(Server.instance*.errors,'5min')

multiplySeries(*seriesLists)
    Takes two or more series and multiplies their points. A constant may not be used. To multiply by a constant, use the scale() function.

    Example:

    &target=multiplySeries(Series.dividends,Series.divisors)

multiplySeriesWithWildcards(seriesList, *position)
    Calls multiplySeries after inserting wildcards at the given position(s).

    Example:

    &target=multiplySeriesWithWildcards(web.host-[0-7].{avg-response,total-request}.value, 2)

    This would be the equivalent of:

    &target=multiplySeries(web.host-0.{avg-response,total-request}.value)
    &target=multiplySeries(web.host-1.{avg-response,total-request}.value)
    ...

nPercentile(seriesList, n)
    Returns the n-percentile of each series in the seriesList.

nonNegativeDerivative(seriesList, maxValue=None)
    Same as the derivative function above, but ignores datapoints that trend down. Useful for counters that increase for a long time, then wrap or reset. (Such as if a network interface is destroyed and recreated by unloading and re-loading a kernel module, common with USB / WiFi cards.)

    Example:

    &target=nonNegativeDerivative(company.server.application01.ifconfig.TXPackets)

offset(seriesList, factor)
    Takes one metric or a wildcard seriesList followed by a constant, and adds the constant to each datapoint.

    Example:

    &target=offset(Server.instance01.threads.busy,10)

offsetToZero(seriesList)
    Offsets a metric or wildcard seriesList by subtracting the minimum value in the series from each datapoint.

    Useful to compare different series where the values in each series may be higher or lower on average, but you're only interested in the relative difference.

    An example use case is for comparing different round trip time results. When measuring RTT (like pinging a server), different devices may come back with consistently different results due to network latency, which will be different depending on how many network hops are between the probe and the device. To compare different devices in the same graph, the network latency to each has to be factored out of the results. This is a shortcut that takes the fastest response (lowest number in the series) and sets that to zero, and then offsets all of the other datapoints in that series by that amount. This makes the assumption that the lowest response is the fastest the device can respond; of course, the more datapoints that are in the series, the more accurate this assumption is.

    Example:

    &target=offsetToZero(Server.instance01.responseTime)
    &target=offsetToZero(Server.instance*.responseTime)

perSecond(seriesList, maxValue=None)
    nonNegativeDerivative adjusted for the series' time interval. This is useful for taking a running total metric and showing how many requests per second were handled.

    Example:

    &target=perSecond(company.server.application01.ifconfig.TXPackets)

    Each time you run ifconfig, the RX and TXPackets are higher (assuming there is network traffic). By applying the perSecond function, you can get an idea of the packets per second sent or received, even though you're only recording the total.

percentileOfSeries(seriesList, n, interpolate=False)
    percentileOfSeries returns a single series which is composed of the n-percentile values taken across a wildcard series at each point. Unless interpolate is set to True, percentile values are actual values contained in one of the supplied series.
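A rough sketch of the normalization perSecond() applies on top of the non-negative delta, assuming a fixed step in seconds (illustrative only, not Graphite-API's internal code; counter wraps are simply dropped here rather than recovered via maxValue):

def per_second(datapoints, step):
    """Sketch: non-negative delta divided by the series step (seconds)."""
    out, prev = [], None
    for value in datapoints:
        if value is None or prev is None or value < prev:
            out.append(None)  # missing data or counter reset
        else:
            out.append((value - prev) / step)
        prev = value
    return out

print(per_second([1000, 1600, 100, 700], 60))  # [None, 10.0, None, 10.0]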
pow(seriesList, factor)
    Takes one metric or a wildcard seriesList followed by a constant, and raises the datapoint to the power of the constant provided at each point.

    Example:

    &target=pow(Server.instance01.threads.busy,10)
    &target=pow(Server.instance*.threads.busy,10)

powSeries(*seriesLists)
    Takes two or more series and pows their points. A constant line may be used.

    Example:

    &target=powSeries(Server.instance01.app.requests, Server.instance01.app.replies)

randomWalkFunction(name, step=60)
    Short Alias: randomWalk()

    Returns a random walk starting at 0. This is great for testing when there is no real data in whisper.

    Example:

    &target=randomWalk("The.time.series")

    This would create a series named "The.time.series" that contains points where x(t) == x(t-1)+random()-0.5, and x(0) == 0. Accepts an optional second argument as the step parameter (default step is 60 sec).

rangeOfSeries(*seriesLists)
    Takes a wildcard seriesList. Distills down a set of inputs into the range of the series.

    Example:

    &target=rangeOfSeries(Server*.connections.total)

reduceSeries(seriesLists, reduceFunction, reduceNode, *reduceMatchers)
    Short form: reduce()

    Takes a list of seriesLists and reduces it to a list of series by means of the reduceFunction.

    Reduction is performed by matching the reduceNode in each series against the list of reduceMatchers. Each series is then passed to the reduceFunction as arguments in the order given by reduceMatchers. The reduceFunction should yield a single series.

    The resulting list of series is aliased so that they can easily be nested in other functions.

    Example: Map/Reduce asPercent(bytes_used,total_bytes) for each server. Assume that metrics in the form below exist:

    servers.server1.disk.bytes_used
    servers.server1.disk.total_bytes
    servers.server2.disk.bytes_used
    servers.server2.disk.total_bytes
    servers.server3.disk.bytes_used
    servers.server3.disk.total_bytes
    ...
    servers.serverN.disk.bytes_used
    servers.serverN.disk.total_bytes

    To get the percentage of disk used for each server:

    reduceSeries(mapSeries(servers.*.disk.*,1), "asPercent",3,"bytes_used","total_bytes") =>
      alias(asPercent(servers.server1.disk.bytes_used, servers.server1.disk.total_bytes), "servers.server1.disk.reduce.asPercent"),
      alias(asPercent(servers.server2.disk.bytes_used, servers.server2.disk.total_bytes), "servers.server2.disk.reduce.asPercent"),
      ...
      alias(asPercent(servers.serverN.disk.bytes_used, servers.serverN.disk.total_bytes), "servers.serverN.disk.reduce.asPercent")

    In other words, we will get back the following metrics:

    servers.server1.disk.reduce.asPercent,
    servers.server2.disk.reduce.asPercent,
    ...
    servers.serverN.disk.reduce.asPercent

    See also: mapSeries()

removeAbovePercentile(seriesList, n)
    Removes data above the nth percentile from the series or list of series provided. Values above this percentile are assigned a value of None.

removeAboveValue(seriesList, n)
    Removes data above the given threshold from the series or list of series provided. Values above this threshold are assigned a value of None.

removeBelowPercentile(seriesList, n)
    Removes data below the nth percentile from the series or list of series provided. Values below this percentile are assigned a value of None.

removeBelowValue(seriesList, n)
    Removes data below the given threshold from the series or list of series provided. Values below this threshold are assigned a value of None.
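The four remove*Value / remove*Percentile functions above share one idea: out-of-range datapoints become None. A sketch of the value-threshold variant (illustrative only):

def remove_above_value(datapoints, n):
    """Sketch of removeAboveValue: values above n become None."""
    return [v if v is None or v <= n else None for v in datapoints]

print(remove_above_value([1, 5, None, 12, 3], 10))  # [1, 5, None, None, 3]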
removeBetweenPercentile(seriesList, n) Removes series that do not have a value lying within the n-th percentile of all the values at a given point in time.

removeEmptySeries(seriesList) Takes one metric or a wildcard seriesList. Out of all metrics passed, draws only the metrics with non-empty data.

Example: &target=removeEmptySeries(server*.instance*.threads.busy)

Draws only live servers with non-empty data.

scale(seriesList, factor) Takes one metric or a wildcard seriesList followed by a constant, and multiplies each datapoint by the constant provided.

Example: &target=scale(Server.instance01.threads.busy,10) &target=scale(Server.instance*.threads.busy,10)

scaleToSeconds(seriesList, seconds) Takes one metric or a wildcard seriesList and returns "value per seconds", where seconds is the last argument to this function. Useful in conjunction with the derivative or integral functions if you want to normalize their results to a known resolution for arbitrary retentions.

secondYAxis(seriesList) Graph the series on the secondary Y axis.

sinFunction(name, amplitude=1, step=60) Short Alias: sin(). Just returns the sine of the current time. The optional amplitude parameter changes the amplitude of the wave.

Example: &target=sin("The.time.series", 2)

This would create a series named "The.time.series" that contains sin(x)*2. A third argument can be provided as a step parameter (default is 60 secs).

smartSummarize(seriesList, intervalString, func='sum') Smarter, experimental version of summarize.

sortByMaxima(seriesList) Takes one metric or a wildcard seriesList. Sorts the list of metrics by the maximum value across the time period specified. Useful with the &areaMode=all parameter, to keep the lowest-value lines visible.

Example: &target=sortByMaxima(server*.instance*.memory.free)

sortByMinima(seriesList) Takes one metric or a wildcard seriesList. Sorts the list of metrics by the lowest value across the time period specified.

Example: &target=sortByMinima(server*.instance*.memory.free)

sortByName(seriesList, natural=False) Takes one metric or a wildcard seriesList. Sorts the list of metrics by the metric name using either alphabetical order or natural sorting. Natural sorting allows names containing numbers to be sorted more naturally, e.g.:

•Alphabetical sorting: server1, server11, server12, server2
•Natural sorting: server1, server2, server11, server12

sortByTotal(seriesList) Takes one metric or a wildcard seriesList. Sorts the list of metrics by the sum of values across the time period specified.

squareRoot(seriesList) Takes one metric or a wildcard seriesList, and computes the square root of each datapoint.

Example: &target=squareRoot(Server.instance01.threads.busy)

stacked(seriesLists, stackName='__DEFAULT__') Takes one metric or a wildcard seriesList and changes them so they are stacked. This is a way of stacking just a couple of metrics without having to use the stacked area mode (which stacks everything), so mixed stacked and non-stacked graphs can be made. It can also take an optional argument with a name for the stack, in case there is more than one, e.g. for input and output metrics.

Example: &target=stacked(company.server.application01.ifconfig.TXPackets, 'tx')

stddevSeries(*seriesLists) Takes one metric or a wildcard seriesList. Draws the standard deviation of all metrics passed at each time.

Example: &target=stddevSeries(company.server.*.threads.busy)

stdev(seriesList, points, windowTolerance=0.1) Takes one metric or a wildcard seriesList followed by an integer N. Draws the standard deviation of all metrics passed for the past N datapoints. If the ratio of null points in the window is greater than windowTolerance, the calculation is skipped. The default for windowTolerance is 0.1 (up to 10% of points in the window can be missing). Note that if this is set to 0.0, it will cause large gaps in the output anywhere a single point is missing.

Example: &target=stdev(server*.instance*.threads.busy,30) &target=stdev(server*.instance*.cpu.system,30,0.0)

substr(seriesList, start=0, stop=0) Takes one metric or a wildcard seriesList followed by 1 or 2 integers. Assume that the metric name is a list or array, with each element separated by dots. Prints the elements from position n to the end of the array (if only one integer n is passed) or the elements from position n up to position m (if two integers n and m are passed). The list starts with element 0 and ends with element (length - 1).

Example: &target=substr(carbon.agents.hostname.avgUpdateTime,2,4)

The label would be printed as "hostname.avgUpdateTime".

sumSeries(*seriesLists) Short form: sum(). This will add metrics together and return the sum at each datapoint. (See integral for a sum over time.)

Example: &target=sum(company.server.application*.requestsHandled)

This would show the sum of all requests handled per minute (provided requestsHandled are collected once a minute). If metrics with different retention rates are combined, the coarsest metric is graphed, and the sum of the other metrics is averaged for the metrics with finer retention rates.

sumSeriesWithWildcards(seriesList, *positions) Call sumSeries after inserting wildcards at the given position(s).

Example: &target=sumSeriesWithWildcards(host.cpu-[0-7].cpu-{user,system}.value, 1)

This would be the equivalent of: &target=sumSeries(host.*.cpu-user.value)&target=sumSeries( host.*.cpu-system.value)

summarize(seriesList, intervalString, func='sum', alignToFrom=False) Summarize the data into interval buckets of a certain size. By default, the contents of each interval bucket are summed together. This is useful for counters where each increment represents a discrete event and retrieving a "per X" value requires summing all the events in that interval. Specifying 'avg' instead will return the mean for each bucket, which can be more useful when the value is a gauge that represents a certain value in time. 'max', 'min' or 'last' can also be specified. By default, buckets are calculated by rounding to the nearest interval. This works well for intervals smaller than a day. For example, 22:32 will end up in the bucket 22:00-23:00 when the interval=1hour. Passing alignToFrom=true will instead create buckets starting at the from time. In this case, the bucket for 22:32 depends on the from time. If from=6:30 then the 1hour bucket for 22:32 is 22:30-23:30.

Example:
# total errors per hour
&target=summarize(counter.errors, "1hour")
# new users per week
&target=summarize(nonNegativeDerivative(gauge.num_users), "1week")
# average queue size per hour
&target=summarize(queue.size, "1hour", "avg")
# maximum queue size during each hour
&target=summarize(queue.size, "1hour", "max")
# 2010 Q1-4
&target=summarize(metric, "13week", "avg", true)&from=midnight+20100101
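The bucket-alignment rule for summarize can be made concrete with a small sketch (assumed helper name, not part of graphite_api); it reproduces the 22:32 example above:

    # Which bucket a timestamp falls into, with and without alignToFrom.
    def bucket_start(ts, interval, from_time=None, align_to_from=False):
        if align_to_from:
            return from_time + ((ts - from_time) // interval) * interval
        return (ts // interval) * interval   # floored to the interval boundary

    hour = 3600
    # 22:32 with interval=1hour falls in the 22:00-23:00 bucket...
    assert bucket_start(22 * hour + 32 * 60, hour) == 22 * hour
    # ...but with alignToFrom and from=6:30 it falls in 22:30-23:30.
    assert bucket_start(22 * hour + 32 * 60, hour, 6 * hour + 30 * 60, True) == 22 * hour + 30 * 60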
threshold(value, label=None, color=None) Takes a float F, followed by a label (in double quotes) and a color. (See bgcolor in the render_api_ for valid color names & formats.) Draws a horizontal line at value F across the graph.

Example: &target=threshold(123.456, "omgwtfbbq", "red")

timeFunction(name, step=60) Short Alias: time(). Just returns the timestamp for each X value.

Example: &target=time("The.time.series")

This would create a series named "The.time.series" that contains in Y the same value (in seconds) as X. A second argument can be provided as a step parameter (default is 60 secs).

timeShift(seriesList, timeShift, resetEnd=True, alignDST=False) Takes one metric or a wildcard seriesList, followed by a quoted string with the length of time (see from / until in the render_api_ for examples of time formats). Draws the selected metrics shifted in time. If no sign is given, a minus sign ( - ) is implied, which will shift the metric back in time. If a plus sign ( + ) is given, the metric will be shifted forward in time. Will reset the end date range automatically to the end of the base stat unless resetEnd is False. An example case is when you timeshift to last week and have the graph date range set to include a time in the future: this will limit the timeshift to pretend it ends at the current time. If resetEnd is False, the full range including future time will be drawn instead. Because time is shifted by a fixed number of seconds, comparing a time period with DST to a time period without DST, and vice-versa, will result in an apparent misalignment. For example, 8am might be overlaid with 7am. To compensate for this, use the alignDST option. Useful for comparing a metric against itself at past periods or for correcting data stored at an offset.

Example: &target=timeShift(Sales.widgets.largeBlue,"7d") &target=timeShift(Sales.widgets.largeBlue,"-7d") &target=timeShift(Sales.widgets.largeBlue,"+1h")

timeSlice(seriesList, startSliceAt, endSliceAt='now') Takes one metric or a wildcard metric, followed by a quoted string with the time to start the line and another quoted string with the time to end the line. The start and end times are inclusive. See from / until in the render api for examples of time formats. Useful for filtering out a part of a series of data from a wider range of data.

Example: &target=timeSlice(network.core.port1,"00:00 20140101","11:59 20140630") &target=timeSlice(network.core.port1,"12:00 20140630","now")
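A sketch of the masking idea behind timeSlice (illustrative names only): points outside the requested window are replaced with None, so the series keeps its alignment.

    def time_slice(timestamps, values, start, end):
        return [v if start <= t <= end else None
                for t, v in zip(timestamps, values)]

    # e.g. time_slice([10, 20, 30], [1, 2, 3], 15, 30) -> [None, 2, 3]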
timeStack(seriesList, timeShiftUnit, timeShiftStart, timeShiftEnd) Takes one metric or a wildcard seriesList, followed by a quoted string with the length of time (see from / until in the render_api_ for examples of time formats). Also takes a start multiplier and an end multiplier for the length of time. Creates a seriesList which is composed of the original metric series stacked with time shifts from the start multiplier through the end multiplier. Useful for looking at history, or for feeding into averageSeries or stddevSeries.

Example:
# create a series for today and each of the previous 7 days
&target=timeStack(Sales.widgets.largeBlue,"1d",0,7)

transformNull(seriesList, default=0, referenceSeries=None) Takes a metric or wildcard seriesList and replaces null values with the value specified by default. The value 0 is used if default is not specified. The optional referenceSeries, if specified, is a metric or wildcard series list that governs in which time intervals nulls should be replaced. If specified, nulls are replaced only in intervals where a non-null is found for the same interval in any of referenceSeries. This method complements the drawNullAsZero function in graphical mode, but also works in text-only mode.

Example: &target=transformNull(webapp.pages.*.views,-1)

This would take any page that didn't have values and supply negative 1 as a default. Any other numeric value may be used as well.

useSeriesAbove(seriesList, value, search, replace) Compares the maximum of each series against the given value. If the series maximum is greater than value, the regular expression search and replace is applied against the series name to plot a related metric. E.g. given useSeriesAbove(ganglia.metric1.reqs,10,'reqs','time'), the response time metric will be plotted only when the maximum value of the corresponding request/s metric is > 10.

Example: &target=useSeriesAbove(ganglia.metric1.reqs,10,"reqs","time")

verticalLine(ts, label=None, color=None) Takes a timestamp string ts. Draws a vertical line at the designated timestamp with optional 'label' and 'color'. Supported timestamp formats include both relative (e.g. -3h) and absolute (e.g. 16:00_20110501) strings, such as those used with from and until parameters. When set, the 'label' will appear in the graph legend. Note: any timestamps defined outside the requested range will raise a 'ValueError' exception.

Example: &target=verticalLine("12:34_20131108","event","blue") &target=verticalLine("16:00_20110501","event") &target=verticalLine("-5mins")

weightedAverage(seriesListAvg, seriesListWeight, *nodes) Takes a series of average values and a series of weights and produces a weighted average for all values. The corresponding values should share one or more zero-indexed nodes.

Example: &target=weightedAverage(*.transactions.mean,*.transactions.count,0) &target=weightedAverage(*.transactions.mean,*.transactions.count,1,3,4)

Storage finders

Graphite-API searches and fetches metrics from time series databases using an interface called finders. The default finder provided with Graphite-API is the one that integrates with Whisper databases.

Customizing finders can be done in the finders section of the Graphite-API configuration file:

finders:
  - graphite_api.finders.whisper.WhisperFinder

Several values are allowed, to let you store different kinds of metrics at different places or smoothly handle transitions from one time series database to another. The default finder reads data from a Whisper database.

Custom finders

finders being a list of arbitrary python paths, it is relatively easy to write a custom finder if you want to read data from other places than Whisper. A finder is a python class with a find_nodes() method:

class CustomFinder(object):
    def find_nodes(self, query):
        # ... query is a FindQuery object.

find_nodes() is the entry point when browsing the metrics tree. It must yield leaf or branch nodes matching the query:

from graphite_api.node import LeafNode, BranchNode

class CustomFinder(object):
    def find_nodes(self, query):
        # find some paths matching the query, then yield them
        # is_branch or is_leaf are predicates you need to implement
        for path in matches:
            if is_branch(path):
                yield BranchNode(path)
            if is_leaf(path):
                yield LeafNode(path, CustomReader(path))

LeafNode is created with a reader, which is the class responsible for fetching the datapoints for the given path. It is a simple class with 2 methods: fetch() and get_intervals():

from graphite_api.intervals import IntervalSet, Interval

class CustomReader(object):
    # __slots__ is recommended to save memory on readers
    __slots__ = ('path',)

    def __init__(self, path):
        self.path = path

    def fetch(self, start_time, end_time):
        # fetch data
        time_info = _from_, _to_, _step_
        return time_info, series

    def get_intervals(self):
        return IntervalSet([Interval(start, end)])

fetch() must return a list of 2 elements: the time info for the data and the datapoints themselves. The time info is a list of 3 items: the start time of the datapoints (in unix time), the end time and the time step (in seconds) between the datapoints. The datapoints is a list of points found in the database for the required interval. There must be (end - start) / step points in the dataset even if the database has gaps: gaps can be filled with None values.

get_intervals() is a method that hints graphite-web about the time range available for this given metric in the database. It must return an IntervalSet of one or more Interval objects.

Fetching multiple paths at once

If your storage backend allows it, fetching multiple paths at once is useful to avoid sequential fetches and save time and resources. This can be achieved in three steps:

• Subclass LeafNode and add a __fetch_multi__ class attribute to your subclass:

class CustomLeafNode(LeafNode):
    __fetch_multi__ = 'custom'

The string 'custom' is used to identify backends and needs to be unique per-backend.

• Add the __fetch_multi__ attribute to your finder class:

class CustomFinder(object):
    __fetch_multi__ = 'custom'

• Implement a fetch_multi() method on your finder:

class CustomFinder(object):
    def fetch_multi(self, nodes, start_time, end_time):
        paths = [node.path for node in nodes]
        # fetch paths
        return time_info, series

time_info is the same structure as the one returned by fetch(). series is a dictionary with paths as keys and datapoints as values.

Installing custom finders

In order for your custom finder to be importable, you need to package it under a namespace of your choice. Python packaging won't be covered here but you can look at third-party finders to get some inspiration:

• Cyanite finder
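For orientation, here is a condensed, self-contained sketch combining the pieces above into one toy finder (static in-memory data and illustrative names; a real finder would query its backend and honour query.pattern instead of yielding everything):

    from graphite_api.intervals import Interval, IntervalSet
    from graphite_api.node import BranchNode, LeafNode

    DATA = {'demo.foo': [1, 2, 3], 'demo.bar': [4, 5, 6]}

    class DemoReader(object):
        __slots__ = ('path',)

        def __init__(self, path):
            self.path = path

        def fetch(self, start_time, end_time):
            series = DATA[self.path]
            step = (end_time - start_time) // len(series)  # naive fixed step
            return (start_time, end_time, step), series

        def get_intervals(self):
            return IntervalSet([Interval(0, 2 ** 31 - 1)])

    class DemoFinder(object):
        def find_nodes(self, query):
            # a real finder matches query.pattern; this toy yields everything
            yield BranchNode('demo')
            for path in DATA:
                yield LeafNode(path, DemoReader(path))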
Configuration

Graphite-API instantiates finders and passes them its whole parsed configuration file, as a Python data structure. External finders can require extra sections in the configuration file to set up access to the time series database they communicate with. For instance, let's say your CustomFinder needs two configuration parameters, a host and a user:

class CustomFinder(object):
    def __init__(self, config):
        config.setdefault('custom', {})
        self.user = config['custom'].get('user', 'default')
        self.host = config['custom'].get('host', 'localhost')

The configuration file would look like:

finders:
  - custom.CustomFinder
custom:
  user: myuser
  host: example.com

When possible, try to use sane defaults that would "just work" for most common setups. Here, if the custom section isn't provided, the finder uses default as the user and localhost as the host.

Custom functions

Just like with storage finders, it is possible to extend Graphite-API to add custom processing functions. To give an example, let's implement a function that reverses the time series, placing old values at the end and recent values at the beginning.

# reverse.py
# (the truncated TimeSeries(...) call is completed with the step and the
# reversed datapoints; the import path for TimeSeries is assumed here)
from graphite_api.render.datalib import TimeSeries

def reverseSeries(requestContext, seriesList):
    reverse = []
    for series in seriesList:
        reverse.append(TimeSeries(series.name, series.start, series.end,
                                  series.step, series[::-1]))
    return reverse

The first argument, requestContext, holds some information about the request parameters. seriesList is the list of paths found for the request target.

Once you've created your function, declare it in a dictionary:

ReverseFunctions = {
    'reverseSeries': reverseSeries,
}

Add your module to the Graphite-API Python path and add it to the configuration:

functions:
  - graphite_api.functions.SeriesFunctions
  - graphite_api.functions.PieFunctions
  - reverse.ReverseFunctions

Graphite-API releases

1.1.3 – 2016-05-23
• Remove extra parenthesis from aliasByMetric().
• Fix leap year handling in graphite_api.render.attime.
• Allow colon and hash in node names in aliasByNode().
• Fix calling reduceFunction in reduceSeries.
• Revert a whisper patch which broke multiple retentions handling.
• Specify which function is invalid when providing an invalid consolidation function.

1.1.2 – 2015-11-19
• Fix regression in multi fetch handling: paths were queried multiple times, leading to erroneous behaviour and slowdown.
• Continue on IndexError in remove{Above,Below}Percentile functions.

1.1.1 – 2015-10-23
• Fix areaMode=stacked.
• Fix error when calling functions that use fetchWithBootstrap when the bootstrap range isn't available (fill with nulls instead).

1.1 – 2015-10-05
• Add CarbonLink support.
• Add support for configuring a cache backend and the noCache and cacheTimeout API options.
• When no timezone is provided in the configuration file, try to guess from the system's timezone with a fallback to UTC.
• Now supporting Flask >= 0.8 and Pyparsing >= 1.5.7.
• Add support for fetch_multi() in storage finders. This is useful for database-backed finders such as Cyanite because it allows fetching all time series at once instead of sequentially.
• Add multiplySeriesWithWildcards, minimumBelow, changed, timeSlice and removeEmptySeries functions.
• Add optional step argument to time, sin and randomWalk functions.
• Add /metrics API call as an alias to /metrics/find.
• Add missing /metrics/index.json API call.
• Allow wildcard origins (*) in CORS configuration.
• Whisper finder now logs debug information.
• Fix parsing of dates such as "feb27" during months with more than 28 days.
• Change sum() to return null instead of 0 when all series' datapoints are null at the same time. This is graphite-web's behavior.
• Extract paths of all targets before fetching data. This is a significant optimization for storage backends such as Cyanite that allow bulk-fetching metrics.
• Add JSONP support to all API endpoints that can return JSON.
• Fix 500 error when generating a SVG graph without any data.
• Return tracebacks in the HTTP response when app errors occur. This behavior can be disabled in the configuration.
• Fixes for the following graphite-web issues:
– #639 – proper timezone handling of from and until with client-supplied timezones.
– #540 – provide the last data point when rendering to JSON format.
– #381 – make areaBetween() work either when passed 2 arguments or a single wildcard series of length 2.
– #702 – handle backslash as path separator on windows.
– #410 – SVG output sometimes had an extra </g> tag.

1.0.1 – 2014-03-21
• time_zone set to UTC by default instead of Europe/Berlin.
• Properly log app exceptions.
• Fix constantLine for python 3.
• Create whisper directories if they don't exist.
• Fixes for the following graphite-web issues:
– #645, #625 – allow constantLine to work even if there are no other targets in the graph.

1.0.0 – 2014-03-20
Version 1.0 is based on the master branch of Graphite-web, mid-March 2014, with the following modifications:
• New /index API endpoint for re-building the index (replaces the build-index command-line script from graphite-web).
• Removal of memcache integration.
• Removal of Pickle integration.
• Removal of remote rendering.
• Support for Python 3.
• A lot more tests and test coverage.
• Fixes for the following graphite-web issues:
– (meta) #647 – strip out the API from graphite-web.
– #665 – address some DeprecationWarnings.
– #658 – accept a float value in maxDataPoints.
– #654 – ignore invalid logBase values (<=1).
– #591 – accept JSON data in addition to querystring params or form data.
Package ‘EATME’ October 12, 2022

Type Package
Title Exponentially Weighted Moving Average with Adjustments to Measurement Error
Version 0.1.0
Description The univariate statistical quality control tool aims to address measurement error effects when constructing exponentially weighted moving average p control charts. The method primarily focuses on binary random variables, but it can be applied to any continuous random variables by using the sign statistic to transform them to discrete ones. With the correction of measurement error effects, we can obtain the corrected control limits of the exponentially weighted moving average p control chart and reasonably adjusted exponentially weighted moving average p control charts. The methods in this package can be found in some relevant references, such as Chen and Yang (2022) <arXiv: 2203.03384>; Yang et al. (2011) <doi:10.1016/j.eswa.2010.11.044>; Yang an
License GPL-3
Encoding UTF-8
Imports qcr, stats, graphics
RoxygenNote 7.1.2
Suggests knitr, rmarkdown
NeedsCompilation no
Author <NAME> Developer [aut, cre, cph], <NAME> <NAME> [aut]
Maintainer <NAME> Developer <<EMAIL>>
Repository CRAN
Date/Publication 2022-05-17 10:10:06 UTC

R topics documented: cont_to_disc_M, cont_to_disc_V, ewma, EWMA_p_chart_one_LCL, EWMA_p_chart_one_UCL, EWMA_p_chart_two, EWMA_p_one_LCL, EWMA_p_one_UCL, EWMA_p_two, ME_data_generate

cont_to_disc_M Convert data to M statistic

Description
Convert continuous random variables in the in-control process into discrete random variables with the M statistic, where the M statistic is the total number of samples satisfying Xij > µ at time i. Here Xij is the observation for the ith sampling period and the jth sample in the in-control data, n is the sample size and m is the number of sampling periods. µ is the population mean of the continuous in-control data. If µ is unknown, it can be estimated by µ̂ = x̄ = (Σ_{i=1}^{m} Σ_{j=1}^{n} Xij) / (n × m).

Usage
cont_to_disc_M(ICdata, OCdata, mu.p = mean(ICdata))

Arguments
ICdata The in-control data.
OCdata The out-of-control data.
mu.p Mean of the random variable in the in-control data.

Value
M0 The M statistic for in-control data.
M1 The M statistic for out-of-control data.
p0 The process proportion for in-control data.
p1 The process proportion for out-of-control data.
n The sample size.

References
<NAME>., <NAME>., & <NAME>. (2011). A new nonparametric EWMA sign control chart. Expert Systems with Applications, 38(5), 6239-6243.
<NAME>. & <NAME>. (2014). A simple approach for monitoring business service time variation. The Scientific World Journal, 2014:16.
<NAME>. (2016). An improved distribution-free EWMA mean chart. Communications in Statistics - Simulation and Computation, 45(4), 1410-1427.

Examples
IC = matrix(rnorm(100,0,1),ncol = 10,byrow = TRUE)
OC = matrix(rnorm(100,2,1),ncol = 10,byrow = TRUE)
cont_to_disc_M(IC,OC)

cont_to_disc_V Convert data to V statistic

Description
Convert continuous random variables in the in-control process to discrete data with the V statistic, where the V statistic is the total number of samples satisfying Yij = (X_{i,2j} − X_{i,2j−1})² / 2 > σ² at time i. Here Xij is the observation for the ith sampling period and the jth sample in the in-control data, n is the sample size and m is the number of sampling periods. σ² is the population variance of the continuous in-control data. If σ² is unknown, it can be estimated by σ̂² = (Σ_{i=1}^{m} Sᵢ²) / m, where Sᵢ² = (Σ_{j=1}^{n} (Xij − X̄ᵢ)²) / (n − 1).
Usage
cont_to_disc_V(ICdata, OCdata, var.p = NULL)

Arguments
ICdata The in-control data.
OCdata The out-of-control data.
var.p Variance of the random variables in the in-control data.

Value
V0 The V statistic for in-control data.
V1 The V statistic for out-of-control data.
p0 The process proportion for in-control data.
p1 The process proportion for out-of-control data.
n The sample size.

References
<NAME>. & <NAME>. (2014). A simple approach for monitoring business service time variation. The Scientific World Journal, 2014:16.
<NAME>., & <NAME>. (2016). A new approach for monitoring process variance. Journal of Statistical Computation and Simulation, 86(14), 2749-2765.

Examples
IC = matrix(rnorm(100,0,1),ncol = 10,byrow = TRUE)
OC = matrix(rnorm(100,0,2),ncol = 10,byrow = TRUE)
cont_to_disc_V(IC,OC)

ewma EWMA chart statistics of the data

Description
A conventional exponentially weighted moving average (EWMA) charting statistic evaluated on the data.

Usage
ewma(data, lambda, EWMA0)

Arguments
data A one-dimensional random variable.
lambda An EWMA smoothing constant, which is a scalar in [0,1].
EWMA0 A starting point of the EWMA charting statistic.

Value
A vector of EWMA charting statistics of the data at different times t.

Examples
x = rnorm(20,0,1)
ewma(x,0.05,0)

EWMA_p_chart_one_LCL A one-sided lower EWMA-p control chart

Description
This function displays one-sided lower EWMA-p control charts based on in-control and out-of-control data that are numbers of defectives. In the presence of measurement error, this function is able to provide suitable charts with corrections of measurement error effects.

Usage
EWMA_p_chart_one_LCL(ICdata, OCdata, lambda, n, pi1 = 1, pi2 = pi1, ARL0 = 200, M = 500, error = 10)

Arguments
ICdata The in-control data for attributes.
OCdata The out-of-control data for attributes.
lambda An EWMA smoothing constant, which is a scalar in [0,1].
n The sample size of the data.
pi1 The proportion of observed defectives that are the same as unobserved ones.
pi2 The proportion of observed non-defectives that are the same as unobserved ones.
ARL0 A prespecified average run length (ARL) of a control chart in the in-control process.
M The number of simulation runs for the Monte Carlo method.
error The tolerance for the absolute difference between an iterated ARL value and the prespecified ARL0.

Value
The first chart is an EWMA-p chart obtained from the in-control data, and the second chart is an EWMA-p chart based on the out-of-control data. In the two figures, the horizontal solid line represents the lower control limit (LCL), black solid dots are detections of in-control data, and red solid dots are detections of out-of-control data.

References
<NAME>. & <NAME>. (2022). A new p-chart with measurement error correction. arXiv: 2203.03384.

Examples
library(qcr)
data = orangejuice
IC = data[1:30,1]
OC = data[31:54,1]
EWMA_p_chart_one_LCL(IC,OC,0.05,50,1,1)

EWMA_p_chart_one_UCL A one-sided upper EWMA-p control chart

Description
This function displays one-sided upper EWMA-p control charts based on in-control and out-of-control data that are numbers of defectives. In the presence of measurement error, this function is able to provide suitable charts with corrections of measurement error effects.

Usage
EWMA_p_chart_one_UCL(ICdata, OCdata, lambda, n, pi1 = 1, pi2 = pi1, ARL0 = 200, M = 500, error = 10)

Arguments
ICdata The in-control data for attributes.
OCdata The out-of-control data for attributes.
lambda An EWMA smoothing constant, which is a scalar in [0,1].
n The sample size of the data.
pi1 The proportion of observed defectives that are the same as unobserved ones.
pi2 The proportion of observed non-defectives that are the same as unobserved ones.
ARL0 A prespecified average run length (ARL) of a control chart in the in-control process.
M The number of simulation runs for the Monte Carlo method.
error The tolerance for the absolute difference between an iterated ARL value and the prespecified ARL0.

Value
The first chart is an EWMA-p chart obtained from the in-control data, and the second chart is an EWMA-p chart based on the out-of-control data. In the two figures, the horizontal solid line represents the upper control limit (UCL), black solid dots are detections of in-control data, and red solid dots are detections of out-of-control data.

References
<NAME>. & <NAME>. (2022). A new p-chart with measurement error correction. arXiv: 2203.03384.

Examples
library(qcr)
data = orangejuice
IC = data[31:54,1]
OC = data[1:30,1]
EWMA_p_chart_one_UCL(IC,OC,0.05,50,1,1)

EWMA_p_chart_two A two-sided EWMA-p control chart

Description
This function displays two-sided EWMA-p control charts based on in-control and out-of-control data that are numbers of defectives. In the presence of measurement error, this function is able to provide suitable charts with corrections of measurement error effects.

Usage
EWMA_p_chart_two(ICdata, OCdata, lambda, n, pi1 = 1, pi2 = pi1, ARL0 = 200, M = 500, error = 10)

Arguments
ICdata The in-control data for attributes.
OCdata The out-of-control data for attributes.
lambda An EWMA smoothing constant, which is a scalar in [0,1].
n The sample size of the data.
pi1 The proportion of observed defectives that are the same as unobserved ones.
pi2 The proportion of observed non-defectives that are the same as unobserved ones.
ARL0 A prespecified average run length (ARL) of a control chart in the in-control process.
M The number of simulation runs for the Monte Carlo method.
error The tolerance for the absolute difference between an iterated ARL value and the prespecified ARL0.

Value
The first chart is an EWMA-p chart obtained from the in-control data, and the second chart is an EWMA-p chart based on the out-of-control data. In the two figures, horizontal solid lines represent the upper control limit (UCL) and the lower control limit (LCL), black solid dots are detections of in-control data, and red solid dots are detections of out-of-control data.

References
<NAME>. & <NAME>. (2022). A new p-chart with measurement error correction. arXiv: 2203.03384.

Examples
library(qcr)
data = orangejuice
IC = data[31:54,1]
OC = data[1:30,1]
set.seed(2)
EWMA_p_chart_two(IC,OC,0.05,50,1,1,200,100,20)

EWMA_p_one_LCL The one-sided lower control limit of an EWMA-p chart

Description
This function is used to calculate the one-sided lower control limit for EWMA-p charts with the correction of measurement error effects. If the two truly classified probabilities pi1 and pi2 are both given as 1, then the corresponding control limit is free of measurement error.

Usage
EWMA_p_one_LCL(p, lambda, n, pi1 = 1, pi2 = pi1, ARL0 = 200, M = 500, error = 10)

Arguments
p The proportion of defectives in the in-control process.
lambda An EWMA smoothing constant, which is a scalar in [0,1].
n The sample size of the data.
pi1 The proportion of observed defectives that are the same as unobserved ones.
pi2 The proportion of observed non-defectives that are the same as unobserved ones.
ARL0 A prespecified average run length (ARL) of a control chart in the in-control process.
M The number of simulation runs for the Monte Carlo method.
error The tolerance for the absolute difference between an iterated ARL value and the prespecified ARL0.

Value
L2 The coefficient of the lower control limit.
hat_ARL0 The estimated in-control average run length based on the given L2.
hat_MRL0 The estimated in-control median run length based on the given L2.
hat_SDRL0 The estimated in-control standard deviation of the run length based on the given L2.
LCL The limiting value of the lower control limit with L2.

References
<NAME>., & <NAME>. (2022). A New p-Control Chart with Measurement Error Correction. arXiv preprint arXiv:2203.03384.

Examples
EWMA_p_one_LCL(0.2,0.05,5,1,1)

EWMA_p_one_UCL The one-sided upper control limit of an EWMA-p chart

Description
This function is used to calculate the one-sided upper control limit for EWMA-p charts with the correction of measurement error effects. If the two truly classified probabilities pi1 and pi2 are both given as 1, then the corresponding control limit is free of measurement error.

Usage
EWMA_p_one_UCL(p, lambda, n, pi1 = 1, pi2 = pi1, ARL0 = 200, M = 500, error = 10)

Arguments
p The proportion of defectives in the in-control process.
lambda An EWMA smoothing constant, which is a scalar in [0,1].
n The sample size of the data.
pi1 The proportion of observed defectives that are the same as unobserved ones.
pi2 The proportion of observed non-defectives that are the same as unobserved ones.
ARL0 A prespecified average run length (ARL) of a control chart in the in-control process.
M The number of simulation runs for the Monte Carlo method.
error The tolerance for the absolute difference between an iterated ARL value and the prespecified ARL0.

Value
L1 The coefficient of the upper control limit.
hat_ARL0 The estimated in-control average run length based on the given L1.
hat_MRL0 The estimated in-control median run length based on the given L1.
hat_SDRL0 The estimated in-control standard deviation of the run length based on the given L1.
UCL The limiting value of the upper control limit with L1.

References
<NAME>., & <NAME>. (2022). A New p-Control Chart with Measurement Error Correction. arXiv preprint arXiv:2203.03384.

Examples
EWMA_p_one_UCL(0.2,0.05,5,1,1)

EWMA_p_two The two-sided control limits of an EWMA-p chart

Description
This function is used to calculate the two-sided control limits for EWMA-p charts with the correction of measurement error effects. If the two truly classified probabilities pi1 and pi2 are both given as 1, then the corresponding control limits are free of measurement error.

Usage
EWMA_p_two(p, lambda, n, pi1 = 1, pi2 = pi1, ARL0 = 200, M = 500, error = 10)

Arguments
p The proportion of defectives in the in-control process.
lambda An EWMA smoothing constant, which is a scalar in [0,1].
n The sample size of the data.
pi1 The proportion of observed defectives that are the same as unobserved ones.
pi2 The proportion of observed non-defectives that are the same as unobserved ones.
ARL0 A prespecified average run length (ARL) of a control chart in the in-control process.
M The number of simulation runs for the Monte Carlo method.
error The tolerance for the absolute difference between an iterated ARL value and the prespecified ARL0.

Value
L1 The coefficient of the upper control limit.
L2 The coefficient of the lower control limit.
hat_ARL0 The estimated in-control average run length based on the given L1 and L2.
hat_MRL0 The estimated in-control median run length based on the given L1 and L2.
hat_SDRL0 The estimated in-control standard deviation of the run length based on the given L1 and L2.
UCL The limiting value of the upper control limit with L1.
LCL The limiting value of the lower control limit with L2.

References
<NAME>., & <NAME>. (2022). A New p-Control Chart with Measurement Error Correction. arXiv preprint arXiv:2203.03384.

Examples
set.seed(2)
EWMA_p_two(0.2,0.05,5,1,1,200,100,20)

ME_data_generate Generate a discrete random variable with measurement error

Description
Generates a discrete random variable with measurement error.

Usage
ME_data_generate(p, n, m, pi1, pi2 = pi1)

Arguments
p The probability of unobserved defectives.
n The sample size of the data.
m The number of observations in the data.
pi1 The proportion of observed defectives that are the same as unobserved ones.
pi2 The proportion of observed non-defectives that are the same as unobserved ones.

Value
real_data The generated data without measurement error.
obs_data The generated data with measurement error.
n The sample size of the generated data.

Examples
ME_data_generate(0.7,50,50,0.95)
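The pieces above can be combined into a small end-to-end sketch (argument values mirror the examples above; it is assumed here that the Value components are returned as a named list, so they can be read with $):

# generate binary data with measurement error, then compute the corrected
# two-sided EWMA-p limits for the same setting
library(EATME)
set.seed(1)
dat <- ME_data_generate(p = 0.2, n = 5, m = 50, pi1 = 0.95)
lim <- EWMA_p_two(p = 0.2, lambda = 0.05, n = 5, pi1 = 0.95, pi2 = 0.95,
                  ARL0 = 200, M = 100, error = 20)
lim$UCL; lim$LCL   # corrected control limits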
pyreadstat 1.2.3 documentation

Welcome to pyreadstat's documentation!
===

Metadata Object Description
===

Each parsing function returns a metadata object in addition to a pandas dataframe. That object contains the following fields:

* notes: notes or documents (text annotations) attached to the file, if any (spss and stata).
* column_names: a list with the names of the columns.
* column_labels: a list with the column labels, if any.
* column_names_to_labels: a dictionary with column_names as keys and column_labels as values.
* file_encoding: a string with the file encoding; may be empty.
* number_columns: an int with the number of columns.
* number_rows: an int with the number of rows. If the metadataonly option was used, it may be None if the number of rows could not be determined. If you need the number of rows in this case, you need to parse the whole file. This happens for xport and por files.
* variable_value_labels: a dict with keys being variable names, and values being a dict with values as keys and labels as values. It may be empty if the dataset did not contain such labels. For sas7bdat files it will be empty unless a sas7bcat was given. It is a combination of value_labels and variable_to_label.
* value_labels: a dict with label name as key and a dict as value, with values as keys and labels as values. In the case of parsing a sas7bcat file, this is where the formats are.
* variable_to_label: a dict with variable name as key and label name as value. Label names are those described in value_labels. Sas7bdat files may have this member populated, and its information can be used to match the information in the value_labels coming from the sas7bcat file.
* original_variable_types: a dict of variable name to variable format in the original file. For debugging purposes.
* readstat_variable_types: a dict of variable name to variable type in the original file as extracted by Readstat. For debugging purposes. In SAS and SPSS, variables will be either double (numeric in the original app) or string (character). Stata has in addition int8, int32 and float types.
* table_name: table name (string).
* file_label: file label (SAS) (string).
* missing_ranges: a dict with keys being variable names. Values are a list of dicts. Each dict contains two keys, 'lo' and 'hi', being the lower and upper boundary of the missing range. Even if the value in both lo and hi is the same, the two elements will always be present. This appears for SPSS (sav) files when using the option user_missing=True: user-defined missing values appear not as nan but as their true value, and this dictionary stores the information about which values are to be considered missing.
* missing_user_values: a dict with keys being variable names. Values are a list of character values (A to Z and _ for SAS, a to z for STATA) representing user-defined missing values in SAS and STATA. This appears when using user_missing=True in read_sas7bdat or read_dta if user-defined missing values are present.
* variable_alignment: a dict with keys being variable names and values being the display alignment: left, center, right or unknown.
* variable_storage_width: a dict with keys being variable names and values being the storage width.
* variable_display_width: a dict with keys being variable names and values being the display width.
* variable_measure: a dict with keys being variable names and values being the measure: nominal, ordinal, scale or unknown.

There are two functions to deal with value labels: set_value_labels and set_catalog_to_sas. You can read about them in the next section.

Functions Documentation
===
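As a quick orientation before the function reference, a minimal usage sketch ("survey.sav" is a placeholder file name):

    import pyreadstat

    # reading a file returns a pandas dataframe plus the metadata object above
    df, meta = pyreadstat.read_sav("survey.sav", user_missing=True)

    print(meta.number_rows, meta.number_columns)
    print(meta.column_names_to_labels)   # {'q1': 'Question 1', ...}
    print(meta.variable_value_labels)    # {'q1': {1.0: 'yes', 2.0: 'no'}, ...}
    print(meta.missing_ranges)           # user-defined missing values (sav files)

    # replace numeric codes by their labels using the metadata
    df_labeled = pyreadstat.set_value_labels(df, meta)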
Crate onhtml
===

by ontology

A dsl (domain specific language) for writing html. This is NOT an html template (there are lots of those). This library is not complete and most possibly will never be -html is huge.

### usage

```
use onhtml::* ;

fn myhomepage1() ->String {
    let mut x = Title("a page") ;
    x += &meta().name("description").Content("bla bla") ;
    x += &Style("some inline css") ;
    x = Head(&x) ;
    let mut y = a("nst").href("nst.com").Download() ;
    y += &P("nst") ;
    x += &Body(&y) ;
    x += &script("").type_("module").Src("/res/main.js") ;
    x = html(&x).Lang("en") ;
    doctype(&x)
}

fn myhomepage2() ->String {
    let mut x = Title("a page") ;
    x += &meta().name("description").Content("bla bla") ;
    x += &link().rel("stylesheet").Href("/mycss.css") ;
    x = Head(&x) ;
    let mut y = a("nst").href("nst.com").data("a","1").Download() ;
    y += &P("nst") ;
    x += &Body(&y) ;
    x += &script("").type_("script").Src("/res/main.js") ;
    x = html(&x).Lang("en") ;
    doctype(&x)
}
```

Notice that -and this is against rust conventions- some functions are capitalized. Every function has 2 variants. You can view the capitalized function as the non capitalized function call, followed by a hypothetical .finish() method to close the builder. i.e.

```
let x = a("some link")
    .href("https://somelink.com")
    .finish()
```

instead of having the above, we have:

```
let x = a("some link")
    .Href("https://somelink.com")
```

This was decided for purely ergonomic reasons. Since everything is a String, if the library misses something, you can always add it manually:

```
let mut x = Div("some content")
x += "<span>some content</span>"
```

**note:** function names that collide with rust's reserved keywords (type, loop etc) are suffixed with an underscore. i.e. type_

Structs
---
T_A, T_ARTICLE, T_ASIDE, T_B, T_BIG, T_BLOCKQUOTE, T_BODY, T_BUTTON, T_CANVAS, T_CODE, T_DATALIST, T_DIV, T_FIGURE, T_FOOTER, T_FORM, T_H1, T_H2, T_H3, T_H4, T_H5, T_H6, T_HEAD, T_HEADER, T_HTML, T_IFRAME, T_IMG, T_INPUT, T_LABEL, T_LI, T_LINK, T_MAIN, T_MARQUEE, T_META, T_NAV, T_OL, T_OPTION, T_P, T_PRE, T_Q, T_SCRIPT, T_SECTION, T_SELECT, T_SMALL, T_SOURCE, T_SPAN, T_STYLE, T_TEMPLATE, T_TEXTAREA, T_TITLE, T_UL, T_VIDEO

Functions
---
A, Article, Aside, B, Big, Blockquote, Body, Button, Canvas, Code, Datalist, Div, Figure, Footer, Form, H1, H2, H3, H4, H5, H6, Head, Header, Html, Iframe, Img, Input, Label, Li, Main, Marquee, Nav, Ol, Option, P, Pre, Q, Script, Section, Select, Small, Source, Span, Style, Template, Textarea, Title, Ul, Video, and the lowercase builder counterparts: a, article, aside, b, big, blockquote, body, button, canvas, code, datalist, div, doctype, figure, footer, form, h1, h2, h3, h4, h5, h6, head, header, html, iframe, img, input, label, li, link, main_, marquee, meta, nav, ol, option, p, pre, q, script, section, select, small, source, span, style, template, textarea, title, ul, video
Struct onhtml::T_A
===

```
pub struct T_A(_);
```

Implementations
---

impl T_A — global-attribute builders; each lowercase method returns &mut Self, the capitalized variant closes the builder and returns String:

- id(val: &str) / Id(val: &str)
- class(val: &str) / Class(val: &str)
- style(val: &str) / Style(val: &str)
- onclick(val: &str) / Onclick(val: &str)
- onload(val: &str) / Onload(val: &str)
- data(key: &str, val: &str) / Data(key: &str, val: &str)
- contenteditable() / Contenteditable()
- draggable() / Draggable()
- tabindex(n: i32) / Tabindex(n: i32)
- autocorrect(b: bool) / Autocorrect(b: bool)
- autocapitalize(b: bool) / Autocapitalize(b: bool)
- spellcheck(b: bool) / Spellcheck(b: bool)
- lang(val: &str) / Lang(val: &str)

impl T_A — anchor-specific builders:

- download() / Download()
- href(val: &str) / Href(x: &str)
- type_(x: &str) / Type(x: &str)
- hreflang(two_digit_code: &str) / Hreflang(x: &str)
- ping(urls: Vec<&str>) / Ping(xs: Vec<&str>)
- target(val: &str) / Target(x: &str)
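A short sketch following the crate's own examples (names and attribute values here are illustrative): lowercase methods keep building, and a single capitalized call finishes the element into a String.

```
use onhtml::* ;

fn nav_link() -> String {
    a("docs")
        .href("/docs")
        .class("nav-item")
        .data("section", "1")
        .Target("_blank") // capitalized variant closes the builder
}
```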
Struct onhtml::T_ARTICLE === ``` pub struct T_ARTICLE(_); ``` Implementations --- source### impl T_ARTICLE source#### pub fn id(&mut self, val: &str) -> &mutSelf source#### pub fn Id(&mut self, val: &str) -> String source#### pub fn class(&mut self, val: &str) -> &mutSelf source#### pub fn Class(&mut self, val: &str) -> String source#### pub fn style(&mut self, val: &str) -> &mutSelf source#### pub fn Style(&mut self, val: &str) -> String source#### pub fn onclick(&mut self, val: &str) -> &mutSelf source#### pub fn Onclick(&mut self, val: &str) -> String source#### pub fn onload(&mut self, val: &str) -> &mutSelf source#### pub fn Onload(&mut self, val: &str) -> String source#### pub fn data(&mut self, key: &str, val: &str) -> &mutSelf source#### pub fn Data(&mut self, key: &str, val: &str) -> String source#### pub fn contenteditable(&mut self) -> &mutSelf source#### pub fn Contenteditable(&mut self) -> String source#### pub fn draggable(&mut self) -> &mutSelf source#### pub fn Draggable(&mut self) -> String source#### pub fn tabindex(&mut self, n: i32) -> &mutSelf source#### pub fn Tabindex(&mut self, n: i32) -> String source#### pub fn autocorrect(&mut self, b: bool) -> &mutSelf source#### pub fn Autocorrect(&mut self, b: bool) -> String source#### pub fn autocapitalize(&mut self, b: bool) -> &mutSelf source#### pub fn Autocapitalize(&mut self, b: bool) -> String source#### pub fn spellcheck(&mut self, b: bool) -> &mutSelf source#### pub fn Spellcheck(&mut self, b: bool) -> String source#### pub fn lang(&mut self, val: &str) -> &mutSelf source#### pub fn Lang(&mut self, val: &str) -> String Auto Trait Implementations --- ### impl RefUnwindSafe for T_ARTICLE ### impl Send for T_ARTICLE ### impl Sync for T_ARTICLE ### impl Unpin for T_ARTICLE ### impl UnwindSafe for T_ARTICLE Blanket Implementations --- source### impl<T> Any for T where    T: 'static + ?Sized, source#### pub fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. Read more source### impl<T> Borrow<T> for T where    T: ?Sized, const: unstable · source#### pub fn borrow(&self) -> &T Immutably borrows from an owned value. Read more source### impl<T> BorrowMut<T> for T where    T: ?Sized, const: unstable · source#### pub fn borrow_mut(&mut self) -> &mutT Mutably borrows from an owned value. Read more source### impl<T> From<T> for T const: unstable · source#### pub fn from(t: T) -> T Performs the conversion. source### impl<T, U> Into<U> for T where    U: From<T>, const: unstable · source#### pub fn into(self) -> U Performs the conversion. source### impl<T, U> TryFrom<U> for T where    U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error. const: unstable · source#### pub fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion. source### impl<T, U> TryInto<U> for T where    U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error. const: unstable · source#### pub fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion. 
Struct onhtml::T_ASIDE === ``` pub struct T_ASIDE(_); ``` Implementations --- source### impl T_ASIDE source#### pub fn id(&mut self, val: &str) -> &mutSelf source#### pub fn Id(&mut self, val: &str) -> String source#### pub fn class(&mut self, val: &str) -> &mutSelf source#### pub fn Class(&mut self, val: &str) -> String source#### pub fn style(&mut self, val: &str) -> &mutSelf source#### pub fn Style(&mut self, val: &str) -> String source#### pub fn onclick(&mut self, val: &str) -> &mutSelf source#### pub fn Onclick(&mut self, val: &str) -> String source#### pub fn onload(&mut self, val: &str) -> &mutSelf source#### pub fn Onload(&mut self, val: &str) -> String source#### pub fn data(&mut self, key: &str, val: &str) -> &mutSelf source#### pub fn Data(&mut self, key: &str, val: &str) -> String source#### pub fn contenteditable(&mut self) -> &mutSelf source#### pub fn Contenteditable(&mut self) -> String source#### pub fn draggable(&mut self) -> &mutSelf source#### pub fn Draggable(&mut self) -> String source#### pub fn tabindex(&mut self, n: i32) -> &mutSelf source#### pub fn Tabindex(&mut self, n: i32) -> String source#### pub fn autocorrect(&mut self, b: bool) -> &mutSelf source#### pub fn Autocorrect(&mut self, b: bool) -> String source#### pub fn autocapitalize(&mut self, b: bool) -> &mutSelf source#### pub fn Autocapitalize(&mut self, b: bool) -> String source#### pub fn spellcheck(&mut self, b: bool) -> &mutSelf source#### pub fn Spellcheck(&mut self, b: bool) -> String source#### pub fn lang(&mut self, val: &str) -> &mutSelf source#### pub fn Lang(&mut self, val: &str) -> String Auto Trait Implementations --- ### impl RefUnwindSafe for T_ASIDE ### impl Send for T_ASIDE ### impl Sync for T_ASIDE ### impl Unpin for T_ASIDE ### impl UnwindSafe for T_ASIDE Blanket Implementations --- source### impl<T> Any for T where    T: 'static + ?Sized, source#### pub fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. Read more source### impl<T> Borrow<T> for T where    T: ?Sized, const: unstable · source#### pub fn borrow(&self) -> &T Immutably borrows from an owned value. Read more source### impl<T> BorrowMut<T> for T where    T: ?Sized, const: unstable · source#### pub fn borrow_mut(&mut self) -> &mutT Mutably borrows from an owned value. Read more source### impl<T> From<T> for T const: unstable · source#### pub fn from(t: T) -> T Performs the conversion. source### impl<T, U> Into<U> for T where    U: From<T>, const: unstable · source#### pub fn into(self) -> U Performs the conversion. source### impl<T, U> TryFrom<U> for T where    U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error. const: unstable · source#### pub fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion. source### impl<T, U> TryInto<U> for T where    U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error. const: unstable · source#### pub fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion. 
Struct onhtml::T_B === ``` pub struct T_B(_); ``` Implementations --- source### impl T_B source#### pub fn id(&mut self, val: &str) -> &mutSelf source#### pub fn Id(&mut self, val: &str) -> String source#### pub fn class(&mut self, val: &str) -> &mutSelf source#### pub fn Class(&mut self, val: &str) -> String source#### pub fn style(&mut self, val: &str) -> &mutSelf source#### pub fn Style(&mut self, val: &str) -> String source#### pub fn onclick(&mut self, val: &str) -> &mutSelf source#### pub fn Onclick(&mut self, val: &str) -> String source#### pub fn onload(&mut self, val: &str) -> &mutSelf source#### pub fn Onload(&mut self, val: &str) -> String source#### pub fn data(&mut self, key: &str, val: &str) -> &mutSelf source#### pub fn Data(&mut self, key: &str, val: &str) -> String source#### pub fn contenteditable(&mut self) -> &mutSelf source#### pub fn Contenteditable(&mut self) -> String source#### pub fn draggable(&mut self) -> &mutSelf source#### pub fn Draggable(&mut self) -> String source#### pub fn tabindex(&mut self, n: i32) -> &mutSelf source#### pub fn Tabindex(&mut self, n: i32) -> String source#### pub fn autocorrect(&mut self, b: bool) -> &mutSelf source#### pub fn Autocorrect(&mut self, b: bool) -> String source#### pub fn autocapitalize(&mut self, b: bool) -> &mutSelf source#### pub fn Autocapitalize(&mut self, b: bool) -> String source#### pub fn spellcheck(&mut self, b: bool) -> &mutSelf source#### pub fn Spellcheck(&mut self, b: bool) -> String source#### pub fn lang(&mut self, val: &str) -> &mutSelf source#### pub fn Lang(&mut self, val: &str) -> String Auto Trait Implementations --- ### impl RefUnwindSafe for T_B ### impl Send for T_B ### impl Sync for T_B ### impl Unpin for T_B ### impl UnwindSafe for T_B Blanket Implementations --- source### impl<T> Any for T where    T: 'static + ?Sized, source#### pub fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. Read more source### impl<T> Borrow<T> for T where    T: ?Sized, const: unstable · source#### pub fn borrow(&self) -> &T Immutably borrows from an owned value. Read more source### impl<T> BorrowMut<T> for T where    T: ?Sized, const: unstable · source#### pub fn borrow_mut(&mut self) -> &mutT Mutably borrows from an owned value. Read more source### impl<T> From<T> for T const: unstable · source#### pub fn from(t: T) -> T Performs the conversion. source### impl<T, U> Into<U> for T where    U: From<T>, const: unstable · source#### pub fn into(self) -> U Performs the conversion. source### impl<T, U> TryFrom<U> for T where    U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error. const: unstable · source#### pub fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion. source### impl<T, U> TryInto<U> for T where    U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error. const: unstable · source#### pub fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion. 
Struct onhtml::T_BIG
===

```
pub struct T_BIG(_);
```

Implementations
---

### impl T_BIG

Provides the shared global-attribute methods, with the same signatures as `T_ASIDE` above.

Auto Trait Implementations
---

### impl RefUnwindSafe for T_BIG
### impl Send for T_BIG
### impl Sync for T_BIG
### impl Unpin for T_BIG
### impl UnwindSafe for T_BIG

Blanket Implementations
---

Identical to `T_ASIDE` (`Any`, `Borrow`, `BorrowMut`, `From`, `Into`, `TryFrom`, `TryInto`).
Struct onhtml::T_BLOCKQUOTE
===

```
pub struct T_BLOCKQUOTE(_);
```

Implementations
---

### impl T_BLOCKQUOTE

Provides the shared global-attribute methods, with the same signatures as `T_ASIDE` above.

Element-specific methods:

```
pub fn cite(&mut self, x: &str) -> &mut Self
pub fn Cite(&mut self, x: &str) -> String
```

Auto Trait Implementations
---

### impl RefUnwindSafe for T_BLOCKQUOTE
### impl Send for T_BLOCKQUOTE
### impl Sync for T_BLOCKQUOTE
### impl Unpin for T_BLOCKQUOTE
### impl UnwindSafe for T_BLOCKQUOTE

Blanket Implementations
---

Identical to `T_ASIDE` (`Any`, `Borrow`, `BorrowMut`, `From`, `Into`, `TryFrom`, `TryInto`).
Struct onhtml::T_BODY
===

```
pub struct T_BODY(_);
```

Implementations
---

### impl T_BODY

Provides the shared global-attribute methods, with the same signatures as `T_ASIDE` above.

Auto Trait Implementations
---

### impl RefUnwindSafe for T_BODY
### impl Send for T_BODY
### impl Sync for T_BODY
### impl Unpin for T_BODY
### impl UnwindSafe for T_BODY

Blanket Implementations
---

Identical to `T_ASIDE` (`Any`, `Borrow`, `BorrowMut`, `From`, `Into`, `TryFrom`, `TryInto`).
Struct onhtml::T_BUTTON
===

```
pub struct T_BUTTON(_);
```

Implementations
---

### impl T_BUTTON

Provides the shared global-attribute methods, with the same signatures as `T_ASIDE` above.

Element-specific methods:

```
pub fn formaction(&mut self, val: &str) -> &mut Self
pub fn Formaction(&mut self, x: &str) -> String

pub fn autofocus(&mut self) -> &mut Self
pub fn Autofocus(&mut self) -> String

pub fn disabled(&mut self) -> &mut Self
pub fn Disabled(&mut self) -> String

pub fn name(&mut self, x: &str) -> &mut Self
pub fn Name(&mut self, x: &str) -> String

pub fn form(&mut self, x: &str) -> &mut Self
pub fn Form(&mut self, x: &str) -> String

pub fn value(&mut self, x: &str) -> &mut Self
pub fn Value(&mut self, x: &str) -> String

pub fn type_(&mut self, val: &str) -> &mut Self
pub fn Type(&mut self, x: &str) -> String
```

Auto Trait Implementations
---

### impl RefUnwindSafe for T_BUTTON
### impl Send for T_BUTTON
### impl Sync for T_BUTTON
### impl Unpin for T_BUTTON
### impl UnwindSafe for T_BUTTON

Blanket Implementations
---

Identical to `T_ASIDE` (`Any`, `Borrow`, `BorrowMut`, `From`, `Into`, `TryFrom`, `TryInto`).
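The button-specific setters compose with the shared ones in the same chaining style. Below is a hedged sketch: `StubButton` is a stand-in invented here (these pages do not show onhtml's constructors), and only the method names and shapes follow the list above. Note that the HTML `type` attribute is exposed as `type_` because `type` is a reserved word in Rust.

```
// Stand-in defined for illustration; not onhtml's real type.
struct StubButton(String);

impl StubButton {
    fn attr(&mut self, name: &str, val: &str) -> &mut Self {
        self.0.push_str(&format!(" {}=\"{}\"", name, val));
        self
    }
    fn type_(&mut self, val: &str) -> &mut Self { self.attr("type", val) }
    fn name(&mut self, x: &str) -> &mut Self { self.attr("name", x) }
    fn value(&mut self, x: &str) -> &mut Self { self.attr("value", x) }
    // Boolean attributes take no value, matching the zero-argument
    // signatures documented above.
    fn disabled(&mut self) -> &mut Self {
        self.0.push_str(" disabled");
        self
    }
}

fn main() {
    let mut b = StubButton(String::new());
    b.type_("submit").name("save").value("1").disabled();
    assert_eq!(b.0, r#" type="submit" name="save" value="1" disabled"#);
}
```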
Struct onhtml::T_CANVAS
===

```
pub struct T_CANVAS(_);
```

Implementations
---

### impl T_CANVAS

Provides the shared global-attribute methods, with the same signatures as `T_ASIDE` above.

Auto Trait Implementations
---

### impl RefUnwindSafe for T_CANVAS
### impl Send for T_CANVAS
### impl Sync for T_CANVAS
### impl Unpin for T_CANVAS
### impl UnwindSafe for T_CANVAS

Blanket Implementations
---

Identical to `T_ASIDE` (`Any`, `Borrow`, `BorrowMut`, `From`, `Into`, `TryFrom`, `TryInto`).
Struct onhtml::T_CODE
===

```
pub struct T_CODE(_);
```

Implementations
---

### impl T_CODE

Provides the shared global-attribute methods, with the same signatures as `T_ASIDE` above.

Auto Trait Implementations
---

### impl RefUnwindSafe for T_CODE
### impl Send for T_CODE
### impl Sync for T_CODE
### impl Unpin for T_CODE
### impl UnwindSafe for T_CODE

Blanket Implementations
---

Identical to `T_ASIDE` (`Any`, `Borrow`, `BorrowMut`, `From`, `Into`, `TryFrom`, `TryInto`).
Struct onhtml::T_DATALIST
===

```
pub struct T_DATALIST(_);
```

Implementations
---

### impl T_DATALIST

Provides the shared global-attribute methods, with the same signatures as `T_ASIDE` above.

Auto Trait Implementations
---

### impl RefUnwindSafe for T_DATALIST
### impl Send for T_DATALIST
### impl Sync for T_DATALIST
### impl Unpin for T_DATALIST
### impl UnwindSafe for T_DATALIST

Blanket Implementations
---

Identical to `T_ASIDE` (`Any`, `Borrow`, `BorrowMut`, `From`, `Into`, `TryFrom`, `TryInto`).
Struct onhtml::T_DIV
===

```
pub struct T_DIV(_);
```

Implementations
---

### impl T_DIV

Provides the shared global-attribute methods, with the same signatures as `T_ASIDE` above.

Auto Trait Implementations
---

### impl RefUnwindSafe for T_DIV
### impl Send for T_DIV
### impl Sync for T_DIV
### impl Unpin for T_DIV
### impl UnwindSafe for T_DIV

Blanket Implementations
---

Identical to `T_ASIDE` (`Any`, `Borrow`, `BorrowMut`, `From`, `Into`, `TryFrom`, `TryInto`).
Struct onhtml::T_FIGURE
===

```
pub struct T_FIGURE(_);
```

Implementations
---

### impl T_FIGURE

Provides the shared global-attribute methods, with the same signatures as `T_ASIDE` above.

Auto Trait Implementations
---

### impl RefUnwindSafe for T_FIGURE
### impl Send for T_FIGURE
### impl Sync for T_FIGURE
### impl Unpin for T_FIGURE
### impl UnwindSafe for T_FIGURE

Blanket Implementations
---

Identical to `T_ASIDE` (`Any`, `Borrow`, `BorrowMut`, `From`, `Into`, `TryFrom`, `TryInto`).
Struct onhtml::T_FOOTER
===

```
pub struct T_FOOTER(_);
```

Implementations
---

### impl T_FOOTER

Provides the shared global-attribute methods, with the same signatures as `T_ASIDE` above.

Auto Trait Implementations
---

### impl RefUnwindSafe for T_FOOTER
### impl Send for T_FOOTER
### impl Sync for T_FOOTER
### impl Unpin for T_FOOTER
### impl UnwindSafe for T_FOOTER

Blanket Implementations
---

Identical to `T_ASIDE` (`Any`, `Borrow`, `BorrowMut`, `From`, `Into`, `TryFrom`, `TryInto`).
Struct onhtml::T_FORM
===

```
pub struct T_FORM(_);
```

Implementations
---

### impl T_FORM

Provides the shared global-attribute methods, with the same signatures as `T_ASIDE` above.

Element-specific methods:

```
pub fn target(&mut self, val: &str) -> &mut Self
pub fn Target(&mut self, x: &str) -> String

pub fn rel(&mut self, val: &str) -> &mut Self
pub fn Rel(&mut self, x: &str) -> String

pub fn novalidate(&mut self) -> &mut Self
pub fn Novalidate(&mut self) -> String

pub fn name(&mut self, val: &str) -> &mut Self
pub fn Name(&mut self, x: &str) -> String

pub fn onsubmit(&mut self, val: &str) -> &mut Self
pub fn Onsubmit(&mut self, x: &str) -> String

pub fn action(&mut self, val: &str) -> &mut Self
pub fn Action(&mut self, x: &str) -> String

pub fn method(&mut self, val: &str) -> &mut Self
pub fn Method(&mut self, x: &str) -> String

pub fn enctype(&mut self, val: &str) -> &mut Self
pub fn Enctype(&mut self, x: &str) -> String

pub fn autocomplete(&mut self, b: bool) -> &mut Self
pub fn Autocomplete(&mut self, b: bool) -> String
```

Auto Trait Implementations
---

### impl RefUnwindSafe for T_FORM
### impl Send for T_FORM
### impl Sync for T_FORM
### impl Unpin for T_FORM
### impl UnwindSafe for T_FORM

Blanket Implementations
---

Identical to `T_ASIDE` (`Any`, `Borrow`, `BorrowMut`, `From`, `Into`, `TryFrom`, `TryInto`).
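A similar sketch for the form-specific setters, contrasting the zero-argument boolean attribute `novalidate()` with the `bool`-taking `autocomplete`. `StubForm` is invented for illustration, and the mapping of `autocomplete(bool)` onto `"on"`/`"off"` is an assumption; the documentation above specifies only the signature.

```
// Stand-in defined for illustration; not onhtml's real type.
struct StubForm(String);

impl StubForm {
    fn attr(&mut self, name: &str, val: &str) -> &mut Self {
        self.0.push_str(&format!(" {}=\"{}\"", name, val));
        self
    }
    fn action(&mut self, val: &str) -> &mut Self { self.attr("action", val) }
    fn method(&mut self, val: &str) -> &mut Self { self.attr("method", val) }
    // `novalidate` is a boolean attribute, hence the zero-argument setter.
    fn novalidate(&mut self) -> &mut Self {
        self.0.push_str(" novalidate");
        self
    }
    // Assumed rendering of the bool argument; the docs show only the signature.
    fn autocomplete(&mut self, b: bool) -> &mut Self {
        self.attr("autocomplete", if b { "on" } else { "off" })
    }
}

fn main() {
    let mut f = StubForm(String::new());
    f.action("/save").method("post").novalidate().autocomplete(false);
    assert_eq!(f.0, r#" action="/save" method="post" novalidate autocomplete="off""#);
}
```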
Struct onhtml::T_H1
===

```
pub struct T_H1(_);
```

Implementations
---

### impl T_H1

Provides the shared global-attribute methods, with the same signatures as `T_ASIDE` above.

Auto Trait Implementations
---

### impl RefUnwindSafe for T_H1
### impl Send for T_H1
### impl Sync for T_H1
### impl Unpin for T_H1
### impl UnwindSafe for T_H1

Blanket Implementations
---

Identical to `T_ASIDE` (`Any`, `Borrow`, `BorrowMut`, `From`, `Into`, `TryFrom`, `TryInto`).
Struct onhtml::T_H2
===

```
pub struct T_H2(_);
```

Implementations
---

### impl T_H2

Provides the shared global-attribute methods, with the same signatures as `T_ASIDE` above.

Auto Trait Implementations
---

### impl RefUnwindSafe for T_H2
### impl Send for T_H2
### impl Sync for T_H2
### impl Unpin for T_H2
### impl UnwindSafe for T_H2

Blanket Implementations
---

Identical to `T_ASIDE` (`Any`, `Borrow`, `BorrowMut`, `From`, `Into`, `TryFrom`, `TryInto`).
Struct onhtml::T_H3
===

```
pub struct T_H3(_);
```

Implementations
---

### impl T_H3

Provides the shared global-attribute methods, with the same signatures as `T_ASIDE` above.

Auto Trait Implementations
---

### impl RefUnwindSafe for T_H3
### impl Send for T_H3
### impl Sync for T_H3
### impl Unpin for T_H3
### impl UnwindSafe for T_H3

Blanket Implementations
---

Identical to `T_ASIDE` (`Any`, `Borrow`, `BorrowMut`, `From`, `Into`, `TryFrom`, `TryInto`).
Struct onhtml::T_H4
===

```
pub struct T_H4(_);
```

Implementations
---

### impl T_H4

Provides the shared global-attribute methods, with the same signatures as `T_ASIDE` above.

Auto Trait Implementations
---

### impl RefUnwindSafe for T_H4
### impl Send for T_H4
### impl Sync for T_H4
### impl Unpin for T_H4
### impl UnwindSafe for T_H4

Blanket Implementations
---

Identical to `T_ASIDE` (`Any`, `Borrow`, `BorrowMut`, `From`, `Into`, `TryFrom`, `TryInto`).
Struct onhtml::T_H5
===

```
pub struct T_H5(_);
```

Implementations
---

### impl T_H5

Provides the shared global-attribute methods, with the same signatures as `T_ASIDE` above.

Auto Trait Implementations
---

### impl RefUnwindSafe for T_H5
### impl Send for T_H5
### impl Sync for T_H5
### impl Unpin for T_H5
### impl UnwindSafe for T_H5

Blanket Implementations
---

Identical to `T_ASIDE` (`Any`, `Borrow`, `BorrowMut`, `From`, `Into`, `TryFrom`, `TryInto`).
Struct onhtml::T_H6
===

```
pub struct T_H6(_);
```

Implementations
---

### impl T_H6

Provides the shared global-attribute methods, with the same signatures as `T_ASIDE` above.

Auto Trait Implementations
---

### impl RefUnwindSafe for T_H6
### impl Send for T_H6
### impl Sync for T_H6
### impl Unpin for T_H6
### impl UnwindSafe for T_H6

Blanket Implementations
---

Identical to `T_ASIDE` (`Any`, `Borrow`, `BorrowMut`, `From`, `Into`, `TryFrom`, `TryInto`).
Struct onhtml::T_HEAD
===

```
pub struct T_HEAD(_);
```

Implementations
---

### impl T_HEAD

Provides the shared global-attribute methods, with the same signatures as `T_ASIDE` above.

Auto Trait Implementations
---

### impl RefUnwindSafe for T_HEAD
### impl Send for T_HEAD
### impl Sync for T_HEAD
### impl Unpin for T_HEAD
### impl UnwindSafe for T_HEAD

Blanket Implementations
---

Identical to `T_ASIDE` (`Any`, `Borrow`, `BorrowMut`, `From`, `Into`, `TryFrom`, `TryInto`).
Struct onhtml::T_HEADER
===

```
pub struct T_HEADER(_);
```

Implementations
---

### impl T_HEADER

Implements the shared set of global-attribute builders documented under `T_H6`.

Auto Trait Implementations
---

`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, `UnwindSafe`
Struct onhtml::T_HTML
===

```
pub struct T_HTML(_);
```

Implementations
---

### impl T_HTML

Implements the shared set of global-attribute builders documented under `T_H6`.

Auto Trait Implementations
---

`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, `UnwindSafe`
Struct onhtml::T_IFRAME
===

```
pub struct T_IFRAME(_);
```

Implementations
---

### impl T_IFRAME

Implements the shared set of global-attribute builders documented under `T_H6`.

Element-specific methods:

- `pub fn src(&mut self, val: &str) -> &mut Self`
- `pub fn loading(&mut self, val: &str) -> &mut Self`
- `pub fn allowfullscreen(&mut self) -> &mut Self`
- `pub fn Allowfullscreen(&mut self) -> String`
- `pub fn Src(&mut self, x: &str) -> String`
- `pub fn Loading(&mut self, x: &str) -> String`

Auto Trait Implementations
---

`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, `UnwindSafe`
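A sketch of the iframe-specific builders, under the same assumption that the `T_IFRAME` value is obtained elsewhere in the crate; the URL is a placeholder.

```
fn embed_video(frame: &mut T_IFRAME) {
    frame
        .src("https://example.com/embed/42") // placeholder source URL
        .loading("lazy")                     // loading hint as a string value
        .allowfullscreen();                  // boolean attribute, no argument
}
```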
Struct onhtml::T_IMG
===

```
pub struct T_IMG(_);
```

Implementations
---

### impl T_IMG

Implements the shared set of global-attribute builders documented under `T_H6`.

Element-specific methods:

- `pub fn src(&mut self, val: &str) -> &mut Self`
- `pub fn Src(&mut self, x: &str) -> String`

Auto Trait Implementations
---

`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, `UnwindSafe`
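Shared and element-specific builders chain freely, as this sketch (borrowing a `T_IMG` value assumed to exist) shows; the path is a placeholder.

```
fn thumbnail(img: &mut T_IMG) {
    img.class("thumb")          // shared builder from the global set
        .src("/img/photo.png"); // element-specific builder
}
```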
Struct onhtml::T_INPUT
===

```
pub struct T_INPUT(_);
```

Implementations
---

### impl T_INPUT

Implements the shared set of global-attribute builders documented under `T_H6`.

Element-specific methods:

- `pub fn multiple(&mut self) -> &mut Self`
- `pub fn minlength(&mut self, val: i32) -> &mut Self`
- `pub fn value(&mut self, val: &str) -> &mut Self`
- `pub fn formaction(&mut self, val: &str) -> &mut Self`
- `pub fn title(&mut self, val: &str) -> &mut Self`
- `pub fn size(&mut self, val: i32) -> &mut Self`
- `pub fn max(&mut self, val: &str) -> &mut Self`
- `pub fn list(&mut self, id: &str) -> &mut Self`
- `pub fn name(&mut self, val: &str) -> &mut Self`
- `pub fn placeholder(&mut self, val: &str) -> &mut Self`
- `pub fn readonly(&mut self) -> &mut Self`
- `pub fn min(&mut self, val: &str) -> &mut Self`
- `pub fn checked(&mut self) -> &mut Self`
- `pub fn autofocus(&mut self) -> &mut Self`
- `pub fn disabled(&mut self) -> &mut Self`
- `pub fn autocomplete(&mut self, val: bool) -> &mut Self`
- `pub fn alt(&mut self, val: &str) -> &mut Self`
- `pub fn accept(&mut self, val: &str) -> &mut Self`
- `pub fn type_(&mut self, val: &str) -> &mut Self`
- `pub fn onchange(&mut self, val: &str) -> &mut Self`
- `pub fn oninput(&mut self, val: &str) -> &mut Self`
- `pub fn oninvalid(&mut self, val: &str) -> &mut Self`
- `pub fn required(&mut self) -> &mut Self`
- `pub fn Multiple(&mut self) -> String`
- `pub fn Readonly(&mut self) -> String`
- `pub fn Checked(&mut self) -> String`
- `pub fn Autofocus(&mut self) -> String`
- `pub fn Disabled(&mut self) -> String`
- `pub fn Required(&mut self) -> String`
- `pub fn Value(&mut self, x: &str) -> String`
- `pub fn Formaction(&mut self, x: &str) -> String`
- `pub fn Title(&mut self, x: &str) -> String`
- `pub fn Max(&mut self, x: &str) -> String`
- `pub fn List(&mut self, x: &str) -> String`
- `pub fn Name(&mut self, x: &str) -> String`
- `pub fn Placeholder(&mut self, x: &str) -> String`
- `pub fn Min(&mut self, x: &str) -> String`
- `pub fn Alt(&mut self, x: &str) -> String`
- `pub fn Accept(&mut self, x: &str) -> String`
- `pub fn Type(&mut self, x: &str) -> String`
- `pub fn Onchange(&mut self, x: &str) -> String`
- `pub fn Oninput(&mut self, x: &str) -> String`
- `pub fn Oninvalid(&mut self, x: &str) -> String`
- `pub fn Minlength(&mut self, x: i32) -> String`
- `pub fn Size(&mut self, x: i32) -> String`
- `pub fn Autocomplete(&mut self, b: bool) -> String`

Auto Trait Implementations
---

`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, `UnwindSafe`
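A sketch of a form field, again assuming the `T_INPUT` value comes from elsewhere; `validate(this)` is a hypothetical handler name. Note the trailing underscore on `type_`, which avoids the Rust keyword `type`.

```
fn email_field(input: &mut T_INPUT) {
    input
        .type_("email")             // `type` is a Rust keyword, hence `type_`
        .name("email")
        .placeholder("you@example.com")
        .minlength(6)
        .size(30)
        .required()                 // boolean attributes take no argument
        .oninput("validate(this)"); // hypothetical inline handler string
}
```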
Struct onhtml::T_LABEL
===

```
pub struct T_LABEL(_);
```

Implementations
---

### impl T_LABEL

Implements the shared set of global-attribute builders documented under `T_H6`.

Element-specific methods:

- `pub fn for_(&mut self, x: &str) -> &mut Self`
- `pub fn For(&mut self, x: &str) -> String`

Auto Trait Implementations
---

`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, `UnwindSafe`
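The trailing underscore in `for_` avoids the Rust keyword `for`. A brief sketch with a borrowed `T_LABEL` and illustrative values:

```
fn field_caption(label: &mut T_LABEL) {
    label.class("form-label") // shared builder
        .for_("email");       // `for` is a Rust keyword, hence `for_`
}
```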
Struct onhtml::T_LI
===

```
pub struct T_LI(_);
```

Implementations
---

### impl T_LI

Implements the shared set of global-attribute builders documented under `T_H6`.

Element-specific methods:

- `pub fn value(&mut self, n: u32) -> &mut Self`
- `pub fn Value(&mut self, x: u32) -> String`

Auto Trait Implementations
---

`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, `UnwindSafe`
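Unlike most attribute setters in this crate, `value` on `T_LI` takes a `u32` rather than a `&str`. A one-line sketch with a borrowed `T_LI`:

```
fn numbered_item(li: &mut T_LI) {
    li.value(10); // numeric list-item value, not a string
}
```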
Struct onhtml::T_LINK
===

```
pub struct T_LINK(_);
```

Implementations
---

### impl T_LINK

- `pub fn rel(&mut self, val: &str) -> &mut Self`
- `pub fn href(&mut self, val: &str) -> &mut Self`
- `pub fn Rel(&mut self, x: &str) -> String`
- `pub fn Href(&mut self, x: &str) -> String`

Auto Trait Implementations
---

`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, `UnwindSafe`
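A sketch of a stylesheet link, assuming the `T_LINK` value is created elsewhere; the path is a placeholder:

```
fn stylesheet(link: &mut T_LINK) {
    link.rel("stylesheet")        // relationship of the linked resource
        .href("/static/site.css"); // placeholder path
}
```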
Struct onhtml::T_MAIN
===

```
pub struct T_MAIN(_);
```

Implementations
---

### impl T_MAIN

Implements the shared set of global-attribute builders documented under `T_H6`.

Auto Trait Implementations
---

`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, `UnwindSafe`
Struct onhtml::T_MARQUEE
===

```
pub struct T_MARQUEE(_);
```

Implementations
---

### impl T_MARQUEE

Implements the shared set of global-attribute builders documented under `T_H6`.

Element-specific methods:

- `pub fn behavior(&mut self, val: &str) -> &mut Self`
- `pub fn direction(&mut self, val: &str) -> &mut Self`
- `pub fn loop_(&mut self, x: u32) -> &mut Self`
- `pub fn scrollamount(&mut self, x: u32) -> &mut Self`
- `pub fn hspace(&mut self, x: &str) -> &mut Self`
- `pub fn Behavior(&mut self, x: &str) -> String`
- `pub fn Direction(&mut self, x: &str) -> String`
- `pub fn Hspace(&mut self, x: &str) -> String`
- `pub fn Loop(&mut self, x: u32) -> String`
- `pub fn Scrollamount(&mut self, x: u32) -> String`

Auto Trait Implementations
---

`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, `UnwindSafe`
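`loop_` carries a trailing underscore to avoid the Rust keyword `loop`. An illustrative sketch with a borrowed `T_MARQUEE`:

```
fn banner(m: &mut T_MARQUEE) {
    m.behavior("scroll")
        .direction("left")
        .loop_(3)         // `loop` is a Rust keyword, hence `loop_`
        .scrollamount(6); // numeric scroll step
}
```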
Struct onhtml::T_META
===

```
pub struct T_META(_);
```

Implementations
---

### impl T_META

- `pub fn charset(&mut self, val: &str) -> &mut Self`
- `pub fn name(&mut self, val: &str) -> &mut Self`
- `pub fn content(&mut self, val: &str) -> &mut Self`
- `pub fn property(&mut self, val: &str) -> &mut Self`
- `pub fn httpequiv(&mut self, val: &str) -> &mut Self`
- `pub fn Charset(&mut self, x: &str) -> String`
- `pub fn Name(&mut self, x: &str) -> String`
- `pub fn Content(&mut self, x: &str) -> String`
- `pub fn Property(&mut self, x: &str) -> String`
- `pub fn Httpequiv(&mut self, x: &str) -> String`

Auto Trait Implementations
---

`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, `UnwindSafe`
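A sketch of two common meta tags, assuming the two `T_META` values are constructed elsewhere:

```
fn page_meta(charset: &mut T_META, viewport: &mut T_META) {
    charset.charset("utf-8"); // document character encoding
    viewport
        .name("viewport")
        .content("width=device-width, initial-scale=1");
}
```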
Struct onhtml::T_NAV
===

```
pub struct T_NAV(_);
```

Implementations
---

### impl T_NAV

Implements the shared set of global-attribute builders documented under `T_H6`.

Auto Trait Implementations
---

`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, `UnwindSafe`
Struct onhtml::T_OL
===

```
pub struct T_OL(_);
```

Implementations
---

### impl T_OL

Implements the shared set of global-attribute builders documented under `T_H6`.

Element-specific methods:

- `pub fn reversed(&mut self) -> &mut Self`
- `pub fn start(&mut self, n: u32) -> &mut Self`
- `pub fn type_(&mut self, val: &str) -> &mut Self`
- `pub fn Reversed(&mut self) -> String`
- `pub fn Type(&mut self, x: &str) -> String`
- `pub fn Start(&mut self, x: u32) -> String`

Auto Trait Implementations
---

`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, `UnwindSafe`
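A sketch of a reversed list starting at 10 (`type_` again escapes the `type` keyword), with a borrowed `T_OL`:

```
fn countdown(ol: &mut T_OL) {
    ol.reversed()    // boolean attribute, no argument
        .start(10)   // numeric starting ordinal
        .type_("i"); // list marker style as a string value
}
```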
Struct onhtml::T_OPTION
===

```
pub struct T_OPTION(_);
```

Implementations
---

### impl T_OPTION

Implements the shared set of global-attribute builders documented under `T_H6`.

Element-specific methods:

- `pub fn disabled(&mut self) -> &mut Self`
- `pub fn selected(&mut self) -> &mut Self`
- `pub fn hidden(&mut self) -> &mut Self`
- `pub fn value(&mut self, x: &str) -> &mut Self`
- `pub fn Disabled(&mut self) -> String`
- `pub fn Selected(&mut self) -> String`
- `pub fn Hidden(&mut self) -> String`
- `pub fn Value(&mut self, x: &str) -> String`

Auto Trait Implementations
---

`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, `UnwindSafe`
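A sketch of a pre-selected option, with a borrowed `T_OPTION` and an illustrative value:

```
fn default_choice(opt: &mut T_OPTION) {
    opt.value("de")  // submit value as a string
        .selected(); // boolean attribute marking the default
}
```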
Struct onhtml::T_P
===

```
pub struct T_P(_);
```

Implementations
---

### impl T_P

Implements the shared set of global-attribute builders documented under `T_H6`.

Auto Trait Implementations
---

`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, `UnwindSafe`
Struct onhtml::T_PRE
===

```
pub struct T_PRE(_);
```

Implementations
---

### impl T_PRE

Implements the shared set of global-attribute builders documented under `T_H6`.

Element-specific methods:

- `pub fn oninput(&mut self, f: &str) -> &mut Self`
- `pub fn onchange(&mut self, f: &str) -> &mut Self`
- `pub fn wrap(&mut self) -> &mut Self`
- `pub fn Wrap(&mut self) -> String`
- `pub fn Oninput(&mut self, x: &str) -> String`
- `pub fn Onchange(&mut self, x: &str) -> String`

Auto Trait Implementations
---

`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, `UnwindSafe`
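A sketch combining a shared builder (`contenteditable`) with the `T_PRE`-specific ones; `onEdit(this)` is a hypothetical handler name:

```
fn live_block(pre: &mut T_PRE) {
    pre.contenteditable()        // shared builder from the global set
        .oninput("onEdit(this)") // hypothetical inline handler string
        .wrap();                 // boolean attribute, no argument
}
```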
Struct onhtml::T_Q
===

```
pub struct T_Q(_);
```

Implementations
---

### impl T_Q

Implements the shared set of global-attribute builders documented under `T_H6`.

Auto Trait Implementations
---

`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, `UnwindSafe`
Struct onhtml::T_SCRIPT
===

```
pub struct T_SCRIPT(_);
```

Implementations
---

### impl T_SCRIPT

Implements the shared set of global-attribute builders documented under `T_H6`.

Element-specific methods:

- `pub fn type_(&mut self, val: &str) -> &mut Self`
- `pub fn src(&mut self, val: &str) -> &mut Self`
- `pub fn Type(&mut self, x: &str) -> String`
- `pub fn Src(&mut self, x: &str) -> String`

Auto Trait Implementations
---

`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, `UnwindSafe`
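A sketch of a module script tag, with a borrowed `T_SCRIPT` and a placeholder path:

```
fn app_script(script: &mut T_SCRIPT) {
    script.type_("module") // `type` is a Rust keyword, hence `type_`
        .src("/js/app.js"); // placeholder path
}
```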
Struct onhtml::T_SECTION
===

```
pub struct T_SECTION(_);
```

Implementations
---

impl T_SECTION

The shared attribute methods documented on T_Q are implemented with the same signatures; there are no element-specific methods.
Struct onhtml::T_SELECT
===

```
pub struct T_SELECT(_);
```

Implementations
---

impl T_SELECT

The shared attribute methods documented on T_Q are implemented with the same signatures. Element-specific methods:

```
pub fn autofocus(&mut self) -> &mut Self
pub fn disabled(&mut self) -> &mut Self
pub fn required(&mut self) -> &mut Self
pub fn name(&mut self, x: &str) -> &mut Self
pub fn size(&mut self, x: i32) -> &mut Self
pub fn onchange(&mut self, f: &str) -> &mut Self
pub fn Autofocus(&mut self) -> String
pub fn Disabled(&mut self) -> String
pub fn Required(&mut self) -> String
pub fn Name(&mut self, x: &str) -> String
pub fn Onchange(&mut self, x: &str) -> String
pub fn Size(&mut self, x: i32) -> String
```
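Element-specific setters such as `name` and `size` chain together with the shared attributes. A sketch under the same rendering assumption as above, building the `<option>` children with the capitalized `Option` function (a plain function in the value namespace, so it does not clash with `std::option::Option`, which lives in the type namespace):

```rust
use onhtml::*;

fn main() {
    // Children are plain Strings, so they can be concatenated and passed
    // as the element's content.
    let children = format!("{}{}", Option("Red"), Option("Green"));
    let menu: String = select(&children)
        .name("colour")
        .size(2)
        .onchange("this.form.submit()")
        .Required();
    println!("{menu}");
}
```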
Struct onhtml::T_SMALL
===

```
pub struct T_SMALL(_);
```

Implementations
---

impl T_SMALL

The shared attribute methods documented on T_Q are implemented with the same signatures; there are no element-specific methods.
Struct onhtml::T_SOURCE
===

```
pub struct T_SOURCE(_);
```

Implementations
---

impl T_SOURCE

The shared attribute methods documented on T_Q are implemented with the same signatures. Element-specific methods:

```
pub fn src(&mut self, val: &str) -> &mut Self
pub fn type_(&mut self, val: &str) -> &mut Self
pub fn Src(&mut self, x: &str) -> String
pub fn Type(&mut self, x: &str) -> String
```
Struct onhtml::T_SPAN
===

```
pub struct T_SPAN(_);
```

Implementations
---

impl T_SPAN

The shared attribute methods documented on T_Q are implemented with the same signatures; there are no element-specific methods.
Struct onhtml::T_STYLE
===

```
pub struct T_STYLE(_);
```

Implementations
---

impl T_STYLE

The shared attribute methods documented on T_Q are implemented with the same signatures; there are no element-specific methods.
Struct onhtml::T_TEMPLATE
===

```
pub struct T_TEMPLATE(_);
```

Implementations
---

impl T_TEMPLATE

The shared attribute methods documented on T_Q are implemented with the same signatures; there are no element-specific methods.
Struct onhtml::T_TEXTAREA
===

```
pub struct T_TEXTAREA(_);
```

Implementations
---

impl T_TEXTAREA

The shared attribute methods documented on T_Q are implemented with the same signatures. Element-specific methods:

```
pub fn autofocus(&mut self) -> &mut Self
pub fn disabled(&mut self) -> &mut Self
pub fn readonly(&mut self) -> &mut Self
pub fn required(&mut self) -> &mut Self
pub fn cols(&mut self, x: i32) -> &mut Self
pub fn rows(&mut self, x: i32) -> &mut Self
pub fn maxlength(&mut self, x: i32) -> &mut Self
pub fn name(&mut self, x: &str) -> &mut Self
pub fn placeholder(&mut self, x: &str) -> &mut Self
pub fn onchange(&mut self, f: &str) -> &mut Self
pub fn oninput(&mut self, f: &str) -> &mut Self
pub fn Autofocus(&mut self) -> String
pub fn Disabled(&mut self) -> String
pub fn Readonly(&mut self) -> String
pub fn Required(&mut self) -> String
pub fn Name(&mut self, x: &str) -> String
pub fn Placeholder(&mut self, x: &str) -> String
pub fn Onchange(&mut self, x: &str) -> String
pub fn Oninput(&mut self, x: &str) -> String
pub fn Cols(&mut self, x: i32) -> String
pub fn Rows(&mut self, x: i32) -> String
pub fn Maxlength(&mut self, x: i32) -> String
```
Struct onhtml::T_TITLE
===

```
pub struct T_TITLE(_);
```

Implementations
---

impl T_TITLE

The shared attribute methods documented on T_Q are implemented with the same signatures; there are no element-specific methods.
Struct onhtml::T_UL
===

```
pub struct T_UL(_);
```

Implementations
---

impl T_UL

The shared attribute methods documented on T_Q are implemented with the same signatures; there are no element-specific methods.
Struct onhtml::T_VIDEO
===

```
pub struct T_VIDEO(_);
```

Implementations
---

impl T_VIDEO

The shared attribute methods documented on T_Q are implemented with the same signatures. Element-specific methods:

```
pub fn preload(&mut self, val: &str) -> &mut Self
pub fn autoplay(&mut self) -> &mut Self
pub fn controls(&mut self) -> &mut Self
pub fn loop_(&mut self) -> &mut Self
pub fn muted(&mut self) -> &mut Self
pub fn poster(&mut self, url: &str) -> &mut Self
pub fn src(&mut self, url: &str) -> &mut Self
pub fn width(&mut self, x: i32) -> &mut Self
pub fn height(&mut self, x: i32) -> &mut Self
pub fn Autoplay(&mut self) -> String
pub fn Controls(&mut self) -> String
pub fn Loop(&mut self) -> String
pub fn Muted(&mut self) -> String
pub fn Preload(&mut self, x: &str) -> String
pub fn Poster(&mut self, x: &str) -> String
pub fn Src(&mut self, x: &str) -> String
pub fn Width(&mut self, x: i32) -> String
pub fn Height(&mut self, x: i32) -> String
```
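A sketch combining boolean toggles (`muted`), value setters (`src`, `width`, `height`) and a capitalized finisher, under the same rendering assumption as the earlier sketches:

```rust
use onhtml::*;

fn main() {
    // Fallback text becomes the element content; attribute setters chain,
    // and Controls() is assumed to yield the rendered <video> element.
    let player: String = video("Your browser does not support video.")
        .src("movie.mp4")
        .width(640)
        .height(360)
        .muted()
        .Controls();
    println!("{player}");
}
```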
Functions
===

```
pub fn A(val: &str) -> String
pub fn Article(val: &str) -> String
pub fn Aside(val: &str) -> String
pub fn B(val: &str) -> String
pub fn Big(val: &str) -> String
pub fn Blockquote(val: &str) -> String
pub fn Body(val: &str) -> String
pub fn Button(val: &str) -> String
pub fn Canvas(val: &str) -> String
pub fn Code(val: &str) -> String
pub fn Datalist(val: &str) -> String
pub fn Div(val: &str) -> String
pub fn Figure(val: &str) -> String
pub fn Footer(val: &str) -> String
pub fn Form(val: &str) -> String
pub fn H1(val: &str) -> String
pub fn H2(val: &str) -> String
pub fn H3(val: &str) -> String
pub fn H4(val: &str) -> String
pub fn H5(val: &str) -> String
pub fn H6(val: &str) -> String
pub fn Head(val: &str) -> String
pub fn Header(val: &str) -> String
pub fn Html(val: &str) -> String
pub fn Iframe(val: &str) -> String
pub fn Img(val: &str) -> String
pub fn Input(val: &str) -> String
pub fn Label(val: &str) -> String
pub fn Li(val: &str) -> String
pub fn Main(val: &str) -> String
pub fn Marquee(val: &str) -> String
pub fn Nav(val: &str) -> String
pub fn Ol(val: &str) -> String
pub fn Option(val: &str) -> String
pub fn P(val: &str) -> String
pub fn Pre(val: &str) -> String
pub fn Q(val: &str) -> String
pub fn Script(val: &str) -> String
pub fn Section(val: &str) -> String
pub fn Select(val: &str) -> String
pub fn Small(val: &str) -> String
pub fn Source(val: &str) -> String
pub fn Span(val: &str) -> String
pub fn Style(val: &str) -> String
pub fn Template(val: &str) -> String
pub fn Textarea(val: &str) -> String
pub fn Title(val: &str) -> String
pub fn Ul(val: &str) -> String
pub fn Video(val: &str) -> String

pub fn a(val: &str) -> T_A
pub fn article(val: &str) -> T_ARTICLE
pub fn aside(val: &str) -> T_ASIDE
pub fn b(val: &str) -> T_B
pub fn big(val: &str) -> T_BIG
pub fn blockquote(val: &str) -> T_BLOCKQUOTE
pub fn body(val: &str) -> T_BODY
pub fn button(val: &str) -> T_BUTTON
pub fn canvas(val: &str) -> T_CANVAS
pub fn code(val: &str) -> T_CODE
pub fn datalist(val: &str) -> T_DATALIST
pub fn div(val: &str) -> T_DIV
pub fn doctype(val: &str) -> String
pub fn figure(val: &str) -> T_FIGURE
pub fn footer(val: &str) -> T_FOOTER
pub fn form(val: &str) -> T_FORM
pub fn h1(val: &str) -> T_H1
pub fn h2(val: &str) -> T_H2
pub fn h3(val: &str) -> T_H3
pub fn h4(val: &str) -> T_H4
pub fn h5(val: &str) -> T_H5
pub fn h6(val: &str) -> T_H6
pub fn head(val: &str) -> T_HEAD
pub fn header(val: &str) -> T_HEADER
pub fn html(val: &str) -> T_HTML
pub fn iframe(val: &str) -> T_IFRAME
pub fn img(val: &str) -> T_IMG
pub fn input(val: &str) -> T_INPUT
pub fn label(val: &str) -> T_LABEL
pub fn li(val: &str) -> T_LI
pub fn link() -> T_LINK
pub fn main_(val: &str) -> T_MAIN
pub fn marquee(val: &str) -> T_MARQUEE
pub fn meta() -> T_META
pub fn nav(val: &str) -> T_NAV
pub fn ol(val: &str) -> T_OL
pub fn option(val: &str) -> T_OPTION
pub fn p(val: &str) -> T_P
pub fn pre(val: &str) -> T_PRE
pub fn q(val: &str) -> T_Q
pub fn script(val: &str) -> T_SCRIPT
pub fn section(val: &str) -> T_SECTION
pub fn select(val: &str) -> T_SELECT
pub fn small(val: &str) -> T_SMALL
pub fn source(val: &str) -> T_SOURCE
pub fn span(val: &str) -> T_SPAN
pub fn style(val: &str) -> T_STYLE
pub fn template(val: &str) -> T_TEMPLATE
pub fn textarea(val: &str) -> T_TEXTAREA
pub fn title(val: &str) -> T_TITLE
pub fn ul(val: &str) -> T_UL
pub fn video(val: &str) -> T_VIDEO
```
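Because every capitalized free function has the shape `fn(&str) -> String`, whole documents can be assembled inside-out, with `doctype` also returning a `String` directly. A sketch of a minimal page, again assuming the functions wrap their argument in the corresponding tag (that behavior is inferred from the names, not stated in this listing):

```rust
use onhtml::*;

fn main() {
    // Nest Strings inside-out: innermost elements first, outermost last.
    let page = format!(
        "{}{}",
        doctype("html"),
        Html(&format!("{}{}", Head(&Title("Demo")), Body(&H1("Hello"))))
    );
    println!("{page}");
}
```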
Package ‘CondIndTests’ October 12, 2022

Type Package
Title Nonlinear Conditional Independence Tests
Version 0.1.5
Date 2019-11-11
Author <NAME> <<EMAIL>>, <NAME> <<EMAIL>>, <NAME> <<EMAIL>>
Depends R (>= 3.1.0)
Maintainer <NAME> <<EMAIL>>
Description Code for a variety of nonlinear conditional independence tests: Kernel conditional independence test (Zhang et al., UAI 2011, <arXiv:1202.3775>), Residual Prediction test (based on Shah and Buehlmann, <arXiv:1511.03334>), Invariant environment prediction, Invariant target prediction, Invariant residual distribution test, Invariant conditional quantile prediction (all from Heinze-Deml et al., <arXiv:1706.08576>).
License GPL
LazyData TRUE
Imports methods, randomForest, quantregForest, lawstat, RPtests, caTools, mgcv, MASS, kernlab, pracma, mize
URL https://github.com/christinaheinze/nonlinearICP-and-CondIndTests
BugReports https://github.com/christinaheinze/nonlinearICP-and-CondIndTests/issues
RoxygenNote 6.1.1
Suggests testthat
NeedsCompilation no
Repository CRAN
Date/Publication 2019-11-12 06:50:21 UTC

R topics documented:
CondIndTest
fishersTestExceedance
fTestTargetY
InvariantConditionalQuantilePrediction
InvariantEnvironmentPrediction
InvariantResidualDistributionTest
InvariantTargetPrediction
KCI
ksResidualDistributions
leveneAndWilcoxResidualDistributions
propTestTargetE
ResidualPredictionTest
wilcoxTestTargetY

CondIndTest Wrapper function for conditional independence tests.

Description

Tests the null hypothesis that Y and E are independent given X.

Usage

CondIndTest(Y, E, X, method = "KCI", alpha = 0.05, parsMethod = list(), verbose = FALSE)

Arguments

Y An n-dimensional vector or a matrix or dataframe with n rows and p columns.
E An n-dimensional vector or a matrix or dataframe with n rows and p columns.
X An n-dimensional vector or a matrix or dataframe with n rows and p columns.
method The conditional independence test to use; one of "KCI", "InvariantConditionalQuantilePrediction", "InvariantEnvironmentPrediction", "InvariantResidualDistributionTest", "InvariantTargetPrediction", "ResidualPredictionTest".
alpha Significance level. Defaults to 0.05.
parsMethod Named list to pass options to method.
verbose If TRUE, intermediate output is provided. Defaults to FALSE.

Value

A list with the p-value of the test (pvalue) and possibly additional entries, depending on the output of the chosen conditional independence test in method.

References

Please cite <NAME>, <NAME> and <NAME>: "Invariant Causal Prediction for Nonlinear Models", arXiv:1706.08576, and the corresponding reference for the conditional independence test.

Examples

# Example 1
set.seed(1)
n <- 100
Z <- rnorm(n)
X <- 4 + 2 * Z + rnorm(n)
Y <- 3 * X^2 + Z + rnorm(n)
test1 <- CondIndTest(X, Y, Z, method = "KCI")
cat("These data come from a distribution, for which X and Y are NOT cond. ind. given Z.")
cat(paste("The p-value of the test is: ", test1$pvalue))

# Example 2
set.seed(1)
Z <- rnorm(n)
X <- 4 + 2 * Z + rnorm(n)
Y <- 3 + Z + rnorm(n)
test2 <- CondIndTest(X, Y, Z, method = "KCI")
cat("The data come from a distribution, for which X and Y are cond. ind. given Z.")
cat(paste("The p-value of the test is: ", test2$pvalue))
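Options for the chosen method are supplied through parsMethod as named entries matching that method's arguments. A minimal sketch, assuming the entries are forwarded unchanged to the method (here KCI; see its arguments further below):

# Sketch: forwarding KCI options through the wrapper
test3 <- CondIndTest(X, Y, Z, method = "KCI",
                     parsMethod = list(gammaApprox = FALSE, nRepBs = 1000))
test3$pvalue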
fishersTestExceedance Fisher's test to test whether the exceedance of the conditional quantiles is independent of the categorical variable E.

Description

Used as a subroutine in InvariantConditionalQuantilePrediction to test whether the exceedance of the conditional quantiles is independent of the categorical variable E.

Usage

fishersTestExceedance(Y, predicted, E, verbose)

Arguments

Y An n-dimensional vector.
predicted A matrix with n rows. The columns contain predictions for different conditional quantiles of Y|X.
E An n-dimensional vector. E needs to be a factor.
verbose Set to TRUE if output should be printed.

Value

A list with the p-value for the test.

fTestTargetY F-test for a nested model comparison.

Description

Used as a subroutine in InvariantTargetPrediction to test whether out-of-sample prediction performance is better when using X and E as predictors for Y, compared to using X only.

Usage

fTestTargetY(Y, predictedOnlyX, predictedXE, verbose, ...)

Arguments

Y An n-dimensional vector.
predictedOnlyX Predictions for Y based on predictors in X only.
predictedXE Predictions for Y based on predictors in X and E.
verbose Set to TRUE if output should be printed.
... The dimensions of X (df) and E (dimE) need to be passed via the ... argument to allow for a coherent interface of fTestTargetY and wilcoxTestTargetY.

Value

A list with the p-value for the test.

InvariantConditionalQuantilePrediction Invariant conditional quantile prediction.

Description

Tests the null hypothesis that Y and E are independent given X.

Usage

InvariantConditionalQuantilePrediction(Y, E, X, alpha = 0.05, verbose = FALSE,
  test = fishersTestExceedance, mtry = sqrt(NCOL(X)), ntree = 100,
  nodesize = 5, maxnodes = NULL, quantiles = c(0.1, 0.5, 0.9),
  returnModel = FALSE)

Arguments

Y An n-dimensional vector.
E An n-dimensional vector. If test = fishersTestExceedance, E needs to be a factor.
X A matrix or dataframe with n rows and p columns.
alpha Significance level. Defaults to 0.05.
verbose If TRUE, intermediate output is provided. Defaults to FALSE.
test Unconditional independence test that tests whether exceedance is independent of E. Defaults to fishersTestExceedance.
mtry Random forest parameter: Number of variables randomly sampled as candidates at each split. Defaults to sqrt(NCOL(X)).
ntree Random forest parameter: Number of trees to grow. Defaults to 100.
nodesize Random forest parameter: Minimum size of terminal nodes. Defaults to 5.
maxnodes Random forest parameter: Maximum number of terminal nodes trees in the forest can have. Defaults to NULL.
quantiles Quantiles for which to test independence between exceedance and E. Defaults to c(0.1, 0.5, 0.9).
returnModel If TRUE, the fitted quantile regression forest model will be returned. Defaults to FALSE.

Value

A list with the following entries:
• pvalue The p-value for the null hypothesis that Y and E are independent given X.
• model The fitted quantile regression forest model if returnModel = TRUE.

Examples

# Example 1
n <- 1000
E <- rbinom(n, size = 1, prob = 0.2)
X <- 4 + 2 * E + rnorm(n)
Y <- 3 * (X)^2 + rnorm(n)
InvariantConditionalQuantilePrediction(Y, as.factor(E), X)

# Example 2
E <- rbinom(n, size = 1, prob = 0.2)
X <- 4 + 2 * E + rnorm(n)
Y <- 3 * E + rnorm(n)
InvariantConditionalQuantilePrediction(Y, as.factor(E), X)
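The default test, fishersTestExceedance, can also be called directly on a matrix of conditional quantile predictions. A minimal sketch, assuming quantile predictions obtained with quantregForest (which this package imports); all variable names are illustrative:

# Sketch: direct use of the exceedance test on quantile predictions
library(quantregForest)
n <- 500
E <- as.factor(rbinom(n, size = 1, prob = 0.5))
X <- matrix(rnorm(n), ncol = 1)
Y <- 2 * X[, 1] + rnorm(n)
qrf <- quantregForest(X, Y, ntree = 100)
predicted <- predict(qrf, X, what = c(0.1, 0.5, 0.9))
fishersTestExceedance(Y, predicted, E, verbose = FALSE)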
InvariantEnvironmentPrediction Invariant environment prediction.

Description

Tests the null hypothesis that Y and E are independent given X.

Usage

InvariantEnvironmentPrediction(Y, E, X, alpha = 0.05, verbose = FALSE,
  trainTestSplitFunc = caTools::sample.split,
  argsTrainTestSplitFunc = list(Y = E, SplitRatio = 0.8),
  test = propTestTargetE, mtry = sqrt(NCOL(X)), ntree = 100, nodesize = 5,
  maxnodes = NULL, permute = TRUE, returnModel = FALSE)

Arguments

Y An n-dimensional vector.
E An n-dimensional vector. If test = propTestTargetE, E needs to be a factor.
X A matrix or dataframe with n rows and p columns.
alpha Significance level. Defaults to 0.05.
verbose If TRUE, intermediate output is provided. Defaults to FALSE.
trainTestSplitFunc Function to split the sample. Defaults to stratified sampling using caTools::sample.split, assuming E is a factor.
argsTrainTestSplitFunc Arguments for the sample splitting function.
test Unconditional independence test that tests whether the out-of-sample prediction accuracy is the same when using X only vs. X and Y as predictors for E. Defaults to propTestTargetE.
mtry Random forest parameter: Number of variables randomly sampled as candidates at each split. Defaults to sqrt(NCOL(X)).
ntree Random forest parameter: Number of trees to grow. Defaults to 100.
nodesize Random forest parameter: Minimum size of terminal nodes. Defaults to 5.
maxnodes Random forest parameter: Maximum number of terminal nodes trees in the forest can have. Defaults to NULL.
permute Random forest parameter: If TRUE, the model that uses X only for predicting Y also includes a random permutation of E. Defaults to TRUE.
returnModel If TRUE, the fitted models will be returned. Defaults to FALSE.

Value

A list with the following entries:
• pvalue The p-value for the null hypothesis that Y and E are independent given X.
• model The fitted models if returnModel = TRUE.

Examples

# Example 1
n <- 1000
E <- rbinom(n, size = 1, prob = 0.2)
X <- 4 + 2 * E + rnorm(n)
Y <- 3 * (X)^2 + rnorm(n)
InvariantEnvironmentPrediction(Y, as.factor(E), X)

# Example 2
E <- rbinom(n, size = 1, prob = 0.2)
X <- 4 + 2 * E + rnorm(n)
Y <- 3 * E + rnorm(n)
InvariantEnvironmentPrediction(Y, as.factor(E), X)

# Example 3
E <- rnorm(n)
X <- 4 + 2 * E + rnorm(n)
Y <- 3 * (X)^2 + rnorm(n)
InvariantEnvironmentPrediction(Y, E, X, test = wilcoxTestTargetY)
InvariantEnvironmentPrediction(Y, X, E, test = wilcoxTestTargetY)
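The train/test split can be customized via trainTestSplitFunc and argsTrainTestSplitFunc. A minimal sketch, assuming the argument list is passed straight to the documented caTools::sample.split interface, that uses a 50/50 stratified split instead of the default 80/20:

# Sketch: custom split ratio for the sample splitting step
E <- as.factor(rbinom(1000, size = 1, prob = 0.2))
X <- 4 + 2 * (as.numeric(E) - 1) + rnorm(1000)
Y <- 3 * X^2 + rnorm(1000)
InvariantEnvironmentPrediction(Y, E, X,
  argsTrainTestSplitFunc = list(Y = E, SplitRatio = 0.5))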
InvariantResidualDistributionTest Invariant residual distribution test.

Description

Tests the null hypothesis that Y and E are independent given X.

Usage

InvariantResidualDistributionTest(Y, E, X, alpha = 0.05, verbose = FALSE,
  fitWithGam = TRUE, test = leveneAndWilcoxResidualDistributions,
  colNameNoSmooth = NULL, mtry = sqrt(NCOL(X)), ntree = 100, nodesize = 5,
  maxnodes = NULL, returnModel = FALSE)

Arguments

Y An n-dimensional vector.
E An n-dimensional vector. E needs to be a factor.
X A matrix or dataframe with n rows and p columns.
alpha Significance level. Defaults to 0.05.
verbose If TRUE, intermediate output is provided. Defaults to FALSE.
fitWithGam If TRUE, a GAM is used for the nonlinear regression, else a random forest is used. Defaults to TRUE.
test Unconditional independence test that tests whether the residual distribution is invariant across different levels of E. Defaults to leveneAndWilcoxResidualDistributions.
colNameNoSmooth GAM parameter: Name of variables that should enter linearly into the model. Defaults to NULL.
mtry Random forest parameter: Number of variables randomly sampled as candidates at each split. Defaults to sqrt(NCOL(X)).
ntree Random forest parameter: Number of trees to grow. Defaults to 100.
nodesize Random forest parameter: Minimum size of terminal nodes. Defaults to 5.
maxnodes Random forest parameter: Maximum number of terminal nodes trees in the forest can have. Defaults to NULL.
returnModel If TRUE, the fitted model will be returned. Defaults to FALSE.

Value

A list with the following entries:
• pvalue The p-value for the null hypothesis that Y and E are independent given X.
• model The fitted model if returnModel = TRUE.

Examples

# Example 1
n <- 1000
E <- rbinom(n, size = 1, prob = 0.2)
X <- 4 + 2 * E + rnorm(n)
Y <- 3 * (X)^2 + rnorm(n)
InvariantResidualDistributionTest(Y, as.factor(E), X)
InvariantResidualDistributionTest(Y, as.factor(E), X, test = ksResidualDistributions)

# Example 2
E <- rbinom(n, size = 1, prob = 0.2)
X <- 4 + 2 * E + rnorm(n)
Y <- 3 * E + rnorm(n)
InvariantResidualDistributionTest(Y, as.factor(E), X)
InvariantResidualDistributionTest(Y, as.factor(E), X, test = ksResidualDistributions)
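The regression step is exchangeable: with fitWithGam = TRUE a GAM is fitted, otherwise a random forest. A minimal sketch of the forest variant, reusing the simulated data from the examples above:

# Sketch: random forest regression instead of the default GAM
InvariantResidualDistributionTest(Y, as.factor(E), X, fitWithGam = FALSE, ntree = 200)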
InvariantTargetPrediction        Invariant target prediction.

Description

Tests the null hypothesis that Y and E are independent given X.

Usage

InvariantTargetPrediction(Y, E, X, alpha = 0.05, verbose = FALSE,
  fitWithGam = TRUE, trainTestSplitFunc = caTools::sample.split,
  argsTrainTestSplitFunc = NULL, test = fTestTargetY, colNameNoSmooth = NULL,
  mtry = sqrt(NCOL(X)), ntree = 100, nodesize = 5, maxnodes = NULL,
  permute = TRUE, returnModel = FALSE)

Arguments

Y An n-dimensional vector.
E An n-dimensional vector or an n x q dimensional matrix or dataframe.
X A matrix or dataframe with n rows and p columns.
alpha Significance level. Defaults to 0.05.
verbose If TRUE, intermediate output is provided. Defaults to FALSE.
fitWithGam If TRUE, a GAM is used for the nonlinear regression, else a random forest is used. Defaults to TRUE.
trainTestSplitFunc Function to split the sample. Defaults to stratified sampling using caTools::sample.split, assuming E is a factor.
argsTrainTestSplitFunc Arguments for the sample-splitting function.
test Unconditional independence test that tests whether the out-of-sample prediction accuracy is the same when using X only vs. X and E as predictors for Y. Defaults to fTestTargetY.
colNameNoSmooth GAM parameter: Name of variables that should enter linearly into the model. Defaults to NULL.
mtry Random forest parameter: Number of variables randomly sampled as candidates at each split. Defaults to sqrt(NCOL(X)).
ntree Random forest parameter: Number of trees to grow. Defaults to 100.
nodesize Random forest parameter: Minimum size of terminal nodes. Defaults to 5.
maxnodes Random forest parameter: Maximum number of terminal nodes trees in the forest can have. Defaults to NULL.
permute Random forest parameter: If TRUE, the model that uses X only for predicting Y also includes a random permutation of E. Defaults to TRUE.
returnModel If TRUE, the fitted quantile regression forest model is returned. Defaults to FALSE.

Value

A list with the following entries:
• pvalue The p-value for the null hypothesis that Y and E are independent given X.
• model The fitted models if returnModel = TRUE.

Examples

# Example 1
n <- 1000
E <- rbinom(n, size = 1, prob = 0.2)
X <- 4 + 2 * E + rnorm(n)
Y <- 3 * (X)^2 + rnorm(n)
InvariantTargetPrediction(Y, as.factor(E), X)
InvariantTargetPrediction(Y, as.factor(E), X, test = wilcoxTestTargetY)

# Example 2
E <- rbinom(n, size = 1, prob = 0.2)
X <- 4 + 2 * E + rnorm(n)
Y <- 3 * E + rnorm(n)
InvariantTargetPrediction(Y, as.factor(E), X)
InvariantTargetPrediction(Y, as.factor(E), X, test = wilcoxTestTargetY)

# Example 3
E <- rnorm(n)
X <- 4 + 2 * E + rnorm(n)
Y <- 3 * (X)^2 + rnorm(n)
InvariantTargetPrediction(Y, E, X)
InvariantTargetPrediction(Y, X, E)
InvariantTargetPrediction(Y, E, X, test = wilcoxTestTargetY)
InvariantTargetPrediction(Y, X, E, test = wilcoxTestTargetY)

KCI        Kernel conditional independence test.

Description

Tests the null hypothesis that Y and E are independent given X. The distribution of the test statistic under the null hypothesis equals an infinite weighted sum of chi-squared variables. This distribution can either be approximated by a gamma distribution or by a Monte Carlo approach. This version includes an implementation of choosing the hyperparameters by Gaussian Process regression.

Usage

KCI(Y, E, X, width = 0, alpha = 0.05, unbiased = FALSE, gammaApprox = TRUE,
  GP = TRUE, nRepBs = 5000, lambda = 0.001, thresh = 1e-05, numEig = NROW(Y),
  verbose = FALSE)

Arguments

Y A vector of length n or a matrix or dataframe with n rows and p columns.
E A vector of length n or a matrix or dataframe with n rows and p columns.
X A matrix or dataframe with n rows and p columns.
width Kernel width; if it is set to zero, the width is chosen automatically (default: 0).
alpha Significance level (default: 0.05).
unbiased A boolean variable that indicates whether a bias correction should be applied (default: FALSE).
gammaApprox A boolean variable that indicates whether the null distribution is approximated by a gamma distribution. If it is FALSE, a Monte Carlo approach is used (default: TRUE).
GP Flag indicating whether to use Gaussian Process regression to choose the hyperparameters (default: TRUE).
nRepBs Number of draws for the Monte Carlo approach (default: 5000).
lambda Regularization parameter (default: 1e-03).
thresh Threshold for eigenvalues. Whenever eigenvalues are computed, they are set to zero if they are smaller than thresh times the maximum eigenvalue (default: 1e-05).
numEig Number of eigenvalues computed (only relevant for computing the distribution under the hypothesis of conditional independence) (default: NROW(Y)).
verbose If TRUE, intermediate output is provided (default: FALSE).

Value

A list with the following entries:
• testStatistic the statistic Tr(K_(ddot(Y)|X) * K_(E|X))
• criticalValue the critical point at the p-value equal to alpha; obtained by a Monte Carlo approach if gammaApprox = FALSE, otherwise by the gamma approximation.
• pvalue The p-value for the null hypothesis that Y and E are independent given X. It is obtained by a Monte Carlo approach if gammaApprox = FALSE, otherwise by the gamma approximation.

Examples

# Example 1
n <- 100
E <- rnorm(n)
X <- 4 + 2 * E + rnorm(n)
Y <- 3 * (X)^2 + rnorm(n)
KCI(Y, E, X)
KCI(Y, X, E)
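For orientation, the two null approximations can be called side by side; a small sketch re-using Y, E and X from the example above (the Monte Carlo variant is slower, with cost growing in nRepBs):

KCI(Y, E, X, gammaApprox = TRUE)                  # gamma approximation of the null
KCI(Y, E, X, gammaApprox = FALSE, nRepBs = 1000)  # Monte Carlo approximation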
ksResidualDistributions        Kolmogorov-Smirnov test to compare residual distributions

Description

Used as a subroutine in InvariantResidualDistributionTest to test whether the residual distribution remains invariant across different levels of E.

Usage

ksResidualDistributions(Y, predicted, E, verbose)

Arguments

Y An n-dimensional vector.
predicted An n-dimensional vector of predictions for Y.
E An n-dimensional vector. E needs to be a factor.
verbose Set to TRUE if output should be printed.

Value

A list with the p-value for the test.

leveneAndWilcoxResidualDistributions        Levene and Wilcoxon tests to compare first and second moments of residual distributions

Description

Used as a subroutine in InvariantResidualDistributionTest to test whether the residual distribution remains invariant across different levels of E.

Usage

leveneAndWilcoxResidualDistributions(Y, predicted, E, verbose)

Arguments

Y An n-dimensional vector.
predicted An n-dimensional vector of predictions for Y.
E An n-dimensional vector. E needs to be a factor.
verbose Set to TRUE if output should be printed.

Value

A list with the p-value for the test.

propTestTargetE        Proportion test to compare two misclassification rates.

Description

Used as a subroutine in InvariantEnvironmentPrediction to test whether the out-of-sample performance is better when using X and Y as predictors for E, compared to using X only.

Usage

propTestTargetE(E, predictedOnlyX, predictedXY, verbose)

Arguments

E An n-dimensional vector.
predictedOnlyX Predictions for E based on predictors in X only.
predictedXY Predictions for E based on predictors in X and Y.
verbose Set to TRUE if output should be printed.

Value

A list with the p-value for the test.
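Any function with this interface can replace propTestTargetE in InvariantEnvironmentPrediction. A minimal sketch under that assumption (misclassTest is our own name; Y, E and X as in Example 1 of InvariantEnvironmentPrediction), comparing the two misclassification counts with a one-sided two-sample proportion test:

misclassTest <- function(E, predictedOnlyX, predictedXY, verbose) {
  # does adding Y to the predictors reduce the misclassification rate for E?
  pvalue <- prop.test(c(sum(predictedXY != E), sum(predictedOnlyX != E)),
                      rep(length(E), 2), alternative = "less")$p.value
  if (verbose) print(pvalue)
  list(pvalue = pvalue)
}
InvariantEnvironmentPrediction(Y, as.factor(E), X, test = misclassTest)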
ResidualPredictionTest        Residual prediction test.

Description

Tests the null hypothesis that Y and E are independent given X.

Usage

ResidualPredictionTest(Y, E, X, alpha = 0.05, verbose = FALSE, degree = 4,
  basis = c("nystrom", "nystrom_poly", "fourier", "polynomial", "provided")[1],
  resid_type = "OLS", XBasis = NULL, noiseMat = NULL,
  getnoiseFct = function(n, ...) { rnorm(n) }, argsGetNoiseFct = NULL,
  nSim = 100, funcOfRes = function(x) { abs(x) }, useX = TRUE,
  returnXBasis = FALSE, nSub = ceiling(NROW(X)/4), ntree = 100, nodesize = 5,
  maxnodes = NULL)

Arguments

Y An n-dimensional vector.
E An n-dimensional vector or an n x q dimensional matrix or dataframe.
X A matrix or dataframe with n rows and p columns.
alpha Significance level. Defaults to 0.05.
verbose If TRUE, intermediate output is provided. Defaults to FALSE.
degree Degree of the polynomial to use if basis="polynomial" or basis="nystrom_poly". Defaults to 4.
basis Can be one of "nystrom", "nystrom_poly", "fourier", "polynomial", "provided". Defaults to "nystrom".
resid_type Can be "Lasso" or "OLS". Defaults to "OLS".
XBasis Basis if basis="provided". Defaults to NULL.
noiseMat Matrix with simulated noise. Defaults to NULL, in which case the simulation is performed inside the function.
getnoiseFct Function to use to generate the noise matrix. Defaults to function(n, ...){rnorm(n)}.
argsGetNoiseFct Arguments for getnoiseFct. Defaults to NULL.
nSim Number of simulations to use. Defaults to 100.
funcOfRes Function of the residuals to use in addition to predicting the conditional mean. Defaults to function(x){abs(x)}.
useX Set to TRUE if the predictors in X should also be used when predicting the scaled residuals with E. Defaults to TRUE.
returnXBasis Set to TRUE if the basis expansion should be returned. Defaults to FALSE.
nSub Number of random features to use if basis is one of "nystrom", "nystrom_poly" or "fourier". Defaults to ceiling(NROW(X)/4).
ntree Random forest parameter: Number of trees to grow. Defaults to 100.
nodesize Random forest parameter: Minimum size of terminal nodes. Defaults to 5.
maxnodes Random forest parameter: Maximum number of terminal nodes trees in the forest can have. Defaults to NULL.

Value

A list with the following entries:
• pvalue The p-value for the null hypothesis that Y and E are independent given X.
• XBasis Basis expansion if returnXBasis was set to TRUE.
• fctBasisExpansion Function used to create the basis expansion if basis is not "provided".

Examples

# Example 1
n <- 100
E <- rbinom(n, size = 1, prob = 0.2)
X <- 4 + 2 * E + rnorm(n)
Y <- 3 * (X)^2 + rnorm(n)
ResidualPredictionTest(Y, as.factor(E), X)

# Example 2
E <- rbinom(n, size = 1, prob = 0.2)
X <- 4 + 2 * E + rnorm(n)
Y <- 3 * E + rnorm(n)
ResidualPredictionTest(Y, as.factor(E), X)

# not run:
# # Example 3
# E <- rnorm(n)
# X <- 4 + 2 * E + rnorm(n)
# Y <- 3 * (X)^2 + rnorm(n)
# ResidualPredictionTest(Y, E, X)
# ResidualPredictionTest(Y, X, E)

wilcoxTestTargetY        Wilcoxon test to compare two mean squared error rates.

Description

Used as a subroutine in InvariantTargetPrediction to test whether the out-of-sample performance is better when using X and E as predictors for Y, compared to using X only.

Usage

wilcoxTestTargetY(Y, predictedOnlyX, predictedXE, verbose, ...)

Arguments

Y An n-dimensional vector.
predictedOnlyX Predictions for Y based on predictors in X only.
predictedXE Predictions for Y based on predictors in X and E.
verbose Set to TRUE if output should be printed.
... Argument to allow for a coherent interface of fTestTargetY and wilcoxTestTargetY.

Value

A list with the p-value for the test.
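The same interface lets you swap the loss comparison in InvariantTargetPrediction. A minimal sketch under that assumption (tTestTargetY is our own name; Y, E and X as in Example 1 of InvariantTargetPrediction): a one-sided paired t-test on the squared prediction errors.

tTestTargetY <- function(Y, predictedOnlyX, predictedXE, verbose, ...) {
  # is the squared error smaller when E is added to the predictors?
  pvalue <- t.test((Y - predictedXE)^2, (Y - predictedOnlyX)^2,
                   paired = TRUE, alternative = "less")$p.value
  if (verbose) print(pvalue)
  list(pvalue = pvalue)
}
InvariantTargetPrediction(Y, as.factor(E), X, test = tTestTargetY)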
ExtUtils-ParseXS
cpan
Perl
Changes for version 3.51
---

* Initialize $self correctly in EU::PXS::Utilities::death()
* C++ builds: avoid generating C<< extern "C" extern "C" >>

Documentation
---

[xsubpp](/dist/ExtUtils-ParseXS/view/lib/ExtUtils/xsubpp) compiler to convert Perl XS code into C code

Modules
---

[ExtUtils::ParseXS](/pod/ExtUtils::ParseXS) converts Perl XS code into C code

[ExtUtils::ParseXS::Constants](/pod/ExtUtils::ParseXS::Constants) Initialization values for some globals

[ExtUtils::ParseXS::Eval](/pod/ExtUtils::ParseXS::Eval) Clean package to evaluate code in

[ExtUtils::ParseXS::Utilities](/pod/ExtUtils::ParseXS::Utilities) Subroutines used with ExtUtils::ParseXS

[ExtUtils::Typemaps](/pod/ExtUtils::Typemaps) Read/Write/Modify Perl/XS typemap files

[ExtUtils::Typemaps::Cmd](/pod/ExtUtils::Typemaps::Cmd) Quick commands for handling typemaps

[ExtUtils::Typemaps::InputMap](/pod/ExtUtils::Typemaps::InputMap) Entry in the INPUT section of a typemap

[ExtUtils::Typemaps::OutputMap](/pod/ExtUtils::Typemaps::OutputMap) Entry in the OUTPUT section of a typemap

[ExtUtils::Typemaps::Type](/pod/ExtUtils::Typemaps::Type) Entry in the TYPEMAP section of a typemap

Provides
---

[ExtUtils::ParseXS::CountLines](/release/LEONT/ExtUtils-ParseXS-3.51/source/lib/ExtUtils/ParseXS/CountLines.pm#PExtUtils::ParseXS::CountLines) in lib/ExtUtils/ParseXS/CountLines.pm

Other files
---

* [Changes](/release/LEONT/ExtUtils-ParseXS-3.51/source/Changes)
* [MANIFEST](/release/LEONT/ExtUtils-ParseXS-3.51/source/MANIFEST)
* [META.json](/release/LEONT/ExtUtils-ParseXS-3.51/source/META.json)
* [META.yml](/release/LEONT/ExtUtils-ParseXS-3.51/source/META.yml)
* [Makefile.PL](/release/LEONT/ExtUtils-ParseXS-3.51/source/Makefile.PL)

NAME
===

xsubpp - compiler to convert Perl XS code into C code

SYNOPSIS
===

**xsubpp** [**-v**] [**-except**] [**-s pattern**] [**-prototypes**] [**-noversioncheck**] [**-nolinenumbers**] [**-nooptimize**] [**-typemap typemap**] [**-output filename**]... file.xs

DESCRIPTION
===

This compiler is typically run by the makefiles created by [ExtUtils::MakeMaker](/pod/ExtUtils::MakeMaker), by [Module::Build](/pod/Module::Build) or by other Perl module build tools.

*xsubpp* will compile XS code into C code by embedding the constructs necessary to let C functions manipulate Perl values and creates the glue necessary to let Perl access those functions. The compiler uses typemaps to determine how to map C function parameters and variables to Perl values.

The compiler will search for typemap files called *typemap*. It will use the following search path to find default typemaps, with the rightmost typemap taking precedence.

```
../../../typemap:../../typemap:../typemap:typemap
```

It will also use a default typemap installed as `ExtUtils::typemap`.
OPTIONS
===

Note that the `XSOPT` MakeMaker option may be used to add these options to any makefiles generated by MakeMaker.

**-hiertype**

Retains '::' in type names so that C++ hierarchical types can be mapped.

**-except**

Adds exception handling stubs to the C code.

**-typemap typemap**

Indicates that a user-supplied typemap should take precedence over the default typemaps. This option may be used multiple times, with the last typemap having the highest precedence.

**-output filename**

Specifies the name of the output file to generate. If no file is specified, output will be written to standard output.

**-v**

Prints the *xsubpp* version number to standard output, then exits.

**-prototypes**

By default *xsubpp* will not automatically generate prototype code for all xsubs. This flag will enable prototypes.

**-noversioncheck**

Disables the run time test that determines if the object file (derived from the `.xs` file) and the `.pm` files have the same version number.

**-nolinenumbers**

Prevents the inclusion of '#line' directives in the output.

**-nooptimize**

Disables certain optimizations. The only optimization that is currently affected is the use of *target*s by the output C code (see [perlguts](/pod/perlguts)). This may significantly slow down the generated code, but this is the way **xsubpp** of 5.005 and earlier operated.

**-noinout**

Disable recognition of `IN`, `OUT_LIST` and `INOUT_LIST` declarations.

**-noargtypes**

Disable recognition of ANSI-like descriptions of function signature.

**-C++**

Currently doesn't do anything at all. This flag has been a no-op for many versions of perl, at least as far back as perl5.003_07. It's allowed here for backwards compatibility.

**-s=...** or **-strip=...**

*This option is obscure and discouraged.*

If specified, the given string will be stripped off from the beginning of the C function name in the generated XS functions (if it starts with that prefix). This only applies to XSUBs without `CODE` or `PPCODE` blocks. For example, the XS:

```
void foo_bar(int i);
```

when `xsubpp` is invoked with `-s foo_` will install a `foo_bar` function in Perl, but really call `bar(i)` in C. Most of the time, this is the opposite of what you want and failure modes are somewhat obscure, so please avoid this option where possible.

ENVIRONMENT
===

No environment variables are used.

AUTHOR
===

Originally by <NAME>. Turned into the `ExtUtils::ParseXS` module by <NAME>.

MODIFICATION HISTORY
===

See the file *Changes*.

SEE ALSO
===

perl(1), perlxs(1), perlxstut(1), ExtUtils::ParseXS
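For orientation, a typical manual invocation combining the options above might look as follows; the file names Foo.xs/Foo.c and the extra typemap path are our own examples, not something the build tools require:

```
xsubpp -prototypes -typemap ../typemap -typemap typemap -output Foo.c Foo.xs
```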
NAME
===

ExtUtils::ParseXS - converts Perl XS code into C code

SYNOPSIS
===

```
use ExtUtils::ParseXS;

my $pxs = ExtUtils::ParseXS->new;
$pxs->process_file( filename => 'foo.xs' );

$pxs->process_file( filename => 'foo.xs',
                    output => 'bar.c',
                    'C++' => 1,
                    typemap => 'path/to/typemap',
                    hiertype => 1,
                    except => 1,
                    versioncheck => 1,
                    linenumbers => 1,
                    optimize => 1,
                    prototypes => 1,
                    die_on_error => 0,
                  );

# Legacy non-OO interface using a singleton:
use ExtUtils::ParseXS qw(process_file);
process_file( filename => 'foo.xs' );
```

DESCRIPTION
===

`ExtUtils::ParseXS` will compile XS code into C code by embedding the constructs necessary to let C functions manipulate Perl values and creates the glue necessary to let Perl access those functions. The compiler uses typemaps to determine how to map C function parameters and variables to Perl values.

The compiler will search for typemap files called *typemap*. It will use the following search path to find default typemaps, with the rightmost typemap taking precedence.

```
../../../typemap:../../typemap:../typemap:typemap
```

EXPORT
===

None by default. `process_file()` and/or `report_error_count()` may be exported upon request. Using the functional interface is discouraged.

METHODS
===

$pxs->new()

Returns a new, empty XS parser/compiler object.

$pxs->process_file()

This method processes an XS file and sends output to a C file. The method may be called as a function (this is the legacy interface) and will then use a singleton as invocant.

Named parameters control how the processing is done. The following parameters are accepted:

**C++**

Adds `extern "C"` to the C code. Default is false.

**hiertype**

Retains `::` in type names so that C++ hierarchical types can be mapped. Default is false.

**except**

Adds exception handling stubs to the C code. Default is false.

**typemap**

Indicates that a user-supplied typemap should take precedence over the default typemaps. A single typemap may be specified as a string, or multiple typemaps can be specified in an array reference, with the last typemap having the highest precedence.

**prototypes**

Generates prototype code for all xsubs. Default is false.

**versioncheck**

Makes sure at run time that the object file (derived from the `.xs` file) and the `.pm` files have the same version number. Default is true.

**linenumbers**

Adds `#line` directives to the C output so error messages will look like they came from the original XS file. Default is true.

**optimize**

Enables certain optimizations. The only optimization that is currently affected is the use of *target*s by the output C code (see [perlguts](/pod/perlguts)). Not optimizing may significantly slow down the generated code, but this is the way **xsubpp** of 5.005 and earlier operated. Default is to optimize.

**inout**

Enable recognition of `IN`, `OUT_LIST` and `INOUT_LIST` declarations. Default is true.

**argtypes**

Enable recognition of ANSI-like descriptions of function signature. Default is true.

**s**

*Maintainer note:* I have no clue what this does. Strips function prefixes?

**die_on_error**

Normally ExtUtils::ParseXS will terminate the program with an `exit(1)` after printing the details of the exception to STDERR via warn(). This can be awkward when it is used programmatically and not via xsubpp, so this option can be used to cause it to die instead by providing a true value. When not provided this defaults to the value of `$ExtUtils::ParseXS::DIE_ON_ERROR` which in turn defaults to false.

$pxs->report_error_count()

This method returns the number of [a certain kind of] errors encountered during processing of the XS file. The method may be called as a function (this is the legacy interface) and will then use a singleton as invocant.
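A minimal sketch of programmatic, non-fatal error handling that combines die_on_error and report_error_count(); the file names are our own examples:

```perl
use ExtUtils::ParseXS;

my $pxs = ExtUtils::ParseXS->new;

# With die_on_error => 1 a fatal parse problem die()s instead of exit(1),
# so it can be trapped with eval.
my $ok = eval {
    $pxs->process_file(
        filename     => 'Foo.xs',
        output       => 'Foo.c',
        die_on_error => 1,
    );
    1;
};
warn "processing failed: $@" unless $ok;
exit 1 if $pxs->report_error_count;
```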
AUTHOR
===

Based on xsubpp code, written by <NAME>.

Maintained by:

* <NAME>, <<EMAIL>>
* <NAME>, <<EMAIL>>
* <NAME>, <<EMAIL>>
* <NAME>, <<EMAIL>>

COPYRIGHT
===

Copyright 2002-2014 by <NAME>, <NAME> and other contributors. All rights reserved.

This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself.

Based on the `ExtUtils::xsubpp` code by Larry Wall and the Perl 5 Porters, which was released under the same license terms.

SEE ALSO
===

[perl](/pod/perl), ExtUtils::xsubpp, ExtUtils::MakeMaker, [perlxs](/pod/perlxs), [perlxstut](/pod/perlxstut).

NAME
===

ExtUtils::ParseXS::Constants - Initialization values for some globals

SYNOPSIS
===

```
use ExtUtils::ParseXS::Constants ();

$PrototypeRegexp = $ExtUtils::ParseXS::Constants::PrototypeRegexp;
```

DESCRIPTION
===

Initialization of certain non-subroutine variables in ExtUtils::ParseXS and some of its supporting packages has been moved into this package so that those values can be defined exactly once and then re-used in any package.

Nothing is exported. Use fully qualified variable names.

NAME
===

ExtUtils::ParseXS::Eval - Clean package to evaluate code in

SYNOPSIS
===

```
use ExtUtils::ParseXS::Eval;
my $rv = ExtUtils::ParseXS::Eval::eval_typemap_code(
  $parsexs_obj, "some Perl code"
);
```

SUBROUTINES
===

$pxs->eval_output_typemap_code($typemapcode, $other_hashref)
---

Sets up various bits of previously global state (formerly ExtUtils::ParseXS package variables) for eval'ing output typemap code that may refer to these variables.

Warns the contents of `$@` if any.

Not all these variables are necessarily considered "public" wrt. use in typemaps, so beware.
Variables set up from the ExtUtils::ParseXS object:

```
$Package
$ALIAS
$func_name
$Full_func_name
$pname
```

Variables set up from `$other_hashref`:

```
$var
$type
$ntype
$subtype
$arg
```

$pxs->eval_input_typemap_code($typemapcode, $other_hashref)
---

Sets up various bits of previously global state (formerly ExtUtils::ParseXS package variables) for eval'ing input typemap code that may refer to these variables.

Warns the contents of `$@` if any.

Not all these variables are necessarily considered "public" wrt. use in typemaps, so beware.

Variables set up from the ExtUtils::ParseXS object:

```
$Package
$ALIAS
$func_name
$Full_func_name
$pname
```

Variables set up from `$other_hashref`:

```
$var
$type
$ntype
$subtype
$num
$init
$printed_name
$arg
$argoff
```

TODO
===

Eventually, with better documentation and possibly some cleanup, this could be part of `ExtUtils::Typemaps`.
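For orientation, here is a typical INPUT typemap entry whose code gets eval'd with these placeholders in scope; it is the same T_NV mapping used in the ExtUtils::Typemaps synopsis further down, with $var, $type and $arg being the variables set up above:

```
INPUT
T_NV
    $var = ($type)SvNV($arg);
```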
NAME
===

ExtUtils::ParseXS::Utilities - Subroutines used with ExtUtils::ParseXS

SYNOPSIS
===

```
use ExtUtils::ParseXS::Utilities qw(
  standard_typemap_locations
  trim_whitespace
  C_string
  valid_proto_string
  process_typemaps
  map_type
  standard_XS_defs
  assign_func_args
  analyze_preprocessor_statements
  set_cond
  Warn
  blurt
  death
  check_conditional_preprocessor_statements
  escape_file_for_line_directive
  report_typemap_failure
);
```

SUBROUTINES
===

The following functions are not considered to be part of the public interface. They are documented here for the benefit of future maintainers of this module.

`standard_typemap_locations()`
---

* Purpose

Provide a list of filepaths where *typemap* files may be found. The filepaths -- relative paths to files (not just directory paths) -- appear in this list in lowest-to-highest priority.

The highest priority is to look in the current directory.

```
'typemap'
```

The second and third highest priorities are to look in the parent of the current directory and a directory called *lib/ExtUtils* underneath the parent directory.

```
'../typemap',
'../lib/ExtUtils/typemap',
```

The fourth through ninth highest priorities are to look in the corresponding grandparent, great-grandparent and great-great-grandparent directories.

```
'../../typemap',
'../../lib/ExtUtils/typemap',
'../../../typemap',
'../../../lib/ExtUtils/typemap',
'../../../../typemap',
'../../../../lib/ExtUtils/typemap',
```

The tenth and subsequent priorities are to look in directories named *ExtUtils* which are subdirectories of directories found in `@INC` -- *provided* a file named *typemap* actually exists in such a directory. Example:

```
'/usr/local/lib/perl5/5.10.1/ExtUtils/typemap',
```

However, these filepaths appear in the list returned by `standard_typemap_locations()` in reverse order, *i.e.*, lowest-to-highest.

```
'/usr/local/lib/perl5/5.10.1/ExtUtils/typemap',
'../../../../lib/ExtUtils/typemap',
'../../../../typemap',
'../../../lib/ExtUtils/typemap',
'../../../typemap',
'../../lib/ExtUtils/typemap',
'../../typemap',
'../lib/ExtUtils/typemap',
'../typemap',
'typemap'
```

* Arguments

```
my @stl = standard_typemap_locations( \@INC );
```

Reference to `@INC`.

* Return Value

Array holding list of directories to be searched for *typemap* files.

`trim_whitespace()`
---

* Purpose

Perform an in-place trimming of leading and trailing whitespace from the first argument provided to the function.

* Argument

```
trim_whitespace($arg);
```

* Return Value

None. Remember: this is an *in-place* modification of the argument.

`C_string()`
---

* Purpose

Escape backslashes (`\`) in prototype strings.

* Arguments

```
$ProtoThisXSUB = C_string($_);
```

String needing escaping.

* Return Value

Properly escaped string.

`valid_proto_string()`
---

* Purpose

Validate prototype string.

* Arguments

String needing checking.

* Return Value

Upon success, returns the same string passed as argument. Upon failure, returns `0`.

`process_typemaps()`
---

* Purpose

Process all typemap files.

* Arguments

```
my $typemaps_object = process_typemaps( $args{typemap}, $pwd );
```

List of two elements: `typemap` element from `%args`; current working directory.

* Return Value

Upon success, returns an [ExtUtils::Typemaps](/pod/ExtUtils::Typemaps) object.

`map_type()`
---

* Purpose

Performs a mapping at several places inside the `PARAGRAPH` loop.

* Arguments

```
$type = map_type($self, $type, $varname);
```

List of three arguments.

* Return Value

String holding augmented version of second argument.

`standard_XS_defs()`
---

* Purpose

Writes to the `.c` output file certain preprocessor directives and function headers needed in all such files.

* Arguments

None.

* Return Value

Returns true.

`assign_func_args()`
---

* Purpose

Perform assignment to the `func_args` attribute.

* Arguments

```
$string = assign_func_args($self, $argsref, $class);
```

List of three elements. Second is an array reference; third is a string.

* Return Value

String.

`analyze_preprocessor_statements()`
---

* Purpose

Within each function inside each Xsub, print to the *.c* output file certain preprocessor statements.

* Arguments

```
( $self, $XSS_work_idx, $BootCode_ref ) =
  analyze_preprocessor_statements(
    $self, $statement, $XSS_work_idx, $BootCode_ref
  );
```

List of four elements.

* Return Value

Modified values of three of the arguments passed to the function. In particular, the `XSStack` and `InitFileCode` attributes are modified.

`set_cond()`
---

* Purpose
* Arguments
* Return Value

`current_line_number()`
---

* Purpose

Figures out the current line number in the XS file.

* Arguments

`$self`

* Return Value

The current line number.

`Warn()`
---

* Purpose

Print warnings with line number details at the end.

* Arguments

List of text to output.

* Return Value

None.
`WarnHint()`
---

* Purpose

Prints a warning with line number details. The last argument is assumed to be a hint string.

* Arguments

List of strings to warn, followed by one argument representing a hint. If that argument is defined then it will be split on newlines and output line by line after the main warning.

* Return Value

None.

`_MsgHint()`
---

* Purpose

Constructs an exception message with line number details. The last argument is assumed to be a hint string.

* Arguments

List of strings to warn, followed by one argument representing a hint. If that argument is defined then it will be split on newlines and concatenated line by line (parenthesized) after the main message.

* Return Value

The constructed string.

`blurt()`
---

* Purpose
* Arguments
* Return Value

`death()`
---

* Purpose
* Arguments
* Return Value

`check_conditional_preprocessor_statements()`
---

* Purpose
* Arguments
* Return Value

`escape_file_for_line_directive()`
---

* Purpose

Escapes a given code source name (typically a file name but can also be a command that was read from) so that double-quotes and backslashes are escaped.

* Arguments

A string.

* Return Value

A string with escapes for double-quotes and backslashes.

`report_typemap_failure`
---

* Purpose

Do error reporting for missing typemaps.

* Arguments

The `ExtUtils::ParseXS` object.

An `ExtUtils::Typemaps` object.

The string that represents the C type that was not found in the typemap.

Optionally, the string `death` or `blurt` to choose whether the error is immediately fatal or not. Default: `blurt`

* Return Value

Returns nothing. Depending on the arguments, this may call `death` or `blurt`, the former of which is fatal.
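A tiny sketch of two of these helpers, based only on the contracts documented above (the sample strings are our own): trim_whitespace() modifies its argument in place, and C_string() returns a copy with backslashes escaped for embedding in generated C source.

```perl
use ExtUtils::ParseXS::Utilities qw(trim_whitespace C_string);

my $s = "   NO_OUTPUT int\t";
trim_whitespace($s);            # in-place: $s is now "NO_OUTPUT int"

my $proto   = '\@$;$';
my $escaped = C_string($proto); # backslashes doubled so the prototype can
                                # be emitted safely into C code
```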
NAME
===

ExtUtils::Typemaps - Read/Write/Modify Perl/XS typemap files

SYNOPSIS
===

```
# read/create file
my $typemap = ExtUtils::Typemaps->new(file => 'typemap');
# alternatively create an in-memory typemap
# $typemap = ExtUtils::Typemaps->new();
# alternatively create an in-memory typemap by parsing a string
# $typemap = ExtUtils::Typemaps->new(string => $sometypemap);

# add a mapping
$typemap->add_typemap(ctype => 'NV', xstype => 'T_NV');
$typemap->add_inputmap(
  xstype => 'T_NV', code => '$var = ($type)SvNV($arg);'
);
$typemap->add_outputmap(
  xstype => 'T_NV', code => 'sv_setnv($arg, (NV)$var);'
);
$typemap->add_string(string => $typemapstring); # will be parsed and merged

# remove a mapping (same for remove_typemap and remove_outputmap...)
$typemap->remove_inputmap(xstype => 'SomeType');

# save a typemap to a file
$typemap->write(file => 'anotherfile.map');

# merge the other typemap into this one
$typemap->merge(typemap => $another_typemap);
```

DESCRIPTION
===

This module can read, modify, create and write Perl XS typemap files. If you don't know what a typemap is, please consult the [perlxstut](/pod/perlxstut) and [perlxs](/pod/perlxs) manuals.

The module is not entirely round-trip safe: For example it currently simply strips all comments. The order of entries in the maps is, however, preserved.

We check for duplicate entries in the typemap, but do not check for missing `TYPEMAP` entries for `INPUTMAP` or `OUTPUTMAP` entries since these might be hidden in a different typemap.

METHODS
===

new
---

Returns a new typemap object. Takes an optional `file` parameter. If set, the given file will be read. If the file doesn't exist, an empty typemap is returned.

Alternatively, if the `string` parameter is given, the supplied string will be parsed instead of a file.

file
---

Get/set the file that the typemap is written to when the `write` method is called.

add_typemap
---

Add a `TYPEMAP` entry to the typemap. Required named arguments: The `ctype` (e.g. `ctype => 'double'`) and the `xstype` (e.g. `xstype => 'T_NV'`).

Optional named arguments: `replace => 1` forces removal/replacement of existing `TYPEMAP` entries of the same `ctype`. `skip => 1` triggers a *"first come first serve"* logic by which new entries that conflict with existing entries are silently ignored.

As an alternative to the named parameters usage, you may pass in an `ExtUtils::Typemaps::Type` object as first argument, a copy of which will be added to the typemap. In that case, only the `replace` or `skip` named parameters may be used after the object.
Example:

```
$map->add_typemap($type_obj, replace => 1);
```

add_inputmap
---

Add an `INPUT` entry to the typemap. Required named arguments: The `xstype` (e.g. `xstype => 'T_NV'`) and the `code` to associate with it for input.

Optional named arguments: `replace => 1` forces removal/replacement of existing `INPUT` entries of the same `xstype`. `skip => 1` triggers a *"first come first serve"* logic by which new entries that conflict with existing entries are silently ignored.

As an alternative to the named parameters usage, you may pass in an `ExtUtils::Typemaps::InputMap` object as first argument, a copy of which will be added to the typemap. In that case, only the `replace` or `skip` named parameters may be used after the object.

Example:

```
$map->add_inputmap($type_obj, replace => 1);
```

add_outputmap
---

Add an `OUTPUT` entry to the typemap. Works exactly the same as `add_inputmap`.

add_string
---

Parses a string as a typemap and merges it into the typemap object. Required named argument: `string` to specify the string to parse.

remove_typemap
---

Removes a `TYPEMAP` entry from the typemap. Required named argument: `ctype` to specify the entry to remove from the typemap. Alternatively, you may pass a single `ExtUtils::Typemaps::Type` object.

remove_inputmap
---

Removes an `INPUT` entry from the typemap. Required named argument: `xstype` to specify the entry to remove from the typemap. Alternatively, you may pass a single `ExtUtils::Typemaps::InputMap` object.

remove_outputmap
---

Removes an `OUTPUT` entry from the typemap. Required named argument: `xstype` to specify the entry to remove from the typemap. Alternatively, you may pass a single `ExtUtils::Typemaps::OutputMap` object.

get_typemap
---

Fetches an entry of the TYPEMAP section of the typemap. Mandatory named arguments: The `ctype` of the entry. Returns the `ExtUtils::Typemaps::Type` object for the entry if found.

get_inputmap
---

Fetches an entry of the INPUT section of the typemap. Mandatory named arguments: The `xstype` of the entry or the `ctype` of the typemap that can be used to find the `xstype`. To wit, the following pieces of code are equivalent:

```
my $type = $typemap->get_typemap(ctype => $ctype);
my $input_map = $typemap->get_inputmap(xstype => $type->xstype);

my $input_map = $typemap->get_inputmap(ctype => $ctype);
```

Returns the `ExtUtils::Typemaps::InputMap` object for the entry if found.

get_outputmap
---

Fetches an entry of the OUTPUT section of the typemap. Mandatory named arguments: The `xstype` of the entry or the `ctype` of the typemap that can be used to resolve the `xstype`. (See above for an example.)

Returns the `ExtUtils::Typemaps::OutputMap` object for the entry if found.

write
---

Write the typemap to a file. Optionally takes a `file` argument. If given, the typemap will be written to the specified file. If not, the typemap is written to the currently stored file name (see ["file"](#file) above; this defaults to the file it was read from, if any).

as_string
---

Generates and returns the string form of the typemap.

as_embedded_typemap
---

Generates and returns the string form of the typemap with the appropriate prefix around it for verbatim inclusion into an XS file as an embedded typemap. This will return a string like

```
TYPEMAP: <<END_OF_TYPEMAP
... typemap here (see as_string) ...
END_OF_TYPEMAP
```

The method takes care not to use a HERE-doc end marker that appears in the typemap string itself.

merge
---

Merges a given typemap into the object.
Note that a failed merge operation leaves the object in an inconsistent state, so clone it first if necessary.

Mandatory named arguments: Either `typemap => $another_typemap_obj` or `file => $path_to_typemap_file`, but not both.

Optional arguments: `replace => 1` to force replacement of existing typemap entries without warning, or `skip => 1` to skip entries that exist already in the typemap.

is_empty
---

Returns a bool indicating whether this typemap is entirely empty.

list_mapped_ctypes
---

Returns a list of the C types that are mappable by this typemap object.

_get_typemap_hash
---

Returns a hash mapping the C types to the XS types:

```
{
  'char **' => 'T_PACKEDARRAY',
  'bool_t' => 'T_IV',
  'AV *' => 'T_AVREF',
  'InputStream' => 'T_IN',
  'double' => 'T_DOUBLE',
  # ...
}
```

This is documented because it is used by `ExtUtils::ParseXS`, but it's not intended for general consumption. May be removed at any time.

_get_inputmap_hash
---

Returns a hash mapping the XS types (identifiers) to the corresponding INPUT code:

```
{
  'T_CALLBACK' => ' $var = make_perl_cb_$type($arg) ',
  'T_OUT' => ' $var = IoOFP(sv_2io($arg)) ',
  'T_REF_IV_PTR' => ' if (sv_isa($arg, \\"${ntype}\\")) { # ...
}
```

This is documented because it is used by `ExtUtils::ParseXS`, but it's not intended for general consumption. May be removed at any time.

_get_outputmap_hash
---

Returns a hash mapping the XS types (identifiers) to the corresponding OUTPUT code:

```
{
  'T_CALLBACK' => ' sv_setpvn($arg, $var.context.value().chp(),
           $var.context.value().size()); ',
  'T_OUT' => ' {
           GV *gv = (GV *)sv_newmortal();
           gv_init_pvn(gv, gv_stashpvs("$Package",1), "__ANONIO__",10,0);
           if ( do_open(gv, "+>&", 3, FALSE, 0, 0, $var) )
               sv_setsv( $arg, sv_bless(newRV((SV*)gv),
                                        gv_stashpv("$Package",1)) );
           else
               $arg = &PL_sv_undef;
        } ',
  # ...
}
```

This is documented because it is used by `ExtUtils::ParseXS`, but it's not intended for general consumption. May be removed at any time.

_get_prototype_hash
---

Returns a hash mapping the C types of the typemap to their corresponding prototypes.

```
{
  'char **' => '$',
  'bool_t' => '$',
  'AV *' => '$',
  'InputStream' => '$',
  'double' => '$',
  # ...
}
```

This is documented because it is used by `ExtUtils::ParseXS`, but it's not intended for general consumption. May be removed at any time.

clone
---

Creates and returns a clone of a full typemaps object.

Takes named parameters: If `shallow` is true, the clone will share the actual individual type/input/outputmap objects, but not share their storage. Use with caution. Without `shallow`, the clone will be fully independent.

tidy_type
---

Function to (heuristically) canonicalize a C type. Works to some degree with C++ types.

```
$halfway_canonical_type = tidy_type($ctype);
```

Moved from `ExtUtils::ParseXS`.

CAVEATS
===

Inherits some evil code from `ExtUtils::ParseXS`.

SEE ALSO
===

The parser is heavily inspired from the one in [ExtUtils::ParseXS](/pod/distribution/ExtUtils-ParseXS/lib/ExtUtils/ParseXS.pod).

For details on typemaps: [perlxstut](/pod/perlxstut), [perlxs](/pod/perlxs).

AUTHOR
===

<NAME> `<<EMAIL>>`

COPYRIGHT & LICENSE
===

Copyright 2009, 2010, 2011, 2012, 2013 <NAME>

This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
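A short sketch tying several of these methods together: build a typemap from a string, inspect it, and dump it for embedding into XS. Only methods documented above are used; the one-entry typemap string is our own example.

```perl
use ExtUtils::Typemaps;

my $map = ExtUtils::Typemaps->new(string => <<'END_TYPEMAP');
TYPEMAP
double  T_DOUBLE
END_TYPEMAP

if (!$map->is_empty) {
  print "mapped C types: ", join(", ", $map->list_mapped_ctypes), "\n";
}
print $map->as_embedded_typemap;  # TYPEMAP: <<END_OF_TYPEMAP ... heredoc form
```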
NAME
===

ExtUtils::Typemaps::Cmd - Quick commands for handling typemaps

SYNOPSIS
===

From XS:

```
INCLUDE_COMMAND: $^X -MExtUtils::Typemaps::Cmd \
                 -e "print embeddable_typemap(q{Excommunicated})"
```

Loads `ExtUtils::Typemaps::Excommunicated`, instantiates an object, and dumps it as an embeddable typemap for use directly in your XS file.

DESCRIPTION
===

This is a helper module for [ExtUtils::Typemaps](/pod/ExtUtils::Typemaps) for quick one-liners, specifically for inclusion of shared typemaps that live on CPAN into an XS file (see SYNOPSIS).

For this reason, the following functions are exported by default:

EXPORTED FUNCTIONS
===

embeddable_typemap
---

Given a list of identifiers, `embeddable_typemap` tries to load typemaps from a file of the given name(s), or from a module that is an `ExtUtils::Typemaps` subclass. Returns a string representation of the merged typemaps that can be included verbatim into XS. Example:

```
print embeddable_typemap(
  "Excommunicated", "ExtUtils::Typemaps::Basic", "./typemap"
);
```

This will try to load a module `ExtUtils::Typemaps::Excommunicated` and use it as an `ExtUtils::Typemaps` subclass. If that fails, it'll try loading `Excommunicated` as a module, and if that fails too, it'll try to read a file called *Excommunicated*. It'll work similarly for the second argument, but the third will be loaded as a file first.

After loading all typemap files or modules, it will merge them in the specified order and dump the result as an embeddable typemap.

SEE ALSO
===

[ExtUtils::Typemaps](/pod/ExtUtils::Typemaps)

[perlxs](/pod/perlxs)

AUTHOR
===

<NAME> `<<EMAIL>>`

COPYRIGHT & LICENSE
===

Copyright 2012 <NAME>

This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.

NAME
===

ExtUtils::Typemaps::InputMap - Entry in the INPUT section of a typemap

SYNOPSIS
===

```
use ExtUtils::Typemaps;
...
my $input = $typemap->get_inputmap(xstype => 'T_NV');
my $code = $input->code();
$input->code("...");
```

DESCRIPTION
===

Refer to [ExtUtils::Typemaps](/pod/ExtUtils::Typemaps) for details.

METHODS
===

new
---

Requires `xstype` and `code` parameters.

code
---

Returns or sets the INPUT mapping code for this entry.

xstype
---

Returns the name of the XS type of the INPUT map.
cleaned_code
---

Returns a cleaned-up copy of the code to which certain transformations have been applied to make it more ANSI compliant.

SEE ALSO
===

[ExtUtils::Typemaps](/pod/ExtUtils::Typemaps)

AUTHOR
===

<NAME> `<<EMAIL>>`

COPYRIGHT & LICENSE
===

Copyright 2009, 2010, 2011, 2012 <NAME>

This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.

NAME
===

ExtUtils::Typemaps::OutputMap - Entry in the OUTPUT section of a typemap

SYNOPSIS
===

```
use ExtUtils::Typemaps;
...
my $output = $typemap->get_outputmap(xstype => 'T_NV');
my $code = $output->code();
$output->code("...");
```

DESCRIPTION
===

Refer to [ExtUtils::Typemaps](/pod/ExtUtils::Typemaps) for details.

METHODS
===

new
---

Requires `xstype` and `code` parameters.

code
---

Returns or sets the OUTPUT mapping code for this entry.

xstype
---

Returns the name of the XS type of the OUTPUT map.

cleaned_code
---

Returns a cleaned-up copy of the code to which certain transformations have been applied to make it more ANSI compliant.

targetable
---

This is an obscure but effective optimization that used to live in `ExtUtils::ParseXS` directly. Not implementing it should never result in incorrect use of typemaps, just less efficient code.

In a nutshell, this will check whether the output code involves calling `sv_setiv`, `sv_setuv`, `sv_setnv`, `sv_setpv` or `sv_setpvn` to set the special `$arg` placeholder to a new value **AT THE END OF THE OUTPUT CODE**. If that is the case, the code is eligible for using the `TARG`-related macros to optimize this. Thus the name of the method: `targetable`.

If this optimization is applicable, `ExtUtils::ParseXS` will emit a `dXSTARG;` definition at the start of the generated XSUB code, and type (see below) dependent code to set `TARG` and push it on the stack at the end of the generated XSUB code.

If the optimization can not be applied, this returns undef. If it can be applied, this method returns a hash reference containing the following information:

```
type:      Any of the characters i, u, n, p
with_size: Bool indicating whether this is the sv_setpvn variant
what:      The code that actually evaluates to the output scalar
what_size: If "with_size", this has the string length
           (as code, not constant, including leading comma)
```

SEE ALSO
===

[ExtUtils::Typemaps](/pod/ExtUtils::Typemaps)

AUTHOR
===

<NAME> `<<EMAIL>>`

COPYRIGHT & LICENSE
===

Copyright 2009, 2010, 2011, 2012 <NAME>

This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
NAME
===

ExtUtils::Typemaps::Type - Entry in the TYPEMAP section of a typemap

SYNOPSIS
===

```
use ExtUtils::Typemaps;
...
my $type = $typemap->get_typemap(ctype => 'char*');
my $input = $typemap->get_inputmap(xstype => $type->xstype);
```

DESCRIPTION
===

Refer to [ExtUtils::Typemaps](/pod/ExtUtils::Typemaps) for details. The object associates a `ctype` with an `xstype`, which is the index into the in- and output mapping tables.

METHODS
===

new
---

Requires `xstype` and `ctype` parameters. Optionally takes a `prototype` parameter.

proto
---

Returns or sets the prototype.

xstype
---

Returns the name of the XS type that this C type is associated to.

ctype
---

Returns the name of the C type as it was set on construction.

tidy_ctype
---

Returns the canonicalized name of the C type.

SEE ALSO
===

[ExtUtils::Typemaps](/pod/ExtUtils::Typemaps)

AUTHOR
===

<NAME> `<<EMAIL>>`

COPYRIGHT & LICENSE
===

Copyright 2009, 2010, 2011, 2012 <NAME>

This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.

```
package ExtUtils::ParseXS::CountLines;
use strict;

our $VERSION = '3.51';

our $SECTION_END_MARKER;

sub TIEHANDLE {
  my ($class, $cfile, $fh) = @_;
  $cfile =~ s/\\/\\\\/g;
  $cfile =~ s/"/\\"/g;
  $SECTION_END_MARKER = qq{#line --- "$cfile"};

  return bless {
    buffer  => '',
    fh      => $fh,
    line_no => 1,
  }, $class;
}

sub PRINT {
  my $self = shift;
  for (@_) {
    $self->{buffer} .= $_;
    while ($self->{buffer} =~ s/^([^\n]*\n)//) {
      my $line = $1;
      ++$self->{line_no};
      $line =~ s|^\#line\s+---(?=\s)|#line $self->{line_no}|;
      print {$self->{fh}} $line;
    }
  }
}

sub PRINTF {
  my $self = shift;
  my $fmt = shift;
  $self->PRINT(sprintf($fmt, @_));
}

sub DESTROY {
  # Not necessary if we're careful to end with a "\n"
  my $self = shift;
  print {$self->{fh}} $self->{buffer};
}

sub UNTIE {
  # This sub does nothing, but is necessary for references to be released.
}

sub end_marker {
  return $SECTION_END_MARKER;
}

1;
```

```
Revision history for Perl extension ExtUtils::ParseXS.
3.51 - Tue May 9 09:32:04 2023 AEST
  - Initialize $self correctly in EU::PXS::Utilities::death()
  - C++ builds: avoid generating C<< extern "C" extern "C" >>

3.50 - Mon May 8 23:12:28 2023 CEST
  - Silence warnings about unreached code in generated XS code
  - Correct colon translation of $type in OUTPUT section
  - Make versions in ExtUtils-ParseXS consistent

3.49 - Wed Nov 16 11:14:55 2022 CET
  - Disable alias value collision warnings by default

3.48 - Tue Nov 8 17:44:11 2022 CET
  - handle #else and #endif without blank line prefixes
  - better support for duplicate ALIASes
  - allow symbolic alias of default function
  - add support for elifdef and elifndef

3.47 - Sat Oct 22 10:36:38 2022 CET
  - fix ExtUtils::ParseXS compatibility with perl < 5.8.8

3.45 - Fri Mar 4 22:42:03 2022
  - GH #19320: Fix OVERLOAD and FALLBACK handling.

3.44 - Thu Jan 6 23:49:25 2022
  - GH #19054: Always XSprePUSH when producing an output list.
  - Use more descriptive variable names.
  - Fix plan/skip in test file 002-more.t.

3.43 - Wed Mar 24 15:44:08 2021 CET
  - Use PERL_VERSION_LE instead of 5.33+ PERL_VERSION_LT.
  - Fix error message bug.

3.42 - Tue Nov 24 21:42:05 2020 CET
  - Restore compatibility with old versions that made use of the "errors"
    function, which was renamed to "report_error_count".

3.41 - Wed Aug 12 19:39:04 2020 CET
  - Use absolute paths in tests on all platforms.
  - Use PERL_VERSION compare macro.

3.40 - Wed Dec 5 05:35:19 2018 CET
  - RT #133654: Don't include OUTLIST parameters in the prototype.

3.39 - Mon Mar 5 17:46:41 2018 CET
  - RT #132935: Correctly check VERSIONs.

3.38 - Fri Feb 9 12:02:34 2018 CET
  - Correct name of variable 'ALIAS' (not 'Alias') in documentation.
  - Add PERL_REENTRANT for XS modules (get the reentrant versions of libc
    functions automatically without declaring as PERL_CORE or PERL_EXT).

3.37 - Mon Dec 11 01:54:44 2017 CET
  - Update documentation to avoid newGVgen.

3.36 - Tue Nov 14 09:45:55 2017 CET
  - Make generated code avoid warnings about the "items" variable being unused
  - Avoid some unused-variable warnings generated by XS code in the test suite

3.35 - Mon Jul 31 17:50:00 CET 2017
  - Fix ExtUtils-ParseXS/t/*.t that needed '.' in @INC (<NAME>)
  - Remove impediment to compiling under C++11 (<NAME>)
  - Make build reproducible (<NAME>)
  - (perl #127834) remove . from the end of @INC if complex modules are
    loaded (<NAME>)
  - Replace :: with __ in THIS like it's done for parameters/return values
    (<NAME>)

3.30 - Mon Aug 31 10:35:00 CET 2015
  - Promote to stable CPAN release.

3.29_01 - Mon Aug 10 10:45:00 CET 2015
  - Support added for XS handshake API introduced in 5.21.6.
  - Backported S_croak_xs_usage optimized on threaded builds
  - Fix INCLUDE_COMMAND $^X for Perl In Space
  - Remove 'use lib' left over from refactoring
  - Document + improve ancient optimization in ParseXS
  - Improve RETVAL code gen

3.24 - Wed Mar 5 18:20:00 CET 2014
  - Native Android build fixes
  - More lenient syntax for embedded TYPEMAP blocks in XS: a trailing
    semicolon will not be required for the block terminator.
  - Code cleanup.

3.22 - Thu Aug 29 19:30:00 CET 2013
  - Fix parallel testing crashes.
  - Explicitly require new-enough Exporter.

3.21 - Fri Aug 9 19:08:00 CET 2013
  - stop "sv_2mortal(&PL_sv_yes)" and "(void)sv_newmortal()" for immortal
    typemap entries [perl #116152]
  - Deterministic C output (fix for hash randomization in 5.18).

3.18_04 - Fri Jun 20 17:47:00 CET 2013
  - Fix targetable size detection (& better tests)
  - Assorted cleanup and refactoring.
3.18_03 - Fri Apr 19 18:40:00 CET 2013
  - Heuristic (and flawed) type canonicalization for templated C++ types.
  - More tests ported from core.

3.18_02 - Mon Apr 15 07:30:00 CET 2013
  - ExtUtils::ParseXS no longer uses global state (if using OO interface).
  - New "real" OO interface.

3.18_01 - Thu Apr 11 19:17:00 CET 2013
  - ExtUtils::Typemaps gains a clone method.

3.18 - Mon Nov 19 07:35:00 CET 2012
  - Restore portability to Perl 5.6, which was lost at EU-PXS 3.00.
  - [perl #112776] avoid warning on an initialized non-parameter
  - Only increment PL_amagic_generation before 5.9

3.15 - Thu Feb 2 08:12:00 CET 2012
  - Fix version for PAUSE indexer.

3.14 - Wed Feb 1 18:22:00 CET 2012
  - Promote to stable release.
  - Proper repository and bugtracker info in META.json.

3.13_01 - Sat Jan 29 12:45:00 CET 2012
  - ExtUtils::Typemaps:
    => Embedded typemap dumping: A method which will produce the verbatim
       string for inclusion in XS.
    => Introducing ExtUtils::Typemaps::Cmd, a helper module which can produce
       embedded typemap strings via simple one-liners. Useful for including
       typemaps from other modules in XS code with INCLUDE_COMMAND.
       See "perldoc ExtUtils::Typemaps::Cmd".
  - ExtUtils::ParseXS:
    => Bugfix: Used to have parsing problems with embedded typemaps
       occasionally.
    => Better error messages on typemap-related issues. If no typemap was
       found for a given C type, list all mapped C types so that the user
       hopefully spots his mistake easily.

3.11 - Thu Dec 29 17:55:00 CET 2011
  - Version/distribution fixes.

3.09 - Wed Dec 28 18:48:00 CET 2011
  - Escape double quotes of file names/commands in #line directives.

3.08 - Mon Dec 19 18:03:00 CET 2011
  - Silence undefined-value-in-addition warning (Nothing serious, just
    happened sometimes when reporting line numbers for errors. But warning
    during build process.)

3.07 - Wed Dec 7 14:10:00 CET 2011
  - Fix inconsistent versioning in 3.06.

3.06 - Fri Dec 2 08:10:00 CET 2011
  - Fix Cygwin issues [<NAME>]
    avoid conflicting static / dllexport on legacy perls too
    This probably fixes rt.cpan.org 72313 and 71964.
    (3928a66ad4bd8aee704eda1942b7877c0ff1ab2c in core)
  - Convert ` to ' [James Keenan]
    55bee391aeff3c3b8d22fa4ce5491ee9440028aa and
    6dfee1ec62c64d7afe8ced4650596dd9e7f71a63 in core
  - Avoid some test-time warnings [Zefram]
    97bae9c59cd181b3b54061213ec1fdce0ccb30d4 in core

3.05 - Wed Oct 5 08:14:00 CET 2011
  - No functional changes, promoted to stable release.

3.04_04 - Mon Sep 12 08:12:00 CET 2011
  - Simplify generated XS code by emitting a compatibility version of dVAR.
    [<NAME>]
  - Fixed "INCLUDE: $cmd |", CPAN RT #70213

3.04_03 - Sun Sep 4 18:49:00 CET 2011
  - By #defining PERL_EUPXS_ALWAYS_EXPORT or PERL_EUPXS_NEVER_EXPORT early
    in your XS code, you can force ExtUtils::ParseXS to always or never
    export XSUB symbols. This has no effect on boot_* symbols since those
    must be exported.

3.04_02 - Sat Sep 3 15:28:00 CET 2011
  - Don't put null characters into the generated source file when -except
    is used; write the '\0' escape sequence properly instead.
    [<NAME>]

3.04_01 - Sun Aug 28 17:50:00 CET 2011
  - The XSUB.h changes to make XS(name) use XS_INTERNAL(name) by default
    (which were in the 5.15.2 dev release of perl) have been reverted since
    too many CPAN modules expect to be able to refer to XSUBs declared with
    XS(name). Instead, ExtUtils::ParseXS will define a copy of the
    XS_INTERNAL/XS_EXTERNAL macros as necessary going back to perl 5.10.0
    (which is the oldest perl I had for testing). By default,
    ExtUtils::ParseXS will use XS_INTERNAL(name) instead of XS(name).
3.04 - Thu Aug 25 08:20:00 CET 2011 - Stable release based on 3.03_03, no functional changes. 3.03_03 - Wed Aug 24 19:43:00 CET 2011 - Try to fix regression for input-typemap override in XS argument list. (CPAN RT #70448) - Explicit versions in submodules to fail early if something goes wrong. 3.03_02 - Sun Aug 21 13:19:00 CET 2011 - Properly strip trailing semicolons form inputmaps. These could trigger warnings (errors in strict C89 compilers) due to additional semicolons being interpreted as empty statements. [<NAME>, <NAME>, <NAME>] - Now detects and throws a warning if there is a CODE section using RETVAL, but no OUTPUT section. [CPAN RT #69536] - Uses the explicit XS_EXTERNAL macro (from perl 5.15.2 and newer) for XSUBs that need to be exported. Defines XS_EXTERNAL to fall back to XS where that is not available. - Introduces new EXPORT_XSUB_SYMBOLS XS keyword that forces exported XSUB symbols. It's a no-op on perls before 5.15.2. 3.03 - Thu Aug 11 08:24:00 CET 2011 - Test fix: Try all @INC-derived typemap locations. (CPAN RT #70047) [<NAME>] 3.02 - Thu Aug 4 18:19:00 CET 2011 - Test fix: Use File::Spec->catfile instead of catdir where appropriate. 3.01 - Thu Aug 4 17:51:00 CET 2011 - No significant changes from 3.00_05. 3.00_05 - Wed Jul 27 22:54:00 CET 2011 - Define PERL_UNUSED_ARG for pre-3.8.9 perls. This should fix the tests on those perls. 3.00_04 - Wed Jul 27 22:22:00 CET 2011 - Require perl 5.8.1. - Patches from CPAN RT #53938, #61908 Both of these are attempts to fix win32 problems: Bug #61908 for ExtUtils-ParseXS: MSWin compilers and back-slashed paths Bug #53938 for ExtUtils-ParseXS: MinGW Broken after 2.21 3.00_03 - Fri Jul 22 20:13:00 CET 2011 - Add some diagnostics when xsubpp fails to load a current-enough version of ExtUtils::ParseXS. [Steffen Mueller] - Add a check to Makefile.PL that scans @INC to determine whether the new xsubpp will be shadowed by another, existing xsubpp and warn the user vehemently. [Steffen Mueller] 3.00_02 - Thu Jul 14 18:00:00 CET 2011 - Move script/xsubpp back to lib/ExtUtils/xsubpp The original move caused old xsubpp's to be used. 3.00_01 - Tue Jul 12 22:00:00 CET 2011 - Major refactoring of the whole code base. It finally runs under 'use strict' for the first time! [<NAME>, <NAME>ueller] - Typemaps can now be embedded into XS code using a here-doc like syntax and the new "TYPEMAP:" XS keyword. [Steffen Mueller] - Move typemap handling code to ExtUtils::Typemaps with full object-oriented goodness. [Steffen Mueller] - Check API compatibility when loading xs modules. If on a new-enough perl, add the XS_APIVERSION_BOOTCHECK macro to the _boot function of every XS module to compare it against the API version the module has been compiled against. If the versions do not match, an exception is thrown. [<NAME>] - Fixed compiler warnings in XS. 
    [Zefram]
  - Spell-check [<NAME>]

2.2206 - Sun Jul 4 15:43:21 EDT 2010
  Bug fixes:
  - Make xsubpp accept the _ prototype (RT#57157) [<NAME>]
  - INCLUDE_COMMAND portability fixes for VMS (RT#58181) [Craig Berry]
  - INCLUDE_COMMAND fixes to detect non-zero exit codes (RT#52873)
    [Steffen Mueller]

2.2205 - Wed Mar 10 18:15:36 EST 2010
  Other:
  - No longer ships with Build.PL to avoid creating a circular dependency

2.2204 - Wed Mar 10 14:23:52 EST 2010
  Other:
  - Downgraded warnings on using INCLUDE with a command from "deprecated" to
    "discouraged" and limited it to the case where the command includes
    "perl" [Steffen Mueller]

2.2203 - Thu Feb 11 14:00:51 EST 2010
  Bug fixes:
  - Build.PL was not including ExtUtils/xsubpp for installation. Fixed by
    subclassing M::B::find_pm_files to include it [David Golden]

2.2202 - Wed Jan 27 15:04:59 EST 2010
  Bug fixes:
  - The fix to IN/OUT/OUTLIST was itself broken and is now fixed.
    [Reported by <NAME>; fix suggested by <NAME>]
    We apologize for the fault in the regex. Those responsible have been
    sacked.

2.2201 Mon Jan 25 16:12:05 EST 2010
  Bug fixes:
  - IN/OUT/OUTLIST, etc. were broken due to a bad regexp. [Simon Cozens]

2.22 - Mon Jan 11 15:00:07 EST 2010
  No changes from 2.21_02

2.21_02 - Sat Dec 19 10:55:41 EST 2009
  Bug fixes:
  - fixed bugs and added tests for INCLUDE_COMMAND [Steffen Mueller]

2.21_01 - Sat Dec 19 07:22:44 EST 2009
  Enhancements:
  - New 'INCLUDE_COMMAND' directive [Steffen Mueller]
  Bug fixes:
  - Workaround for empty newXS macro found in P5NCI [Goro Fuji]

2.21 - Mon Oct 5 11:17:53 EDT 2009
  Bug fixes:
  - Adds full path in INCLUDE #line directives (RT#50198) [patch by "spb"]
  Other:
  - Updated copyright and maintainer list

2.20_07 - Sat Oct 3 11:26:55 EDT 2009
  Bug fixes:
  - Use "char* file" for perl < 5.9, not "char[] file"; fixes mod_perl
    breakage due to prior attempts to fix RT#48104 [<NAME>]

2.20_06 - Fri Oct 2 23:45:45 EDT 2009
  Bug fixes:
  - Added t/typemap to fix broken test on perl 5.6.2 [<NAME>]
  - More prototype fixes for older perls [Goro Fuji]
  - Avoid "const char *" in test files as it breaks on 5.6.2 [Goro Fuji]
  Other:
  - Merged changes from 2.2004 maintenance branch (see 2.200401 to 2.200403)
    [<NAME>]

2.20_05 - Sat Aug 22 21:46:56 EDT 2009
  Bug fixes:
  - Fix prototype related bugs [Goro Fuji]
  - Fix the SCOPE keyword [Goro Fuji]

2.200403 - Fri Oct 2 02:01:58 EDT 2009
  Other:
  - Removed PERL_CORE specific @INC manipulation (no longer needed)
    [Nicholas Clark]
  - Changed hard-coded $^H manipulation in favor of "use re 'eval'"
    [Nicholas Clark]

2.200402 - Fri Oct 2 01:26:40 EDT 2009
  Bug fixes:
  - UNITCHECK subroutines were not being called (detected in ext/XS-APItest
    in Perl blead) [reported by <NAME>, patched by David Golden]

2.200401 - Mon Sep 14 22:26:03 EDT 2009
  - No changes from 2.20_04.

2.20_04 - Mon Aug 10 11:18:47 EDT 2009
  Bug fixes:
  - Changed install_dirs to 'core' for 5.8.9 as well (RT#48474)
  - Removed t/bugs.t until there is better C++ support in ExtUtils::CBuilder
  Other:
  - Updated repository URL in META file

2.20_03 - Thu Jul 23 23:14:50 EDT 2009
  Bug fixes:
  - Fixed "const char *" errors for 5.8.8 (and older) (RT#48104) [Vincent Pit]
  - Added newline before a preprocessor directive (RT#30673) [patch by hjp]

2.2002 - Sat Jul 18 17:22:27 EDT 2009
  Bug fixes:
  - Fix Makefile.PL installdirs for older perls

2.20_01 - Wed Jul 8 12:12:47 EDT 2009
  - Fix XSUsage prototypes for testing [Jan Dubois]

2.20 - Wed Jul 1 13:42:11 EDT 2009
  - No changes from 2.19_04

2.19_04 - Mon Jun 29 11:49:12 EDT 2009
  - Changed tests to use Test::More and added it to prereqs
  - Some tests skip if no compiler or if no dynamic loading
  - INTERFACE keyword tests skipped for perl < 5.8

2.19_03 - Sat Jun 27 22:51:18 EDT 2009
  - Released to see updated results from smoke testers
  - Fix minor doc typo pulled from blead

2.19_02 - Wed Aug 6 22:18:33 2008
  - Fix the usage reports to consistently report package name as well as sub
    name across ALIAS, INTERFACE and regular XSUBS. [Robert May]
  - Cleaned up a warning with -Wwrite-strings that gets passed into every
    parsed XS file. [Steve Peters]
  - Allow (pedantically correct) C pre-processor comments in the code
    snippets of typemap files. [Nicholas Clark]

2.19 - Sun Feb 17 14:27:40 2008
  - Fixed the treatment of the OVERLOAD: keyword, which was causing a C
    compile error. [Toshiyuki Yamato]

2.18 - Mon Jan 29 20:56:36 2007
  - Added some UNITCHECK stuff, which (I think) makes XS code able to do
    UNITCHECK blocks. [Nicholas Clark]
  - Changed 'use re "eval";' to 'BEGIN { $^H |= 0x00200000 };' so we can
    compile re.xs in bleadperl. [Yves Orton]
  - Fix an undefined-variable warning related to 'inout' parameter
    processing.

2.17 - Mon Nov 20 17:07:27 2006
  - Stacked $filepathname to make #line directives in #INCLUDEs work.
    [Nicholas Clark]
  - Sprinkled dVAR in with dXSARGS, for God-(Jarkko)-knows-what reason.
    [Jarkko Hietaniemi]
  - Use printf-style formats in Perl_croak() for some significant savings in
    number of distinct constant strings in the linked binaries we create.
    [<NAME>]
  - Don't use 'class' as a variable name in the t/XSTest.xs module, since
    that's a keyword in C++. [Jarkko Hietaniemi]

2.16 Fri Sep 15 22:33:24 CDT 2006
  - Fix a problem with PREFIX not working inside INTERFACE sections.
    [Salvador Fandiño]

2.15 Mon Oct 10 11:02:13 EDT 2005
  - I accidentally left out a README from the distribution. Now it's
    auto-created from the main documentation in ExtUtils/ParseXS.pm.

2.14 Sat Oct 8 21:49:15 EDT 2005
  - The filehandle for the .xs file was never being properly closed, and now
    it is. This was causing some Win32 problems with Module::Build's tests,
    which create a .xs file, process it with ParseXS, and then try to remove
    it. [Spotted by <NAME>]

2.13 Mon Oct 3 21:59:06 CDT 2005
  - Integrate a cleanup-related change from bleadperl that somehow never got
    into this copy. [Steve Hay]

2.12 Wed Aug 24 20:03:09 CDT 2005
  - On Win32, there was a DLL file we create during testing that we couldn't
    delete unless we closed it first, so testing failed when the deletion was
    attempted. This should now work (provided the version of perl is high
    enough to have DynaLoader::dl_unload_file() - I'm not sure what will
    happen otherwise). [Steve Hay]
  - Fix a spurious warning during testing about a variable that's used before
    it's initialized. [Steve Hay]

2.11 Mon Jun 13 23:00:23 CDT 2005
  - Make some variables global, to avoid some "will not stay shared" warnings
    at compile time. [<NAME>]

2.10 Mon May 30 21:29:44 CDT 2005
  - This module is being integrated into the perl core; the regression tests
    will now work properly when run as part of the core build.
    [Yitzchak Scott-Thoennes]
  - Added the ability to create output files with a suffix other than ".c",
    via the new "csuffix" option. This gets the module working on Symbian.
    [Jarkko Hietaniemi]
  - Added the ability to put 'extern "C"' declarations in front of
    prototypes. [Jarkko Hietaniemi]

2.09 Sun Mar 27 11:11:49 CST 2005
  - Integrated change #18270 from the perl core, which fixed a problem in
    which xsubpp can make nested comments in C code (which is bad).
    [Nicholas Clark]
  - When no "MODULE ... PACKAGE ... PREFIX" line is found, it's now still a
    fatal error for ParseXS, but we exit with status 0, which is what the old
    xsubpp did and seems to work best with some modules like Win32::NetAdmin.
    See RT ticket 11472. [Steve Hay]

2.08 Fri Feb 20 21:41:22 CST 2004
  - Fixed a problem with backslashes in file paths (e.g. C:\Foo\Bar.xs)
    disappearing in error messages. [<NAME>, Steve Hay]
  - Did a little minor internal code cleanup in the
    ExtUtils::ParseXS::CountLines class, now other classes don't poke around
    in its package variables.

2.07 Sun Jan 25 17:01:52 CST 2004
  - We now use ExtUtils::CBuilder for testing the compile/build phase in the
    regression tests. It's not necessary to have it for runtime usage,
    though.
  - Fixed a minor documentation error (look in 'Changes' for revision
    history, not 'changes.pod'). [<NAME>]

2.06 Fri Dec 26 09:00:47 CST 2003
  - Some fixes in the regression tests for the AIX platform.

2.05 Mon Sep 29 10:33:39 CDT 2003
  - We no longer trim the directory portions from the "#line " comments in
    the generated C code. This helps cooperation with many editors'
    auto-jump-to-error stuff. [Ross McFarland]
  - In some cases the PERL_UNUSED_VAR macro is needed to get rid of C
    compile-time warnings in generated code. Since this eliminates so many
    warnings, turning on "-Wall -W" (or your platform's equivalent) can once
    again be helpful. [Ross McFarland]
  - Did a huge amount of variable-scoping cleanup, and it *still* doesn't
    compile under 'use strict;'. Much progress was made though, and many
    scoping issues were fixed.

2.04 Thu Sep 4 13:10:59 CDT 2003
  - Added a COPYRIGHT section to the documentation. [Spotted by Ville Skytta]

2.03 Sat Aug 16 17:49:03 CST 2003
  - Fixed a warning that occurs if a regular expression (buried deep within
    the bowels of the code here) fails. [Spotted by Michael Schwern]
  - Fixed a testing error on Cygwin. [Reini Urban]

2.02 Sun Mar 30 18:20:12 CST 2003
  - Now that we know this module doesn't work (yet?) with perl 5.005, put a
    couple 'use 5.006' statements in the module & Makefile.PL so we're
    explicit about the dependency. [Richard Clamp]

2.01 Thu Mar 20 08:22:36 CST 2003
  - Allow -C++ flag for backward compatibility. It's a no-op, and has been
    since perl5.003_07. [PodMaster]

2.00 Sun Feb 23 16:40:17 CST 2003
  - Tests now function under all three of the supported compilers on Windows
    environments. [Randy W. Sims]
  - Will now install to the 'core' perl module directory instead of to
    'site_perl' or the like, because this is the only place MakeMaker will
    look for the xsubpp script.
  - Explicitly untie and close the output file handle because ParseXS was
    holding the file handle open, preventing the compiler from opening it on
    Win32. [Randy W. Sims]
  - Added an '--output FILENAME' flag to xsubpp and changed ParseXS to use
    the named file in the #line directives when the output file has an
    extension other than '.c' (i.e. '.cpp'). [Randy W. Sims]
  - Added conditional definition of the PERL_UNUSED_VAR macro to the output
    file in case it's not already defined for backwards compatibility with
    pre-5.8 versions of perl. (Not sure if this is the best solution.)
    [Randy W. Sims]

1.99 Wed Feb 5 10:07:47 PST 2003
  - Version bump to 1.99 so it doesn't look like a 'beta release' to CPAN.pm.
    No code changes, since I haven't had any bug reports.
  - Fixed a minor problem in the regression tests that was creating an
    XSTest..o file instead of XSTest.o

1.98_01 Mon Dec 9 11:50:41 EST 2002
  - Converted from ExtUtils::xsubpp in bleadperl
  - Basic set of regression tests written
```

```
Changes
lib/ExtUtils/ParseXS.pm
lib/ExtUtils/ParseXS.pod
lib/ExtUtils/ParseXS/Constants.pm
lib/ExtUtils/ParseXS/CountLines.pm
lib/ExtUtils/ParseXS/Eval.pm
lib/ExtUtils/ParseXS/Utilities.pm
lib/ExtUtils/Typemaps.pm
lib/ExtUtils/Typemaps/Cmd.pm
lib/ExtUtils/Typemaps/InputMap.pm
lib/ExtUtils/Typemaps/OutputMap.pm
lib/ExtUtils/Typemaps/Type.pm
lib/ExtUtils/xsubpp
Makefile.PL
MANIFEST                This list of files
t/001-basic.t
t/002-more.t
t/003-usage.t
t/101-standard_typemap_locations.t
t/102-trim_whitespace.t
t/103-tidy_type.t
t/104-map_type.t
t/105-valid_proto_string.t
t/106-process_typemaps.t
t/108-map_type.t
t/109-standard_XS_defs.t
t/110-assign_func_args.t
t/111-analyze_preprocessor_statements.t
t/112-set_cond.t
t/113-check_cond_preproc_statements.t
t/114-blurt_death_Warn.t
t/115-avoid-noise.t
t/501-t-compile.t
t/510-t-bare.t
t/511-t-whitespace.t
t/512-t-file.t
t/513-t-merge.t
t/514-t-embed.t
t/515-t-cmd.t
t/516-t-clone.t
t/517-t-targetable.t
t/600-t-compat.t
t/data/b.typemap
t/data/combined.typemap
t/data/confl_repl.typemap
t/data/confl_skip.typemap
t/data/conflicting.typemap
t/data/other.typemap
t/data/perl.typemap
t/data/simple.typemap
t/lib/ExtUtils/Typemaps/Test.pm
t/lib/IncludeTester.pm
t/lib/PrimitiveCapture.pm
t/lib/TypemapTest/Foo.pm
t/pseudotypemap1
t/typemap
t/XSBroken.xs
t/XSInclude.xsh
t/XSMore.xs
t/XSTest.pm
t/XSTest.xs
t/XSUsage.pm
t/XSUsage.xs
t/XSWarn.xs
META.yml                Module YAML meta-data (added by MakeMaker)
META.json               Module JSON meta-data (added by MakeMaker)
```

```
{
   "abstract" : "converts Perl XS code into C code",
   "author" : [
      "<NAME> <<EMAIL>>"
   ],
   "dynamic_config" : 1,
   "generated_by" : "ExtUtils::MakeMaker version 7.70, CPAN::Meta::Converter version 2.150010",
   "license" : [
      "unknown"
   ],
   "meta-spec" : {
      "url" : "http://search.cpan.org/perldoc?CPAN::Meta::Spec",
      "version" : 2
   },
   "name" : "ExtUtils-ParseXS",
   "no_index" : {
      "directory" : [
         "t",
         "inc"
      ]
   },
   "prereqs" : {
      "build" : {
         "requires" : {
            "ExtUtils::MakeMaker" : "0"
         }
      },
      "configure" : {
         "requires" : {
            "ExtUtils::MakeMaker" : "6.46"
         }
      },
      "runtime" : {
         "requires" : {
            "Carp" : "0",
            "Cwd" : "0",
            "DynaLoader" : "0",
            "Exporter" : "5.57",
            "ExtUtils::CBuilder" : "0",
            "ExtUtils::MakeMaker" : "6.46",
            "File::Basename" : "0",
            "File::Spec" : "0",
            "Symbol" : "0",
            "Test::More" : "0.47"
         }
      }
   },
   "release_status" : "stable",
   "resources" : {
      "bugtracker" : {
         "web" : "https://github.com/Perl/perl5/issues"
      },
      "homepage" : "https://github.com/Perl/perl5",
      "repository" : {
         "url" : "https://github.com/Perl/perl5.git"
      }
   },
   "version" : "3.51",
   "x_serialization_backend" : "JSON::PP version 4.16"
}
```

```
---
abstract: 'converts Perl XS code into C code'
author:
  - '<NAME> <<EMAIL>>'
build_requires:
  ExtUtils::MakeMaker: '0'
configure_requires:
  ExtUtils::MakeMaker: '6.46'
dynamic_config: 1
generated_by: 'ExtUtils::MakeMaker version 7.70, CPAN::Meta::Converter version 2.150010'
license: unknown
meta-spec:
  url: http://module-build.sourceforge.net/META-spec-v1.4.html
  version: '1.4'
name: ExtUtils-ParseXS
no_index:
  directory:
    - t
    - inc
requires:
  Carp: '0'
  Cwd: '0'
  DynaLoader: '0'
  Exporter: '5.57'
  ExtUtils::CBuilder: '0'
  ExtUtils::MakeMaker: '6.46'
  File::Basename: '0'
  File::Spec: '0'
  Symbol: '0'
  Test::More: '0.47'
resources:
  bugtracker: https://github.com/Perl/perl5/issues
  homepage: https://github.com/Perl/perl5
  repository: https://github.com/Perl/perl5.git
version: '3.51'
x_serialization_backend: 'CPAN::Meta::YAML version 0.018'
```

```
use 5.006001;
use strict;
use warnings;
use ExtUtils::MakeMaker 6.46;
use Config '%Config';
use File::Spec;

# It's a weirdness in ExtUtils::MakeMaker that, when searching for xsubpp,
# it searches @INC for $path/ExtUtils/xsubpp instead of looking for an
# executable in the $PATH or whatever.
# EU::MM will pick up whatever xsubpp is found first in @INC.
# Thus, we must at least warn the user when we're about to install a new
# xsubpp to a location that may be shadowed by an old one.
my $whereto = ($] > 5.010001 ? 'site' : 'perl');
my $instdir = $whereto eq 'site' ? $Config{installsitelib} : $Config{installprivlib};
$instdir = File::Spec->canonpath($instdir);
my $target_xsubpp = File::Spec->catfile($instdir, 'ExtUtils', 'xsubpp');

my @shadowing_xsubpps;
foreach my $dir (grep !ref, @INC) {
  my $cpath = File::Spec->canonpath($dir);
  my $test_xsubpp = File::Spec->catdir($cpath, 'ExtUtils', 'xsubpp');
  last if $cpath eq $instdir or $target_xsubpp eq $test_xsubpp;
  if (-r $test_xsubpp) {
    push @shadowing_xsubpps, $test_xsubpp;
  }
}

if (@shadowing_xsubpps) {
  my $problems = join("\n ", @shadowing_xsubpps);
  warn <<HERE;
=== WARNING WARNING WARNING WARNING WARNING WARNING WARNING ===
I detected that an old version of 'xsubpp' will shadow the new,
to-be-installed 'xsubpp' (which you need to install XS modules) after
installation. This is likely because an old version was installed wrongly
or because your vendor patched your perl.
You can continue with the installation but afterwards, you may have to
remove all copies of 'xsubpp' that shadow this one for future module
installations. Failure to do so may result in your being unable to install
XS modules. But as long as you keep this in mind, nothing is going to break
your system if you do nothing.
Problematic copies of 'xsubpp' found:
$problems
=== WARNING WARNING WARNING WARNING WARNING WARNING WARNING ===
HERE
  sleep 2;
}

WriteMakefile(
  'NAME'         => 'ExtUtils::ParseXS',
  'VERSION_FROM' => 'lib/ExtUtils/ParseXS.pm',
  'PREREQ_PM'    => {
    'Carp'                => 0,
    'Cwd'                 => 0,
    'DynaLoader'          => 0,
    'Exporter'            => '5.57',
    'ExtUtils::CBuilder'  => 0,
    'File::Basename'      => 0,
    'File::Spec'          => 0,
    'Symbol'              => 0,
    'Test::More'          => '0.47',
    'ExtUtils::MakeMaker' => '6.46',
  },
  CONFIGURE_REQUIRES => {
    'ExtUtils::MakeMaker' => '6.46',
  },
  META_MERGE => {
    resources => {
      repository => 'https://github.com/Perl/perl5.git',
      bugtracker => 'https://github.com/Perl/perl5/issues',
      homepage   => "https://github.com/Perl/perl5",
    },
  },
  ($] >= 5.005 ?
    ## Add these new keywords supported since 5.005
    (ABSTRACT_FROM => 'lib/ExtUtils/ParseXS.pod',
     AUTHOR        => '<NAME> <<EMAIL>>') : ()),
  'INSTALLDIRS' => $whereto,
  'EXE_FILES'   => ['lib/ExtUtils/xsubpp'],
  'PL_FILES'    => {}
);
```

* [NAME](#NAME)
* [SYNOPSIS](#SYNOPSIS)
* [COMMANDS](#COMMANDS)
* [OPTIONS](#OPTIONS)
* [ENVIRONMENT VARIABLES](#ENVIRONMENT-VARIABLES)
* [SEE ALSO](#SEE-ALSO)
* [COPYRIGHT](#COPYRIGHT)
* [AUTHOR](#AUTHOR)

NAME
===

cpanm - get, unpack, build and install modules from CPAN

SYNOPSIS
===

```
cpanm Test::More                                 # install Test::More
cpanm MIYAGAWA/Plack-0.99_05.tar.gz              # full distribution path
cpanm http://example.org/LDS/CGI.pm-3.20.tar.gz  # install from URL
cpanm ~/dists/MyCompany-Enterprise-1.00.tar.gz   # install from a local file
cpanm --interactive Task::Kensho                 # Configure interactively
cpanm .                                          # install from local directory
cpanm --installdeps .                            # install all the deps for the current directory
cpanm -L extlib Plack                            # install Plack and all non-core deps into extlib
cpanm --mirror http://cpan.cpantesters.org/ DBI  # use the fast-syncing mirror
cpanm --from https://cpan.metacpan.org/ Plack    # use only the HTTPS mirror
```

COMMANDS
===

(arguments)

Command line arguments can be either a module name, distribution file, local file path, HTTP URL or git repository URL. The following commands will all work as you expect.

```
cpanm Plack
cpanm Plack/Request.pm
cpanm MIYAGAWA/Plack-1.0000.tar.gz
cpanm /path/to/Plack-1.0000.tar.gz
cpanm http://cpan.metacpan.org/authors/id/M/MI/MIYAGAWA/Plack-0.9990.tar.gz
cpanm git://github.com/plack/Plack.git
```

Additionally, you can use the `~` and `@` notation to specify a version for a given module. `~` specifies the version requirement in the [CPAN::Meta::Spec](/pod/CPAN::Meta::Spec) format, while `@` pins the exact version, and is a shortcut for `~"== VERSION"`.

```
cpanm Plack~1.0000                 # 1.0000 or later
cpanm Plack~">= 1.0000, < 2.0000"  # latest of 1.xxxx
cpanm Plack@0.9990                 # specific version. same as Plack~"== 0.9990"
```

The version query, including a specific version or range, will be sent to [MetaCPAN](/pod/MetaCPAN) to search for previous releases. The query will search BackPAN archives by default, unless you specify the `--dev` option, in which case archived versions will be filtered out.

For a git repository, you can specify a branch, tag, or commit SHA to build. The default is `master`.

```
cpanm git://github.com/plack/Plack.git@1.0000  # tag
cpanm git://github.com/plack/Plack.git@devel   # branch
```

-i, --install

Installs the modules. This is the default behavior and this is just a compatibility option to make it work like [cpan](/pod/cpan) or [cpanp](/pod/cpanp).

--self-upgrade

Upgrades itself. It's just an alias for:

```
cpanm App::cpanminus
```

--info

Displays the distribution information in `AUTHOR/Dist-Name-ver.tar.gz` format on standard output.

--installdeps

Installs the dependencies of the target distribution but won't build the distribution itself. Handy if you want to try the application from a version controlled repository such as git.

```
cpanm --installdeps .
```

--look

Download and unpack the distribution and then open the directory with your shell. Handy to poke around the source code or do manual testing.

-h, --help

Displays the help message.

-V, --version

Displays the version number.

OPTIONS
===

You can specify the default options in the `PERL_CPANM_OPT` environment variable.

-f, --force

Force install modules even when testing failed.

-n, --notest

Skip the testing of modules.
Use this only when you just want to save time installing hundreds of distributions to the same perl and architecture you've already tested against, to make sure everything builds fine.

Defaults to false, and you can say `--no-notest` to override when it is set in the default options in `PERL_CPANM_OPT`.

--test-only

Run the tests only, and do not install the specified module or distributions. Handy if you want to verify that new (or even old) releases pass their unit tests without installing the module.

Note that if you specify this option with a module or distribution that has dependencies, these dependencies will be installed if you don't currently have them.

-S, --sudo

Switch to the root user with `sudo` when installing modules. Use this if you want to install modules to the system perl include path.

Defaults to false, and you can say `--no-sudo` to override when it is set in the default options in `PERL_CPANM_OPT`.

-v, --verbose

Makes the output verbose. It also enables the interactive configuration. (See --interactive)

-q, --quiet

Makes the output even more quiet than the default. It only shows the successful/failed dependencies in the output.

-l, --local-lib

Sets the [local::lib](/pod/local::lib) compatible path to install modules to. You don't need to set this if you already configure the shell environment variables using [local::lib](/pod/local::lib), but this can be used to override that as well.

-L, --local-lib-contained

Same as `--local-lib` but with `--self-contained` set. All non-core dependencies will be installed even if they're already installed. For instance,

```
cpanm -L extlib Plack
```

would install Plack and all of its non-core dependencies into the directory `extlib`, which can be loaded from your application with:

```
use local::lib '/path/to/extlib';
```

Note that this option does **NOT** reliably work with perl installations supplied by operating system vendors that strip standard modules from perl, such as RHEL, Fedora and CentOS, **UNLESS** you also install packages supplying all the modules that have been stripped. For these systems you will probably want to install the `perl-core` meta-package which does just that.

--self-contained

When examining the dependencies, assume no non-core modules are installed on the system. Handy if you want to bundle application dependencies in one directory so you can distribute to other machines.

--exclude-vendor

Don't include modules installed under the 'vendor' paths when searching for core modules when the `--self-contained` flag is in effect. This restores the behaviour from before version 1.7023.

--mirror

Specifies the base URL for the CPAN mirror to use, such as `http://cpan.cpantesters.org/` (you can omit the trailing slash). You can specify multiple mirror URLs by repeating the command line option.

You can use a local directory that has a CPAN mirror structure (created by tools such as [OrePAN](/pod/OrePAN) or [Pinto](/pod/Pinto)) by using a special URL scheme `file://`. If the given URL begins with `/` (without any scheme), it is considered as a file scheme as well.

```
cpanm --mirror file:///path/to/mirror
cpanm --mirror ~/minicpan      # Because shell expands ~ to /home/user
```

Defaults to `http://www.cpan.org/`.

--mirror-only

Download the mirror's 02packages.details.txt.gz index file instead of querying the CPAN Meta DB. This will also effectively opt out of sending your local perl versions to backend database servers such as CPAN Meta DB and MetaCPAN.
Select this option if you are using a local mirror of CPAN, such as minicpan when you're offline, or your own CPAN index (a.k.a. darkpan).

--from, -M

```
cpanm -M https://cpan.metacpan.org/
cpanm --from https://cpan.metacpan.org/
```

Use the given mirror URL and its index as the *only* source to search and download modules from.

It works similarly to `--mirror` and `--mirror-only` combined, with a small difference: unlike `--mirror`, which *appends* the URL to the list of mirrors, `--from` (or `-M` for short) uses the specified URL as its *only* source to download the index and modules from. This makes the option always override the default mirror, which might have been set via global options such as the `PERL_CPANM_OPT` environment variable.

**Tip:** It might be useful if you name these options with your shell aliases, like:

```
alias minicpanm='cpanm --from ~/minicpan'
alias darkpan='cpanm --from http://mycompany.example.com/DPAN'
```

--mirror-index

**EXPERIMENTAL**: Specifies the file path to `02packages.details.txt` for the module search index.

--cpanmetadb

**EXPERIMENTAL**: Specifies an alternate URI for CPAN MetaDB index lookups.

--metacpan

Prefers the MetaCPAN API over CPAN MetaDB.

--cpanfile

**EXPERIMENTAL**: Specifies an alternate path for cpanfile to search for, when the `--installdeps` command is in use. Defaults to `cpanfile`.

--prompt

Prompts when a test fails so that you can skip, force install, retry or look in the shell to see what's going wrong. It also prompts when one of the dependencies fails, asking whether you want to proceed with the installation.

Defaults to false, and you can say `--no-prompt` to override if it's set in the default options in `PERL_CPANM_OPT`.

--dev

**EXPERIMENTAL**: Search for a newer developer release as well. Defaults to false.

--reinstall

cpanm, when given a module name in the command line (i.e. `cpanm Plack`), checks the locally installed version first and skips if it is already installed. This option makes it skip the check, so:

```
cpanm --reinstall Plack
```

would reinstall [Plack](/pod/Plack) even if your locally installed version is the latest, or even newer (which would happen if you install a developer release from version control repositories).

Defaults to false.

--interactive

Makes the configuration (such as `Makefile.PL` and `Build.PL`) interactive, so you can answer questions in distributions that require custom configuration, or in Task:: distributions.

Defaults to false, and you can say `--no-interactive` to override when it's set in the default options in `PERL_CPANM_OPT`.

--pp, --pureperl

Prefer pure perl builds of modules by setting `PUREPERL_ONLY=1` for MakeMaker and `--pureperl-only` for Build.PL based distributions. Note that not all of the CPAN modules support this convention yet.

--with-recommends, --with-suggests

**EXPERIMENTAL**: Installs dependencies declared as `recommends` and `suggests` respectively, per META spec. When these dependencies fail to install, cpanm continues the installation, since they're just recommendations/suggestions.

Enabling this could potentially make a circular dependency for a few modules on CPAN, when `recommends` adds a module that `recommends` back the module in return.

There's also `--without-recommends` and `--without-suggests` to override the default decision made earlier in `PERL_CPANM_OPT`.

Defaults to false for both.

--with-develop

**EXPERIMENTAL**: Installs develop phase dependencies in META files or `cpanfile` when used with `--installdeps`. Defaults to false. (A short cpanfile sketch illustrating these phases follows below.)
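To make the dependency-phase flags above concrete, here is a minimal, hypothetical cpanfile sketch (a Perl DSL read by `cpanm --installdeps .`; the module names and versions are illustrative placeholders, not taken from this document):

```
# cpanfile -- declares dependencies by phase
requires 'Plack', '1.0000';       # runtime dependency, always installed

recommends 'JSON::XS';            # only installed with --with-recommends

on 'test' => sub {
    requires 'Test::More', '0.96';
};

on 'develop' => sub {             # only installed with --with-develop
    requires 'Perl::Critic';
};
```

Under these assumptions, running `cpanm --installdeps --with-develop .` in the directory containing this file would pull in Perl::Critic in addition to the runtime and test requirements.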
--with-configure

**EXPERIMENTAL**: Installs configure phase dependencies in `cpanfile` when used with `--installdeps`. Defaults to false.

--with-feature, --without-feature, --with-all-features

**EXPERIMENTAL**: Specifies the feature to enable, if a module supports optional features per META spec 2.0.

```
cpanm --with-feature=opt_csv Spreadsheet::Read
```

The features can also be chosen interactively when the `--interactive` option is enabled. `--with-all-features` enables all the optional features, and `--without-feature` can select a feature to disable.

--configure-timeout, --build-timeout, --test-timeout

Specify the timeout length (in seconds) to wait for the configure, build and test process. Current default values are: 60 for configure, 3600 for build and 1800 for test.

--configure-args, --build-args, --test-args, --install-args

**EXPERIMENTAL**: Pass arguments for the configure/build/test/install commands respectively, for a given module to install.

```
cpanm DBD::mysql --configure-args="--cflags=... --libs=..."
```

The argument is only enabled for the module passed as a command line argument, not dependencies.

--scandeps

**DEPRECATED**: Scans the dependencies of given modules and outputs the tree in a text format. (See `--format` below for more options)

Because this command doesn't actually install any distributions, it is useful as a dry run: by typing

```
cpanm --scandeps Catalyst::Runtime
```

you can see which modules will be installed.

This command takes into account which modules you already have installed in your system. If you want to see what modules will be installed against a vanilla perl installation, you might want to combine it with the `-L` option.

--format

**DEPRECATED**: Determines what format to display the scanned dependency tree. Available options are `tree`, `json`, `yaml` and `dists`.

tree

Displays the tree in a plain text format. This is the default value.

json, yaml

Outputs the tree in a JSON or YAML format. The [JSON](/pod/JSON) and [YAML](/pod/YAML) modules need to be installed respectively. The output tree is represented as a recursive tuple of:

```
[ distribution, dependencies ]
```

and the container is an array containing the root elements. Note that there may be multiple root nodes, since you can give multiple modules to the `--scandeps` command.

dists

`dists` is a special output format, where it prints the distribution filename in the *depth first order* after the dependency resolution, like:

```
GAAS/MIME-Base64-3.13.tar.gz
GAAS/URI-1.58.tar.gz
PETDANCE/HTML-Tagset-3.20.tar.gz
GAAS/HTML-Parser-3.68.tar.gz
GAAS/libwww-perl-5.837.tar.gz
```

which means you can install these distributions in this order without extra dependencies. When combined with the `-L` option, this is useful for replaying installations on other machines.

--save-dists

Specifies the optional directory path to copy downloaded tarballs to, in the CPAN mirror compatible directory structure, i.e. *authors/id/A/AU/AUTHORS/Foo-Bar-version.tar.gz*

If the distro tarball did not come from CPAN, for example from a local file or from GitHub, then it will be saved under *vendor/Foo-Bar-version.tar.gz*.

--uninst-shadows

Uninstalls the shadow files of the distribution that you're installing. This eliminates the confusion if you're trying to install core (dual-life) modules from CPAN against perl 5.10 or older, or modules that used to be XS-based but switched to pure perl at some version.
If you run cpanm as root and use `INSTALL_BASE` or equivalent to specify a custom installation path, you SHOULD disable this option so you won't accidentally uninstall dual-life modules from the core include path.

Defaults to true if your perl version is older than 5.12, and you can disable that with `--no-uninst-shadows`.

**NOTE**: Since version 1.3000 this flag is turned off by default for perl newer than 5.12, since with 5.12 @INC contains the site_perl directory *before* the perl core library path, so uninstalling shadows is not necessary anymore and does more harm by deleting files from the core library path.

--uninstall, -U

Uninstalls a module from the library path. It finds a packlist for the given modules, and removes all the files included in the same distribution. (A rough sketch of this packlist lookup appears after the `--lwp` option below.)

If you enable local::lib, it only removes files from the local::lib directory.

If you try to uninstall a module in the `perl` directory (i.e. a core module), an error will be thrown.

You will be prompted with a dialog to confirm the files to be deleted. If you pass the `-f` option as well, the dialog will be skipped and uninstallation will be forced.

--cascade-search

**EXPERIMENTAL**: Specifies whether to cascade search when you specify multiple mirrors and a mirror doesn't have a module or has a lower version of the module than requested. Defaults to false.

--skip-installed

Specifies whether a module given in the command line is skipped if its latest version is already installed. Defaults to true.

**NOTE**: The `PERL5LIB` environment variable has to be correctly set for this to work with modules installed using [local::lib](/pod/local::lib), unless you always use the `-l` option.

--skip-satisfied

**EXPERIMENTAL**: Specifies whether a module (and version) given in the command line is skipped if it's already installed.

If you run:

```
cpanm --skip-satisfied CGI DBI~1.2
```

cpanm won't install them if you already have CGI (whatever the version) or have DBI with a version higher than 1.2. It is similar to `--skip-installed`, but while `--skip-installed` checks if the *latest* version on CPAN is installed, `--skip-satisfied` checks if a requested version (or any version, if none is requested) is installed.

Defaults to false.

--verify

Verify the integrity of distribution files retrieved from CPAN using the CHECKSUMS file, and the SIGNATURES file (if found in the distribution). Defaults to false.

Using this option does not verify the integrity of the CHECKSUMS file, and it's unsafe to rely on this option if you're using a CPAN mirror that you do not trust.

--report-perl-version

Whether to report the locally installed perl version to the various web servers as part of the User-Agent. Defaults to true unless a CI-related environment variable such as `TRAVIS`, `CI` or `AUTOMATED_TESTING` is set. You can disable it by using `--no-report-perl-version`.

--auto-cleanup

Specifies the number of days in which cpanm's work directories expire. Defaults to 7, which means old work directories will be cleaned up in one week. You can set the value to `0` to make cpanm never clean up those directories.

--man-pages

Generates man pages for executables (man1) and libraries (man3). Defaults to true (man pages generated) unless the `-L|--local-lib-contained` option is supplied, in which case it's set to false. You can disable it with `--no-man-pages`.

--lwp

Uses the [LWP](/pod/LWP) module to download stuff over HTTP. Defaults to true, and you can say `--no-lwp` to disable using LWP, when you want to upgrade LWP from CPAN on some broken perl systems.
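Two sketches before the remaining download-client options. First, the packlist lookup behind `--uninstall` above: a rough, hypothetical illustration (not cpanm's actual code) using the core ExtUtils::Installed module, with `Plack` as a placeholder distribution name:

```
#!/usr/bin/perl
use strict;
use warnings;
use ExtUtils::Installed;

# Scan @INC for .packlist files; each packlist records every file a
# distribution installed, which is what an uninstaller would remove.
my $inst  = ExtUtils::Installed->new;
my @files = $inst->files('Plack');   # files recorded for the Plack distro
print "$_\n" for @files;
```

Second, the built-in last resort among the download clients (`--lwp` above, `--wget` and `--curl` below) is core HTTP::Tiny. A minimal, hypothetical sketch of fetching a mirror's `02packages.details.txt.gz` index with it (mirror URL is an example):

```
#!/usr/bin/perl
use strict;
use warnings;
use HTTP::Tiny;

my $url = 'http://www.cpan.org/modules/02packages.details.txt.gz';
my $res = HTTP::Tiny->new(timeout => 60)
                    ->mirror($url, '02packages.details.txt.gz');
die "download failed: $res->{status} $res->{reason}\n" unless $res->{success};
print "index saved (HTTP $res->{status})\n";
```

`mirror` only re-downloads when the server reports the file has changed since the local copy's timestamp, which is why it suits index files.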
--wget

Uses GNU Wget (if available) to download stuff. Defaults to true, and you can say `--no-wget` to disable using Wget (versions of Wget older than 1.9 don't support the `--retry-connrefused` option used by cpanm).

--curl

Uses cURL (if available) to download stuff. Defaults to true, and you can say `--no-curl` to disable using cURL.

Normally with the `--lwp`, `--wget` and `--curl` options set to true (which is the default) cpanm tries [LWP](/pod/LWP), Wget, cURL and [HTTP::Tiny](/pod/HTTP::Tiny) (in that order) and uses the first one available.

ENVIRONMENT VARIABLES
===

PERL_CPANM_HOME

The directory cpanm should use to store downloads and build and test modules. Defaults to the `.cpanm` directory in your user's home directory.

PERL_CPANM_OPT

If set, adds a set of default options to every cpanm command. These options come first, and so are overridden by command-line options.

SEE ALSO
===

[App::cpanminus](/pod/App::cpanminus)

COPYRIGHT
===

Copyright 2010- <NAME>.

AUTHOR
===

<NAME>

* [NAME](#NAME)
* [SYNOPSIS](#SYNOPSIS)
* [DESCRIPTION](#DESCRIPTION)
  + [CPAN::shell([$prompt, $command]) Starting Interactive Mode](#CPAN::shell(%5B$prompt,-$command%5D)-Starting-Interactive-Mode)
  + [CPAN::Shell](#CPAN::Shell)
  + [autobundle](#autobundle)
  + [hosts](#hosts)
  + [mkmyconfig](#mkmyconfig)
  + [r [Module|/Regexp/]...](#r-%5BModule%7C/Regexp/%5D...)
  + [recent ***EXPERIMENTAL COMMAND***](#recent-***EXPERIMENTAL-COMMAND***)
  + [recompile](#recompile)
  + [report Bundle|Distribution|Module](#report-Bundle%7CDistribution%7CModule)
  + [smoke ***EXPERIMENTAL COMMAND***](#smoke-***EXPERIMENTAL-COMMAND***)
  + [upgrade [Module|/Regexp/]...](#upgrade-%5BModule%7C/Regexp/%5D...)
  + [The four CPAN::* Classes: Author, Bundle, Module, Distribution](#The-four-CPAN::*-Classes:-Author,-Bundle,-Module,-Distribution)
  + [Integrating local directories](#Integrating-local-directories)
  + [Redirection](#Redirection)
  + [Plugin support ***EXPERIMENTAL***](#Plugin-support-***EXPERIMENTAL***)
* [CONFIGURATION](#CONFIGURATION)
  + [Config Variables](#Config-Variables)
  + [CPAN::anycwd($path): Note on config variable getcwd](#CPAN::anycwd($path):-Note-on-config-variable-getcwd)
  + [Note on the format of the urllist parameter](#Note-on-the-format-of-the-urllist-parameter)
  + [The urllist parameter has CD-ROM support](#The-urllist-parameter-has-CD-ROM-support)
  + [Maintaining the urllist parameter](#Maintaining-the-urllist-parameter)
  + [The requires and build_requires dependency declarations](#The-requires-and-build_requires-dependency-declarations)
  + [Configuration of the allow_installing_* parameters](#Configuration-of-the-allow_installing_*-parameters)
  + [Configuration for individual distributions (Distroprefs)](#Configuration-for-individual-distributions-(Distroprefs))
  + [Filenames](#Filenames)
  + [Fallback Data::Dumper and Storable](#Fallback-Data::Dumper-and-Storable)
  + [Blueprint](#Blueprint)
  + [Language Specs](#Language-Specs)
  + [Processing Instructions](#Processing-Instructions)
  + [Schema verification with Kwalify](#Schema-verification-with-Kwalify)
  + [Example Distroprefs Files](#Example-Distroprefs-Files)
* [PROGRAMMER'S INTERFACE](#PROGRAMMER'S-INTERFACE)
  + [Methods in the other Classes](#Methods-in-the-other-Classes)
  + [Cache Manager](#Cache-Manager)
  + [Bundles](#Bundles)
* [PREREQUISITES](#PREREQUISITES)
* [UTILITIES](#UTILITIES)
  + [Finding packages and VERSION](#Finding-packages-and-VERSION)
  + [Debugging](#Debugging)
  + [Floppy, Zip, Offline Mode](#Floppy,-Zip,-Offline-Mode)
  + [Basic Utilities for Programmers](#Basic-Utilities-for-Programmers)
* [SECURITY](#SECURITY)
  + [Cryptographically signed modules](#Cryptographically-signed-modules)
* [EXPORT](#EXPORT)
* [ENVIRONMENT](#ENVIRONMENT)
* [POPULATE AN INSTALLATION WITH LOTS OF MODULES](#POPULATE-AN-INSTALLATION-WITH-LOTS-OF-MODULES)
* [WORKING WITH CPAN.pm BEHIND FIREWALLS](#WORKING-WITH-CPAN.pm-BEHIND-FIREWALLS)
  + [Three basic types of firewalls](#Three-basic-types-of-firewalls)
  + [Configuring lynx or ncftp for going through a firewall](#Configuring-lynx-or-ncftp-for-going-through-a-firewall)
* [FAQ](#FAQ)
* [COMPATIBILITY](#COMPATIBILITY)
  + [OLD PERL VERSIONS](#OLD-PERL-VERSIONS)
  + [CPANPLUS](#CPANPLUS)
  + [CPANMINUS](#CPANMINUS)
* [SECURITY ADVICE](#SECURITY-ADVICE)
* [BUGS](#BUGS)
* [AUTHOR](#AUTHOR)
* [LICENSE](#LICENSE)
* [TRANSLATIONS](#TRANSLATIONS)
* [SEE ALSO](#SEE-ALSO)

NAME
===

CPAN - query, download and build perl modules from CPAN sites

SYNOPSIS
===

Interactive mode:

```
perl -MCPAN -e shell
```

--or--

```
cpan
```

Basic commands:

```
# Modules:
cpan> install Acme::Meta                                 # in the shell
CPAN::Shell->install("Acme::Meta");                      # in perl

# Distributions:
cpan> install NWCLARK/Acme-Meta-0.02.tar.gz              # in the shell
CPAN::Shell->install("NWCLARK/Acme-Meta-0.02.tar.gz");   # in perl

# module objects:
$mo = CPAN::Shell->expandany($mod);
$mo = CPAN::Shell->expand("Module",$mod);                # same thing

# distribution objects:
$do = CPAN::Shell->expand("Module",$mod)->distribution;
$do = CPAN::Shell->expandany($distro);                   # same thing
$do = CPAN::Shell->expand("Distribution", $distro);      # same thing
```

DESCRIPTION
===

The CPAN module automates or at least simplifies the make and install of perl modules and extensions.
It includes some primitive searching capabilities and knows how to use LWP, HTTP::Tiny, Net::FTP and certain external download clients to fetch distributions from the net. These are fetched from one or more mirrored CPAN (Comprehensive Perl Archive Network) sites and unpacked in a dedicated directory.

The CPAN module also supports named and versioned *bundles* of modules. Bundles simplify handling of sets of related modules. See Bundles below.

The package contains a session manager and a cache manager. The session manager keeps track of what has been fetched, built, and installed in the current session. The cache manager keeps track of the disk space occupied by the make processes and deletes excess space using a simple FIFO mechanism.

All methods provided are accessible in a programmer style and in an interactive shell style.

CPAN::shell([$prompt, $command]) Starting Interactive Mode
---

Enter interactive mode by running

```
perl -MCPAN -e shell
```

or

```
cpan
```

which puts you into a readline interface. If `Term::ReadKey` and either of `Term::ReadLine::Perl` or `Term::ReadLine::Gnu` are installed, history and command completion are supported.

Once at the command line, type `h` for a one-page help screen; the rest should be self-explanatory.

The function call `shell` takes two optional arguments: one the prompt, the second the default initial command line (the latter only works if a real ReadLine interface module is installed).

The most common uses of the interactive modes are

Searching for authors, bundles, distribution files and modules

There are corresponding one-letter commands `a`, `b`, `d`, and `m` for each of the four categories and another, `i` for any of the mentioned four. Each of the four entities is implemented as a class with slightly differing methods for displaying an object.

Arguments to these commands are either strings exactly matching the identification string of an object, or regular expressions matched case-insensitively against various attributes of the objects. The parser only recognizes a regular expression when you enclose it with slashes.

The principle is that the number of objects found influences how an item is displayed. If the search finds one item, the result is displayed with the rather verbose method `as_string`, but if more than one is found, each object is displayed with the terse method `as_glimpse`.

Examples:

```
cpan> m Acme::MetaSyntactic
Module id = Acme::MetaSyntactic
    CPAN_USERID  BOOK (<NAME> (BooK) <[...]>)
    CPAN_VERSION 0.99
    CPAN_FILE    B/BO/BOOK/Acme-MetaSyntactic-0.99.tar.gz
    UPLOAD_DATE  2006-11-06
    MANPAGE      Acme::MetaSyntactic - Themed metasyntactic variables names
    INST_FILE    /usr/local/lib/perl/5.10.0/Acme/MetaSyntactic.pm
    INST_VERSION 0.99
cpan> a BOOK
Author id = BOOK
    EMAIL        [...]
    FULLNAME     <NAME> (BooK)
cpan> d BOOK/Acme-MetaSyntactic-0.99.tar.gz
Distribution id = B/BO/BOOK/Acme-MetaSyntactic-0.99.tar.gz
    CPAN_USERID  BOOK (<NAME> (BooK) <[...]>)
    CONTAINSMODS Acme::MetaSyntactic Acme::MetaSyntactic::Alias [...]
    UPLOAD_DATE  2006-11-06
cpan> m /lorem/
Module  = Acme::MetaSyntactic::loremipsum (BOOK/Acme-MetaSyntactic-0.99.tar.gz)
Module    Text::Lorem               (ADEOLA/Text-Lorem-0.3.tar.gz)
Module    Text::Lorem::More         (RKRIMEN/Text-Lorem-More-0.12.tar.gz)
Module    Text::Lorem::More::Source (RKRIMEN/Text-Lorem-More-0.12.tar.gz)
cpan> i /berlin/
Distribution    BEATNIK/Filter-NumberLines-0.02.tar.gz
Module  = DateTime::TimeZone::Europe::Berlin (DROLSKY/DateTime-TimeZone-0.7904.tar.gz)
Module          Filter::NumberLines (BEATNIK/Filter-NumberLines-0.02.tar.gz)
Author          [...]
```

The examples illustrate several aspects: the first three queries target modules, authors, or distros directly and yield exactly one result. The last two use regular expressions and yield several results. The last one targets all of bundles, modules, authors, and distros simultaneously. When more than one result is available, they are printed in one-line format.

`get`, `make`, `test`, `install`, `clean` modules or distributions

These commands take any number of arguments and investigate what is necessary to perform the action. Argument processing is as follows:

```
known module name in format Foo/Bar.pm   module
other embedded slash                     distribution
- with trailing slash dot                directory
enclosing slashes                        regexp
known module name in format Foo::Bar     module
```

If the argument is a distribution file name (recognized by embedded slashes), it is processed. If it is a module, CPAN determines the distribution file in which this module is included and processes that, following any dependencies named in the module's META.yml or Makefile.PL (this behavior is controlled by the configuration parameter `prerequisites_policy`). If an argument is enclosed in slashes it is treated as a regular expression: it is expanded and if the result is a single object (distribution, bundle or module), this object is processed.

Example:

```
install Dummy::Perl                   # installs the module
install AUXXX/Dummy-Perl-3.14.tar.gz  # installs that distribution
install /Dummy-Perl-3.14/             # same if the regexp is unambiguous
```

`get` downloads a distribution file and untars or unzips it, `make` builds it, `test` runs the test suite, and `install` installs it.

Any `make` or `test` is run unconditionally. An

```
install <distribution_file>
```

is also run unconditionally. But for

```
install <module>
```

CPAN checks whether an install is needed and prints *module up to date* if the distribution file containing the module doesn't need updating.

CPAN also keeps track of what it has done within the current session and doesn't try to build a package a second time regardless of whether it succeeded or not. It does not repeat a test run if the test has been run successfully before. Same for install runs.

The `force` pragma may precede another command (currently: `get`, `make`, `test`, or `install`) to execute the command from scratch and attempt to continue past certain errors. See the section below on the `force` and the `fforce` pragma.

The `notest` pragma skips the test part in the build process. Example:

```
cpan> notest install Tk
```

A `clean` command results in a

```
make clean
```

being executed within the distribution file's working directory.

`readme`, `perldoc`, `look` module or distribution

`readme` displays the README file of the associated distribution. `look` gets and untars (if not yet done) the distribution file, changes to the appropriate directory and opens a subshell process in that directory. `perldoc` displays the module's pod documentation in HTML or plain text format.

`ls` author

`ls` globbing_expression

The first form lists all distribution files in and below an author's CPAN directory as stored in the CHECKSUMS files distributed on CPAN. The listing recurses into subdirectories.

The second form limits or expands the output with shell globbing as in the following examples:

```
ls JV/make*
ls GSAR/*make*
ls */*make*
```

The last example is very slow and outputs extra progress indicators that break the alignment of the result.
Note that globbing only lists directories explicitly asked for, for example FOO/* will not list FOO/bar/Acme-Sthg-n.nn.tar.gz. This may be regarded as a bug that may be changed in some future version.

`failed`

The `failed` command reports all distributions that failed on one of `make`, `test` or `install` for some reason in the currently running shell session.

Persistence between sessions

If the `YAML` or the `YAML::Syck` module is installed, a record of the internal state of all modules is written to disk after each step. The files contain a signature of the currently running perl version for later perusal.

If the configuration variable `build_dir_reuse` is set to a true value, then CPAN.pm reads the collected YAML files. If the stored signature matches the currently running perl, the stored state is loaded into memory such that persistence between sessions is effectively established.

The `force` and the `fforce` pragma

To speed things up in complex installation scenarios, CPAN.pm keeps track of what it has already done and refuses to do some things a second time. A `get`, a `make`, and an `install` are not repeated. A `test` is repeated only if the previous test was unsuccessful. The diagnostic message when CPAN.pm refuses to do something a second time is one of *Has already been* `unwrapped|made|tested successfully` or something similar. Another situation where CPAN refuses to act is an `install` if the corresponding `test` was not successful.

In all these cases, the user can override this stubborn behaviour by prepending the command with the word force, for example:

```
cpan> force get Foo
cpan> force make AUTHOR/Bar-3.14.tar.gz
cpan> force test Baz
cpan> force install Acme::Meta
```

Each *forced* command is executed with the corresponding part of its memory erased.

The `fforce` pragma is a variant that emulates a `force get` which erases the entire memory followed by the action specified, effectively restarting the whole get/make/test/install procedure from scratch.

Lockfile

Interactive sessions maintain a lockfile, by default `~/.cpan/.lock`. Batch jobs can run without a lockfile and not disturb each other.

The shell offers to run in *downgraded mode* when another process is holding the lockfile. This is an experimental feature that is not yet tested very well. This second shell then does not write the history file, does not use the metadata file, and has a different prompt.

Signals

CPAN.pm installs signal handlers for SIGINT and SIGTERM. While you are in the cpan-shell, it is intended that you can press `^C` anytime and return to the cpan-shell prompt. A SIGTERM will cause the cpan-shell to clean up and leave the shell loop. You can emulate the effect of a SIGTERM by sending two consecutive SIGINTs, which usually means by pressing `^C` twice. CPAN.pm ignores SIGPIPE.

If the user sets `inactivity_timeout`, a SIGALRM is used during the run of the `perl Makefile.PL` or `perl Build.PL` subprocess. A SIGALRM is also used during module version parsing, and is controlled by `version_timeout`.

CPAN::Shell
---

The commands available in the shell interface are methods in the package CPAN::Shell. If you enter the shell command, your input is split by the Text::ParseWords::shellwords() routine, which acts like most shells do. The first word is interpreted as the method to be invoked, and the rest of the words are treated as the method's arguments. Continuation lines are supported by ending a line with a literal backslash.
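Because shell commands are just CPAN::Shell class methods, a short interactive session can be translated one-to-one into Perl. A minimal sketch (using the placeholder module name `Acme::Meta` from the SYNOPSIS above):

```
# Each cpan> command maps to a CPAN::Shell method of the same name,
# with the remaining words passed as arguments.
use CPAN;

CPAN::Shell->o("conf", "urllist");          # cpan> o conf urllist
CPAN::Shell->install("Acme::Meta");         # cpan> install Acme::Meta
CPAN::Shell->force("test", "Acme::Meta");   # cpan> force test Acme::Meta
```

This mirrors the "in the shell" / "in perl" pairing shown in the SYNOPSIS; pragmas such as `force` are methods too, taking the wrapped command as their first argument.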
autobundle
---

`autobundle` writes a bundle file into the `$CPAN::Config->{cpan_home}/Bundle` directory. The file contains a list of all modules that are both available from CPAN and currently installed within @INC. Duplicates of each distribution are suppressed. The name of the bundle file is based on the current date and a counter, e.g. *Bundle/Snapshot_2012_05_21_00.pm*. This is installed again by running `cpan Bundle::Snapshot_2012_05_21_00`, or installing `Bundle::Snapshot_2012_05_21_00` from the CPAN shell.

Return value: path to the written file.

hosts
---

Note: this feature is still in alpha state and may change in future versions of CPAN.pm

This command provides a statistical overview over recent download activities. The data for this is collected in the YAML file `FTPstats.yml` in your `cpan_home` directory. If no YAML module is configured or YAML is not installed, no stats are provided.

install_tested

Install all distributions that have been tested successfully but have not yet been installed. See also `is_tested`.

is_tested

List all build directories of distributions that have been tested successfully but have not yet been installed. See also `install_tested`.

mkmyconfig
---

mkmyconfig() writes your own CPAN::MyConfig file into your `~/.cpan/` directory so that you can save your own preferences instead of the system-wide ones.

r [Module|/Regexp/]...
---

Scans the current perl installation for modules that have a newer version available on CPAN and provides a list of them. If called without argument, all potential upgrades are listed; if called with arguments the list is filtered to the modules and regexps given as arguments. The listing looks something like this:

```
Package namespace           installed    latest   in CPAN file
CPAN                          1.94_64    1.9600   ANDK/CPAN-1.9600.tar.gz
CPAN::Reporter                 1.1801    1.1902   DAGOLDEN/CPAN-Reporter-1.1902.tar.gz
YAML                             0.70      0.73   INGY/YAML-0.73.tar.gz
YAML::Syck                       1.14      1.17   AVAR/YAML-Syck-1.17.tar.gz
YAML::Tiny                       1.44      1.50   ADAMK/YAML-Tiny-1.50.tar.gz
CGI                              3.43      3.55   MARKSTOS/CGI.pm-3.55.tar.gz
Module::Build::YAML              1.40      1.41   DAGOLDEN/Module-Build-0.3800.tar.gz
TAP::Parser::Result::YAML        3.22      3.23   ANDYA/Test-Harness-3.23.tar.gz
YAML::XS                         0.34      0.35   INGY/YAML-LibYAML-0.35.tar.gz
```

It suppresses duplicates in the column `in CPAN file` such that distributions with many upgradeable modules are listed only once. Note that the list is not sorted.

recent ***EXPERIMENTAL COMMAND***
---

The `recent` command downloads a list of recent uploads to CPAN and displays them *slowly*. While the command is running, a $SIG{INT} exits the loop after displaying the current item.

**Note**: This command requires XML::LibXML to be installed.

**Note**: This whole command currently is just a hack and will probably change in future versions of CPAN.pm, but the general approach will likely remain.

**Note**: See also [smoke](/pod/smoke)

recompile
---

recompile() is a special command that takes no argument and runs the make/test/install cycle with brute force over all installed dynamically loadable extensions (a.k.a. XS modules) with 'force' in effect.

The primary purpose of this command is to finish a network installation. Imagine you have a common source tree for two different architectures. You decide to do a completely independent fresh installation. You start on one architecture with the help of a Bundle file produced earlier. CPAN installs the whole Bundle for you, but when you try to repeat the job on the second architecture, CPAN responds with a `"Foo up to date"` message for all modules.
So you invoke CPAN's recompile on the second architecture and you're done.

Another popular use for `recompile` is to act as a rescue in case your perl breaks binary compatibility. If one of the modules that CPAN uses is in turn depending on binary compatibility (so you cannot run CPAN commands), then you should try the CPAN::Nox module for recovery.

report Bundle|Distribution|Module
---

The `report` command temporarily turns on the `test_report` config variable, then runs the `force test` command with the given arguments. The `force` pragma reruns the tests and repeats every step that might have failed before.

smoke ***EXPERIMENTAL COMMAND***
---

***** WARNING: this command downloads and executes software from CPAN to your computer of completely unknown status. You should never do this with your normal account and better have a dedicated well separated and secured machine to do this. *****

The `smoke` command takes the list of recent uploads to CPAN as provided by the `recent` command and tests them all. While the command is running $SIG{INT} is defined to mean that the current item shall be skipped.

**Note**: This whole command currently is just a hack and will probably change in future versions of CPAN.pm, but the general approach will likely remain.

**Note**: See also [recent](/pod/recent)

upgrade [Module|/Regexp/]...
---

The `upgrade` command first runs an `r` command with the given arguments and then installs the newest versions of all modules that were listed by that.

The four `CPAN::*` Classes: Author, Bundle, Module, Distribution
---

Although it may be considered internal, the class hierarchy does matter for both users and programmers. CPAN.pm deals with the four classes mentioned above, and those classes all share a set of methods. Classical single polymorphism is in effect. A metaclass object registers all objects of all kinds and indexes them with a string. The strings referencing objects have a separated namespace (well, not completely separated):

```
Namespace                        Class
words containing a "/" (slash)   Distribution
words starting with Bundle::     Bundle
everything else                  Module or Author
```

Modules know their associated Distribution objects. They always refer to the most recent official release. Developers may mark their releases as unstable development versions (by inserting an underscore into the module version number which will also be reflected in the distribution name when you run 'make dist'), so the really hottest and newest distribution is not always the default.

If a module Foo circulates on CPAN in both version 1.23 and 1.23_90, CPAN.pm offers a convenient way to install version 1.23 by saying

```
install Foo
```

This would install the complete distribution file (say BAR/Foo-1.23.tar.gz) with all accompanying material. But if you would like to install version 1.23_90, you need to know where the distribution file resides on CPAN relative to the authors/id/ directory. If the author is BAR, this might be BAR/Foo-1.23_90.tar.gz; so you would have to say

```
install BAR/Foo-1.23_90.tar.gz
```

The first example will be driven by an object of the class CPAN::Module, the second by an object of class CPAN::Distribution.

Integrating local directories
---

Note: this feature is still in alpha state and may change in future versions of CPAN.pm

Distribution objects are normally distributions from the CPAN, but there is a slightly degenerate case for Distribution objects, too, of projects held on the local disk.
These distribution objects have the same name as the local directory and end with a dot. A dot by itself is also allowed for the current directory at the time CPAN.pm was used. All actions such as `make`, `test`, and `install` are applied directly to that directory.

This gives the command `cpan .` an interesting touch: while the normal mantra of installing a CPAN module without CPAN.pm is one of

```
perl Makefile.PL                  perl Build.PL
( go and get prerequisites )
make                              ./Build
make test                         ./Build test
make install                      ./Build install
```

the command `cpan .` does all of this at once. It figures out which of the two mantras is appropriate, fetches and installs all prerequisites, takes care of them recursively, and finally finishes the installation of the module in the current directory, be it a CPAN module or not.

The typical usage case is for private modules or working copies of projects from remote repositories on the local disk.

Redirection
---

The usual shell redirection symbols `|` and `>` are recognized by the cpan shell **only when surrounded by whitespace**. So piping to a pager or redirecting output into a file works somewhat as in a normal shell, with the stipulation that you must type extra spaces.

Plugin support ***EXPERIMENTAL***
---

Plugins are objects that implement any of currently eight methods:

```
pre_get     post_get
pre_make    post_make
pre_test    post_test
pre_install post_install
```

The `plugin_list` configuration parameter holds a list of strings of the form

```
Modulename=arg0,arg1,arg2,arg3,...
```

e.g.:

```
CPAN::Plugin::Flurb=dir,/opt/pkgs/flurb/raw,verbose,1
```

At run time, each listed plugin is instantiated as a singleton object by running the equivalent of this pseudo code:

```
my $plugin = <string representation from config>;
<generate Modulename and arguments from $plugin>;
my $p = $instance{$plugin} ||= Modulename->new($arg0,$arg1,...);
```

The generated singletons are kept around from instantiation until the end of the shell session. `plugin_list` can be reconfigured at any time at run time.

While the cpan shell is running, it checks all activated plugins at each of the eight reference points listed above and runs the respective method if it is implemented for that object. The method is called with the active CPAN::Distribution object passed in as an argument.

CONFIGURATION
===

When the CPAN module is used for the first time, a configuration dialogue tries to determine a couple of site-specific options. The result of the dialog is stored in a hash reference `$CPAN::Config` in a file CPAN/Config.pm.

Default values defined in the CPAN/Config.pm file can be overridden in a user-specific file: CPAN/MyConfig.pm. Such a file is best placed in `$HOME/.cpan/CPAN/MyConfig.pm`, because `$HOME/.cpan` is added to the search path of the CPAN module before the use() or require() statements. The mkmyconfig command writes this file for you.

The `o conf` command has various bells and whistles:

completion support

If you have a ReadLine module installed, you can hit TAB at any point of the commandline and `o conf` will offer you completion for the built-in subcommands and/or config variable names.

displaying some help: o conf help

Displays a short help

displaying current values: o conf [KEY]

Displays the current value(s) for this config variable. Without KEY, displays all subcommands and config variables.
Example:

```
o conf shell
```

If KEY starts and ends with a slash, the string in between is treated as a regular expression and only keys matching this regexp are displayed

Example:

```
o conf /color/
```

changing of scalar values: o conf KEY VALUE

Sets the config variable KEY to VALUE. The empty string can be specified as usual in shells, with `''` or `""`.

Example:

```
o conf wget /usr/bin/wget
```

changing of list values: o conf KEY SHIFT|UNSHIFT|PUSH|POP|SPLICE|LIST

If a config variable name ends with `list`, it is a list. `o conf KEY shift` removes the first element of the list, `o conf KEY pop` removes the last element of the list. `o conf KEY unshift LIST` prepends a list of values to the list, `o conf KEY push LIST` appends a list of values to the list. Likewise, `o conf KEY splice LIST` passes the LIST to the corresponding splice command. Finally, any other list of arguments is taken as a new list value for the KEY variable, discarding the previous value.

Examples:

```
o conf urllist unshift http://cpan.dev.local/CPAN
o conf urllist splice 3 1
o conf urllist http://cpan1.local http://cpan2.local ftp://ftp.perl.org
```

reverting to saved: o conf defaults

Reverts all config variables to the state in the saved config file.

saving the config: o conf commit

Saves all config variables to the current config file (CPAN/Config.pm or CPAN/MyConfig.pm that was loaded at start).

The configuration dialog can be started any time later again by issuing the command `o conf init` in the CPAN shell. A subset of the configuration dialog can be run by issuing `o conf init WORD` where WORD is any valid config variable or a regular expression.

Config Variables
---

The following keys in the hash reference $CPAN::Config are currently defined:

```
allow_installing_module_downgrades
                   allow or disallow installing module downgrades
allow_installing_outdated_dists
                   allow or disallow installing modules that are indexed
                   in the cpan index pointing to a distro with a higher
                   distro-version number
applypatch         path to external prg
auto_commit        commit all changes to config variables to disk
build_cache        size of cache for directories to build modules
build_dir          locally accessible directory to build modules
build_dir_reuse    boolean if distros in build_dir are persistent
build_requires_install_policy
                   to install or not to install when a module is only
                   needed for building. yes|no|ask/yes|ask/no
bzip2              path to external prg
cache_metadata     use serializer to cache metadata
check_sigs         if signatures should be verified
cleanup_after_install
                   remove build directory immediately after a successful
                   install and remember that for the duration of the
                   session
colorize_debug     Term::ANSIColor attributes for debugging output
colorize_output    boolean if Term::ANSIColor should colorize output
colorize_print     Term::ANSIColor attributes for normal output
colorize_warn      Term::ANSIColor attributes for warnings
commandnumber_in_prompt
                   boolean if you want to see current command number
commands_quote     preferred character to use for quoting external
                   commands when running them. Defaults to double quote
                   on Windows, single tick everywhere else; can be set
                   to space to disable quoting
connect_to_internet_ok
                   whether to ask if opening a connection is ok before
                   urllist is specified
cpan_home          local directory reserved for this package
curl               path to external prg
dontload_hash      DEPRECATED
dontload_list      arrayref: modules in the list will not be loaded by
                   the CPAN::has_inst() routine
ftp                path to external prg
ftp_passive        if set, the environment variable FTP_PASSIVE is set
                   for downloads
ftp_proxy          proxy host for ftp requests
ftpstats_period    max number of days to keep download statistics
ftpstats_size      max number of items to keep in the download statistics
getcwd             see below
gpg                path to external prg
gzip               location of external program gzip
halt_on_failure    stop processing after the first failure of queued
                   items or dependencies
histfile           file to maintain history between sessions
histsize           maximum number of lines to keep in histfile
http_proxy         proxy host for http requests
inactivity_timeout breaks interactive Makefile.PLs or Build.PLs after
                   this many seconds of inactivity. Set to 0 to disable
                   timeouts.
index_expire       refetch index files after this many days
inhibit_startup_message
                   if true, suppress the startup message
keep_source_where  directory in which to keep the source (if we do)
load_module_verbosity
                   report loading of optional modules used by CPAN.pm
lynx               path to external prg
make               location of external make program
make_arg           arguments that should always be passed to 'make'
make_install_make_command
                   the make command for running 'make install', for
                   example 'sudo make'
make_install_arg   same as make_arg for 'make install'
makepl_arg         arguments passed to 'perl Makefile.PL'
mbuild_arg         arguments passed to './Build'
mbuild_install_arg arguments passed to './Build install'
mbuild_install_build_command
                   command to use instead of './Build' when we are in
                   the install stage, for example 'sudo ./Build'
mbuildpl_arg       arguments passed to 'perl Build.PL'
ncftp              path to external prg
ncftpget           path to external prg
no_proxy           don't proxy to these hosts/domains (comma separated list)
pager              location of external program more (or any pager)
password           your password if your CPAN server wants one
patch              path to external prg
patches_dir        local directory containing patch files
perl5lib_verbosity verbosity level for PERL5LIB additions
plugin_list        list of active hooks (see Plugin support above
                   and the CPAN::Plugin module)
prefer_external_tar
                   per default all untar operations are done with
                   Archive::Tar; by setting this variable to true
                   the external tar command is used if available
prefer_installer   legal values are MB and EUMM: if a module comes
                   with both a Makefile.PL and a Build.PL, use the
                   former (EUMM) or the latter (MB); if the module
                   comes with only one of the two, that one will be
                   used no matter the setting
prerequisites_policy
                   what to do if you are missing module prerequisites
                   ('follow' automatically, 'ask' me, or 'ignore')
                   For 'follow', also sets PERL_AUTOINSTALL and
                   PERL_EXTUTILS_AUTOINSTALL for "--defaultdeps" if
                   not already set
prefs_dir          local directory to store per-distro build options
proxy_user         username for accessing an authenticating proxy
proxy_pass         password for accessing an authenticating proxy
pushy_https        use https to cpan.org when possible, otherwise use
                   http to cpan.org and issue a warning
randomize_urllist  add some randomness to the sequence of the urllist
recommends_policy  whether recommended prerequisites should be included
scan_cache         controls scanning of cache ('atstart', 'atexit' or
                   'never')
shell              your favorite shell
show_unparsable_versions
                   boolean if r command tells which modules are versionless
show_upload_date   boolean if commands should try to determine upload date
show_zero_versions boolean if r command tells for which modules $version==0
suggests_policy    whether suggested prerequisites should be included
tar                location of external program tar
tar_verbosity      verbosity level for the tar command
term_is_latin      deprecated: if true Unicode is translated to
                   ISO-8859-1 (and nonsense for characters outside
                   latin range)
term_ornaments     boolean to turn ReadLine ornamenting on/off
test_report        email test reports (if CPAN::Reporter is installed)
trust_test_report_history
                   skip testing when previously tested ok (according
                   to CPAN::Reporter history)
unzip              location of external program unzip
urllist            arrayref to nearby CPAN sites (or equivalent locations)
urllist_ping_external
                   use external ping command when autoselecting mirrors
urllist_ping_verbose
                   increase verbosity when autoselecting mirrors
use_prompt_default set PERL_MM_USE_DEFAULT for configure/make/test/install
use_sqlite         use CPAN::SQLite for metadata storage (fast and lean)
username           your username if your CPAN server wants one
version_timeout    stops version parsing after this many seconds.
                   Default is 15 secs. Set to 0 to disable.
wait_list          arrayref to a wait server to try (See CPAN::WAIT)
wget               path to external prg
yaml_load_code     enable YAML code deserialisation via CPAN::DeferredCode
yaml_module        which module to use to read/write YAML files
```

You can set and query each of these options interactively in the cpan shell with the `o conf` or the `o conf init` command as specified below.

`o conf <scalar option>`

Prints the current value of the *scalar option*

`o conf <scalar option> <value>`

Sets the value of the *scalar option* to *value*

`o conf <list option>`

Prints the current value of the *list option* in MakeMaker's neatvalue format.

`o conf <list option> [shift|pop]`

Shifts or pops the array in the *list option* variable

`o conf <list option> [unshift|push|splice] <list>`

Works like the corresponding perl commands.

interactive editing: o conf init [MATCH|LIST]

Runs an interactive configuration dialog for matching variables. Without argument runs the dialog over all supported config variables. To specify a MATCH the argument must be enclosed by slashes.

Examples:

```
o conf init ftp_passive ftp_proxy
o conf init /color/
```

Note: this method of setting config variables often provides more explanation about the functioning of a variable than the manpage.

CPAN::anycwd($path): Note on config variable getcwd
---

CPAN.pm changes the current working directory often and needs to determine its own current working directory. By default it uses Cwd::cwd, but if for some reason this doesn't work on your system, configure alternatives according to the following table:

```
cwd          Calls Cwd::cwd
getcwd       Calls Cwd::getcwd
fastcwd      Calls Cwd::fastcwd
getdcwd      Calls Cwd::getdcwd
backtickcwd  Calls the external command cwd
```

Note on the format of the urllist parameter
---

urllist parameters are URLs according to RFC 1738. We do a little guessing if your URL is not compliant, but if you have problems with `file` URLs, please try the correct format. Either:

```
file://localhost/whatever/ftp/pub/CPAN/
```

or

```
file:///home/ftp/pub/CPAN/
```

The urllist parameter has CD-ROM support
---

The `urllist` parameter of the configuration table contains a list of URLs used for downloading. If the list contains any `file` URLs, CPAN always tries there first. This feature is disabled for index files.
So the recommendation for the owner of a CD-ROM with CPAN contents is: include your local, possibly outdated CD-ROM as a `file` URL at the end of urllist, e.g.

```
o conf urllist push file://localhost/CDROM/CPAN
```

CPAN.pm will then fetch the index files from one of the CPAN sites that come at the beginning of urllist. It will later check for each module whether there is a local copy of the most recent version.

Another peculiarity of urllist is that the site that we could successfully fetch the last file from automatically gets a preference token and is tried as the first site for the next request. So if you add a new site at runtime, it may happen that the previously preferred site will be tried another time. This means that if you want to disallow a site for the next transfer, it must be explicitly removed from urllist.

Maintaining the urllist parameter
---

If you have YAML.pm (or some other YAML module configured in `yaml_module`) installed, CPAN.pm collects some statistical data about recent downloads. You can view the statistics with the `hosts` command or inspect them directly by looking into the `FTPstats.yml` file in your `cpan_home` directory. To get some interesting statistics, it is recommended that `randomize_urllist` be set; this introduces some amount of randomness into the URL selection.

The `requires` and `build_requires` dependency declarations
---

Since CPAN.pm version 1.88_51, modules declared as `build_requires` by a distribution are treated differently depending on the config variable `build_requires_install_policy`. By setting `build_requires_install_policy` to `no`, such a module is not installed. It is only built and tested, and then kept in the list of tested but uninstalled modules. As such, it is available during the build of the dependent module by integrating the path to the `blib/arch` and `blib/lib` directories in the environment variable PERL5LIB. If `build_requires_install_policy` is set to `yes`, then both modules declared as `requires` and those declared as `build_requires` are treated alike. By setting it to `ask/yes` or `ask/no`, CPAN.pm asks the user and sets the default accordingly.

Configuration of the allow_installing_* parameters
---

The `allow_installing_*` parameters are evaluated during the `make` phase. If set to `yes`, they allow the testing and the installation of the current distro and otherwise have no effect. If set to `no`, they may abort the build (preventing testing and installing), depending on the contents of the `blib/` directory. The `blib/` directory is the directory that holds all the files that would usually be installed in the `install` phase.

`allow_installing_outdated_dists` compares the `blib/` directory with the CPAN index. If it finds something there that belongs, according to the index, to a different dist, it aborts the current build.

`allow_installing_module_downgrades` compares the `blib/` directory with the already installed modules, or more precisely their version numbers, as determined by ExtUtils::MakeMaker or equivalent. If a to-be-installed module would downgrade an already installed module, the current build is aborted.

An interesting twist occurs when a distroprefs document demands the installation of an outdated dist via goto while `allow_installing_outdated_dists` forbids it. Without additional provisions, this would let `allow_installing_outdated_dists` win and the distroprefs document lose.
So the proper arrangement in such a case is to write a second distroprefs document for the distro that `goto` points to and overrule the `cpanconfig` there. E.g.:

```
---
match:
  distribution: "^MAUKE/Keyword-Simple-0.04.tar.gz"
goto: "MAUKE/Keyword-Simple-0.03.tar.gz"
---
match:
  distribution: "^MAUKE/Keyword-Simple-0.03.tar.gz"
cpanconfig:
  allow_installing_outdated_dists: yes
```

Configuration for individual distributions (*Distroprefs*)
---

(**Note:** This feature has been introduced in CPAN.pm 1.8854)

Distributions on CPAN usually behave according to what we call the CPAN mantra. Or since the advent of Module::Build we should talk about two mantras:

```
perl Makefile.PL     perl Build.PL
make                 ./Build
make test            ./Build test
make install         ./Build install
```

But some modules cannot be built with this mantra. They try to get some extra data from the user via the environment, extra arguments, or interactively, thus disturbing the installation of large bundles like Phalanx100 or modules with many dependencies like Plagger.

The distroprefs system of `CPAN.pm` addresses this problem by allowing the user to specify extra information and recipes in YAML files to either

* pass additional arguments to one of the four commands,
* set environment variables,
* instantiate an Expect object that reads from the console, waits for some regular expressions and enters some answers,
* temporarily override assorted `CPAN.pm` configuration variables,
* specify dependencies the original maintainer forgot, or
* disable the installation of an object altogether.

See the YAML and Data::Dumper files that come with the `CPAN.pm` distribution in the `distroprefs/` directory for examples.

Filenames
---

The YAML files themselves must have the `.yml` extension; all other files are ignored (for two exceptions see *Fallback Data::Dumper and Storable* below). The containing directory can be specified in `CPAN.pm` in the `prefs_dir` config variable. Try `o conf init prefs_dir` in the CPAN shell to set and activate the distroprefs system.

Every YAML file may contain arbitrary documents according to the YAML specification, and every document is treated as an entity that can specify the treatment of a single distribution.

Filenames can be picked arbitrarily; `CPAN.pm` always reads all files (in alphabetical order) and takes the key `match` (see below in *Language Specs*) as a hashref containing match criteria that determine if the current distribution matches the YAML document or not.

Fallback Data::Dumper and Storable
---

If neither your configured `yaml_module` nor YAML.pm is installed, CPAN.pm falls back to using Data::Dumper and Storable and looks for files with the extensions `.dd` or `.st` in the `prefs_dir` directory. These files are expected to contain one or more hashrefs. For Data::Dumper generated files, this is expected to be done by defining `$VAR1`, `$VAR2`, etc. The YAML shell would produce these with the command

```
ysh < somefile.yml > somefile.dd
```

For Storable files the rule is that they must be constructed such that `Storable::retrieve(file)` returns an array reference and the array elements represent one distropref object each. The conversion from YAML would look like so:

```
perl -MYAML=LoadFile -MStorable=nstore -e '
    @y=LoadFile(shift);
    nstore(\@y, shift)' somefile.yml somefile.st
```

In bootstrapping situations it is usually sufficient to translate only a few YAML files to Data::Dumper for crucial modules like `YAML::Syck`, `YAML.pm` and `Expect.pm`.
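For illustration, a single document in such a `.dd` file might look roughly like this (hypothetical contents; only the `$VAR1` convention and the key names follow the surrounding text):

```
# somefile.dd -- one distroprefs document in Data::Dumper notation.
$VAR1 = {
          'comment' => 'converted from somefile.yml',
          'match' => {
                       'distribution' => '^INGY/YAML-'
                     },
          'disabled' => 1
        };
```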
If you prefer Storable over Data::Dumper, remember to pull out a Storable version that writes an older format than all the other Storable versions that will need to read them.

Blueprint
---

The following example contains all supported keywords and structures with the exception of `eexpect`, which can be used instead of `expect`.

```
---
comment: "Demo"
match:
    module: "Dancing::Queen"
    distribution: "^CHACHACHA/Dancing-"
    not_distribution: "\.zip$"
    perl: "/usr/local/cariba-perl/bin/perl"
    perlconfig:
        archname: "freebsd"
        not_cc: "gcc"
    env:
        DANCING_FLOOR: "Shubiduh"
disabled: 1
cpanconfig:
    make: gmake
pl:
    args:
        - "--somearg=specialcase"
    env: {}
    expect:
        - "Which is your favorite fruit"
        - "apple\n"
make:
    args:
        - all
        - extra-all
    env: {}
    expect: []
    commandline: "echo SKIPPING make"
test:
    args: []
    env: {}
    expect: []
install:
    args: []
    env:
        WANT_TO_INSTALL: YES
    expect:
        - "Do you really want to install"
        - "y\n"
patches:
    - "ABCDE/Fedcba-3.14-ABCDE-01.patch"
depends:
    configure_requires:
        LWP: 5.8
    build_requires:
        Test::Exception: 0.25
    requires:
        Spiffy: 0.30
```

Language Specs
---

Every YAML document represents a single hash reference. The valid keys in this hash are as follows:

comment [scalar]

A comment

cpanconfig [hash]

Temporarily override assorted `CPAN.pm` configuration variables. Supported are: `build_requires_install_policy`, `check_sigs`, `make`, `make_install_make_command`, `prefer_installer`, `test_report`. Please report it as a bug when you need another one supported.

depends [hash] *** EXPERIMENTAL FEATURE ***

All three types, namely `configure_requires`, `build_requires`, and `requires`, are supported in the way specified in the META.yml specification. The current implementation *merges* the specified dependencies with those declared by the package maintainer. In a future implementation this may be changed to override the original declaration.

disabled [boolean]

Specifies that this distribution shall not be processed at all.

features [array] *** EXPERIMENTAL FEATURE ***

Experimental implementation to deal with optional_features from META.yml. Still needs coordination with installer software and currently works only for META.yml declaring `dynamic_config=0`. Use with caution.

goto [string]

The canonical name of a delegate distribution to install instead. Useful when a new version, although it tests OK itself, breaks something else, or when a developer release or a fork that is better than the last released version has already been uploaded.

install [hash]

Processing instructions for the `make install` or `./Build install` phase of the CPAN mantra. See below under *Processing Instructions*.

make [hash]

Processing instructions for the `make` or `./Build` phase of the CPAN mantra. See below under *Processing Instructions*.

match [hash]

A hashref with one or more of the keys `distribution`, `module`, `perl`, `perlconfig`, and `env` that specify whether a document is targeted at a specific CPAN distribution or installation. Keys prefixed with `not_` negate the corresponding match. The corresponding values are interpreted as regular expressions. The `distribution` related one will be matched against the canonical distribution name, e.g. "AUTHOR/Foo-Bar-3.14.tar.gz". The `module` related one will be matched against *all* modules contained in the distribution until one module matches. The `perl` related one will be matched against `$^X` (but with the absolute path).
The value associated with `perlconfig` is itself a hashref that is matched against corresponding values in the `%Config::Config` hash living in the `Config.pm` module. Keys prefixed with `not_` negate the corresponding match.

The value associated with `env` is itself a hashref that is matched against corresponding values in the `%ENV` hash. Keys prefixed with `not_` negate the corresponding match.

If more than one restriction of `module`, `distribution`, etc. is specified, the results of the separately computed match values must all match. If so, the hashref represented by the YAML document is returned as the preference structure for the current distribution.

patches [array]

An array of patches on CPAN or on the local disk to be applied in order via an external patch program. Whether the `-p` parameter gets the value `0` or `1` is determined by reading the patch beforehand. The path to each patch is either an absolute path on the local filesystem, relative to a patch directory specified in the `patches_dir` configuration variable, or in the format of a canonical distro name. For examples please consult the distroprefs/ directory in the CPAN.pm distribution (these examples are not installed by default).

Note: if the `applypatch` program is installed and `CPAN::Config` knows about it **and** a patch is written by the `makepatch` program, then `CPAN.pm` lets `applypatch` apply the patch. Both `makepatch` and `applypatch` are available from CPAN in the `JV/makepatch-*` distribution.

pl [hash]

Processing instructions for the `perl Makefile.PL` or `perl Build.PL` phase of the CPAN mantra. See below under *Processing Instructions*.

test [hash]

Processing instructions for the `make test` or `./Build test` phase of the CPAN mantra. See below under *Processing Instructions*.

Processing Instructions
---

args [array]

Arguments to be added to the command line

commandline

A full commandline to run via `system()`. During execution, the environment variable PERL is set to $^X (but with an absolute path). If `commandline` is specified, `args` is not used.

eexpect [hash]

Extended `expect`. This is a hash reference with four allowed keys, `mode`, `timeout`, `reuse`, and `talk`. You must install the `Expect` module to use `eexpect`. CPAN.pm does not install it for you.

`mode` may have the values `deterministic` for the case where all questions come in the order written down and `anyorder` for the case where the questions may come in any order. The default mode is `deterministic`.

`timeout` denotes a timeout in seconds. Floating-point timeouts are OK. With `mode=deterministic`, the timeout denotes the timeout per question; with `mode=anyorder` it denotes the timeout per byte received from the stream or questions.

`talk` is a reference to an array that contains alternating questions and answers. Questions are regular expressions and answers are literal strings. The Expect module watches the stream from the execution of the external program (`perl Makefile.PL`, `perl Build.PL`, `make`, etc.).

For `mode=deterministic`, CPAN.pm injects the corresponding answer as soon as the stream matches the regular expression. For `mode=anyorder`, CPAN.pm answers a question as soon as the timeout is reached for the next byte in the input stream. In this mode you can use the `reuse` parameter to decide what will happen with a question-answer pair after it has been used. In the default case (reuse=0) it is removed from the array, avoiding being used again accidentally.
If you want to answer the question `Do you really want to do that` several times, then it must be included in the array at least as often as you want this answer to be given. Setting the parameter `reuse` to 1 makes this repetition unnecessary.

env [hash]

Environment variables to be set during the command

expect [array]

You must install the `Expect` module to use `expect`. CPAN.pm does not install it for you.

`expect: <array>` is a short notation for this `eexpect`:

```
eexpect:
    mode: deterministic
    timeout: 15
    talk: <array>
```

Schema verification with `Kwalify`
---

If you have the `Kwalify` module installed (which is part of Bundle::CPANxxl), then all your distroprefs files are checked for syntactic correctness.

Example Distroprefs Files
---

`CPAN.pm` comes with a collection of example YAML files. Note that these are really just examples and should not be used without care, because they cannot fit everybody's purpose. After all, the authors of the packages that ask questions had a need to ask, so you should watch their questions and adjust the examples to your environment and your needs. You have been warned. :-)

PROGRAMMER'S INTERFACE
===

If you do not enter the shell, shell commands are available both as methods (`CPAN::Shell->install(...)`) and as functions in the calling package (`install(...)`). Before calling low-level commands, it makes sense to initialize components of CPAN you need, e.g.:

```
CPAN::HandleConfig->load;
CPAN::Shell::setup_output;
CPAN::Index->reload;
```

High-level commands do such initializations automatically.

There's currently only one class that has a stable interface: CPAN::Shell. All commands that are available in the CPAN shell are methods of the class CPAN::Shell. The arguments on the commandline are passed as arguments to the method. So if you take for example the shell command

```
notest install A B C
```

the actually executed command is

```
CPAN::Shell->notest("install","A","B","C");
```

Each of the commands that produce listings of modules (`r`, `autobundle`, `u`) also returns a list of the IDs of all modules within the list.

expand($type,@things)

The IDs of all objects available within a program are strings that can be expanded to the corresponding real objects with the `CPAN::Shell->expand("Module",@things)` method. Expand returns a list of CPAN::Module objects according to the `@things` arguments given. In scalar context, it returns only the first element of the list.

expandany(@things)

Like expand, but returns objects of the appropriate type, i.e. CPAN::Bundle objects for bundles, CPAN::Module objects for modules, and CPAN::Distribution objects for distributions. Note: it does not expand to CPAN::Author objects.

Programming Examples
---

This enables the programmer to do operations that combine functionalities that are available in the shell.
```
# install everything that is outdated on my disk:
perl -MCPAN -e 'CPAN::Shell->install(CPAN::Shell->r)'

# install my favorite programs if necessary:
for $mod (qw(Net::FTP Digest::SHA Data::Dumper)) {
    CPAN::Shell->install($mod);
}

# list all modules on my disk that have no VERSION number
for $mod (CPAN::Shell->expand("Module","/./")) {
    next unless $mod->inst_file;
    # MakeMaker convention for undefined $VERSION:
    next unless $mod->inst_version eq "undef";
    print "No VERSION in ", $mod->id, "\n";
}

# find out which distribution on CPAN contains a module:
print CPAN::Shell->expand("Module","Apache::Constants")->cpan_file
```

Or if you want to schedule a *cron* job to watch CPAN, you could list all modules that need updating. First a quick and dirty way:

```
perl -e 'use CPAN; CPAN::Shell->r;'
```

If you don't want any output when all modules are up to date, parse the output of the above command for the regular expression `/modules are up to date/` and decide to mail the output only if it doesn't match.

If you prefer to do it more in a programmerish style in one single process, something like this may better suit you:

```
# list all modules on my disk that have newer versions on CPAN
for $mod (CPAN::Shell->expand("Module","/./")) {
    next unless $mod->inst_file;
    next if $mod->uptodate;
    printf "Module %s is installed as %s, could be updated to %s from CPAN\n",
        $mod->id, $mod->inst_version, $mod->cpan_version;
}
```

If that gives too much output every day, you may want to watch only for three modules. You can write

```
for $mod (CPAN::Shell->expand("Module","/Apache|LWP|CGI/")) {
```

as the first line instead. Or you can combine some of the above tricks:

```
# watch only for a new mod_perl module
$mod = CPAN::Shell->expand("Module","mod_perl");
exit if $mod->uptodate;
# new mod_perl arrived, let me know all update recommendations
CPAN::Shell->r;
```

Methods in the other Classes
---

CPAN::Author::as_glimpse()

Returns a one-line description of the author

CPAN::Author::as_string()

Returns a multi-line description of the author

CPAN::Author::email()

Returns the author's email address

CPAN::Author::fullname()

Returns the author's name

CPAN::Author::name()

An alias for fullname

CPAN::Bundle::as_glimpse()

Returns a one-line description of the bundle

CPAN::Bundle::as_string()

Returns a multi-line description of the bundle

CPAN::Bundle::clean()

Recursively runs the `clean` method on all items contained in the bundle.

CPAN::Bundle::contains()

Returns a list of objects' IDs contained in a bundle. The associated objects may be bundles, modules or distributions.

CPAN::Bundle::force($method,@args)

Forces CPAN to perform a task that it normally would have refused to do. Force takes as arguments a method name to be called and any number of additional arguments that should be passed to the called method. The internals of the object get the needed changes so that CPAN.pm does not refuse to take the action. The `force` is passed recursively to all contained objects. See also the section above on the `force` and the `fforce` pragmas.

CPAN::Bundle::get()

Recursively runs the `get` method on all items contained in the bundle

CPAN::Bundle::inst_file()

Returns the highest installed version of the bundle in either @INC or `$CPAN::Config->{cpan_home}`. Note that this is different from CPAN::Module::inst_file.

CPAN::Bundle::inst_version()

Like CPAN::Bundle::inst_file, but returns the $VERSION

CPAN::Bundle::uptodate()

Returns 1 if the bundle itself and all its members are up-to-date.
CPAN::Bundle::install()

Recursively runs the `install` method on all items contained in the bundle

CPAN::Bundle::make()

Recursively runs the `make` method on all items contained in the bundle

CPAN::Bundle::readme()

Recursively runs the `readme` method on all items contained in the bundle

CPAN::Bundle::test()

Recursively runs the `test` method on all items contained in the bundle

CPAN::Distribution::as_glimpse()

Returns a one-line description of the distribution

CPAN::Distribution::as_string()

Returns a multi-line description of the distribution

CPAN::Distribution::author

Returns the CPAN::Author object of the maintainer who uploaded this distribution

CPAN::Distribution::pretty_id()

Returns a string of the form "AUTHORID/TARBALL", where AUTHORID is the author's PAUSE ID and TARBALL is the distribution filename.

CPAN::Distribution::base_id()

Returns the distribution filename without any archive suffix, e.g. "Foo-Bar-0.01"

CPAN::Distribution::clean()

Changes to the directory where the distribution has been unpacked and runs `make clean` there.

CPAN::Distribution::containsmods()

Returns a list of IDs of modules contained in a distribution file. Works only for distributions listed in the 02packages.details.txt.gz file. This typically means that just the most recent version of a distribution is covered.

CPAN::Distribution::cvs_import()

Changes to the directory where the distribution has been unpacked and runs something like

```
cvs -d $cvs_root import -m $cvs_log $cvs_dir $userid v$version
```

there.

CPAN::Distribution::dir()

Returns the directory into which this distribution has been unpacked.

CPAN::Distribution::force($method,@args)

Forces CPAN to perform a task that it normally would have refused to do. Force takes as arguments a method name to be called and any number of additional arguments that should be passed to the called method. The internals of the object get the needed changes so that CPAN.pm does not refuse to take the action. See also the section above on the `force` and the `fforce` pragmas.

CPAN::Distribution::get()

Downloads the distribution from CPAN and unpacks it. Does nothing if the distribution has already been downloaded and unpacked within the current session.

CPAN::Distribution::install()

Changes to the directory where the distribution has been unpacked and runs the external command `make install` there. If `make` has not yet been run, it will be run first. A `make test` is issued in any case and if this fails, the install is cancelled. The cancellation can be avoided by letting `force` run the `install` for you.

This install method only has the power to install the distribution if there are no dependencies in the way. To install an object along with all its dependencies, use CPAN::Shell->install.

Note that install() gives no meaningful return value. See uptodate().

CPAN::Distribution::isa_perl()

Returns 1 if this distribution file seems to be a perl distribution. Normally this is derived from the file name only, but the index from CPAN can contain a hint to achieve a return value of true for other filenames too.

CPAN::Distribution::look()

Changes to the directory where the distribution has been unpacked and opens a subshell there. Exiting the subshell returns.

CPAN::Distribution::make()

First runs the `get` method to make sure the distribution is downloaded and unpacked. Changes to the directory where the distribution has been unpacked and runs the external commands `perl Makefile.PL` or `perl Build.PL` and `make` there.
CPAN::Distribution::perldoc()

Downloads the pod documentation of the file associated with a distribution (in HTML format) and runs it through the external command *lynx* specified in `$CPAN::Config->{lynx}`. If *lynx* isn't available, it converts it to plain text with the external command *html2text* and runs it through the pager specified in `$CPAN::Config->{pager}`.

CPAN::Distribution::prefs()

Returns the hash reference from the first matching YAML file that the user has deposited in the `prefs_dir/` directory. The first succeeding match wins. The files in the `prefs_dir/` are processed alphabetically, and the canonical distro name (e.g. AUTHOR/Foo-Bar-3.14.tar.gz) is matched against the regular expressions stored in the $root->{match}{distribution} attribute value. Additionally all module names contained in a distribution are matched against the regular expressions in the $root->{match}{module} attribute value. The two match values are ANDed together. Each of the two attributes is optional.

CPAN::Distribution::prereq_pm()

Returns the hash reference that has been announced by a distribution as the `requires` and `build_requires` elements. These can be declared either by the `META.yml` (if authoritative) or can be deposited after the run of `Build.PL` in the file `./_build/prereqs` or, after the run of `Makefile.PL`, written as the `PREREQ_PM` hash in a comment in the produced `Makefile`. *Note*: this method only works after an attempt has been made to `make` the distribution. Returns undef otherwise.

CPAN::Distribution::readme()

Downloads the README file associated with a distribution and runs it through the pager specified in `$CPAN::Config->{pager}`.

CPAN::Distribution::reports()

Downloads report data for this distribution from www.cpantesters.org and displays a subset of them.

CPAN::Distribution::read_yaml()

Returns the content of the META.yml of this distro as a hashref. Note: works only after an attempt has been made to `make` the distribution. Returns undef otherwise. Also returns undef if the content of META.yml is not authoritative. (The rules about what exactly makes the content authoritative are still in flux.)

CPAN::Distribution::test()

Changes to the directory where the distribution has been unpacked and runs `make test` there.

CPAN::Distribution::uptodate()

Returns 1 if all the modules contained in the distribution are up-to-date. Relies on containsmods.

CPAN::Index::force_reload()

Forces a reload of all indices.

CPAN::Index::reload()

Reloads all indices if they have not been read for more than `$CPAN::Config->{index_expire}` days.

CPAN::InfoObj::dump()

CPAN::Author, CPAN::Bundle, CPAN::Module, and CPAN::Distribution inherit this method. It prints the data structure associated with an object. Useful for debugging. Note: the data structure is considered internal and thus subject to change without notice.

CPAN::Module::as_glimpse()

Returns a one-line description of the module in four columns: The first column contains the word `Module`, the second column consists of one character: an equals sign if this module is already installed and up-to-date, a less-than sign if this module is installed but can be upgraded, and a space if the module is not installed. The third column is the name of the module and the fourth column gives maintainer or distribution information.

CPAN::Module::as_string()

Returns a multi-line description of the module

CPAN::Module::clean()

Runs a clean on the distribution associated with this module.
CPAN::Module::cpan_file()

Returns the filename on CPAN that is associated with the module.

CPAN::Module::cpan_version()

Returns the latest version of this module available on CPAN.

CPAN::Module::cvs_import()

Runs a cvs_import on the distribution associated with this module.

CPAN::Module::description()

Returns a 44 character description of this module. Only available for modules listed in The Module List (CPAN/modules/00modlist.long.html or 00modlist.long.txt.gz)

CPAN::Module::distribution()

Returns the CPAN::Distribution object that contains the current version of this module.

CPAN::Module::dslip_status()

Returns a hash reference. The keys of the hash are the letters `D`, `S`, `L`, `I`, and `P`, for development status, support level, language, interface and public licence respectively. The data for the DSLIP status are collected by pause.perl.org when authors register their namespaces. The values of the 5 hash elements are one-character words whose meaning is described in the table below. There are also 5 hash elements `DV`, `SV`, `LV`, `IV`, and `PV` that carry a more verbose value of the 5 status variables.

Where the 'DSLIP' characters have the following meanings:

```
D - Development Stage  (Note: *NO IMPLIED TIMESCALES*):
  i   - Idea, listed to gain consensus or as a placeholder
  c   - under construction but pre-alpha (not yet released)
  a/b - Alpha/Beta testing
  R   - Released
  M   - Mature (no rigorous definition)
  S   - Standard, supplied with Perl 5

S - Support Level:
  m   - Mailing-list
  d   - Developer
  u   - Usenet newsgroup comp.lang.perl.modules
  n   - None known, try comp.lang.perl.modules
  a   - abandoned; volunteers welcome to take over maintenance

L - Language Used:
  p   - Perl-only, no compiler needed, should be platform independent
  c   - C and perl, a C compiler will be needed
  h   - Hybrid, written in perl with optional C code, no compiler needed
  +   - C++ and perl, a C++ compiler will be needed
  o   - perl and another language other than C or C++

I - Interface Style
  f   - plain Functions, no references used
  h   - hybrid, object and function interfaces available
  n   - no interface at all (huh?)
  r   - some use of unblessed References or ties
  O   - Object oriented using blessed references and/or inheritance

P - Public License
  p   - Standard-Perl: user may choose between GPL and Artistic
  g   - GPL: GNU General Public License
  l   - LGPL: "GNU Lesser General Public License" (previously known as
        "GNU Library General Public License")
  b   - BSD: The BSD License
  a   - Artistic license alone
  2   - Artistic license 2.0 or later
  o   - open source: approved by www.opensource.org
  d   - allows distribution without restrictions
  r   - restricted distribution
  n   - no license at all
```

CPAN::Module::force($method,@args)

Forces CPAN to perform a task it would normally refuse to do. Force takes as arguments a method name to be invoked and any number of additional arguments to pass that method. The internals of the object get the needed changes so that CPAN.pm does not refuse to take the action. See also the section above on the `force` and the `fforce` pragmas.

CPAN::Module::get()

Runs a get on the distribution associated with this module.

CPAN::Module::inst_file()

Returns the filename of the module found in @INC. The first file found is reported, just as perl itself stops searching @INC once it finds a module.

CPAN::Module::available_file()

Returns the filename of the module found in PERL5LIB or @INC. The first file found is reported.
The advantage of this method over `inst_file` is that modules that have been tested but not yet installed are included, because PERL5LIB keeps track of tested modules.

CPAN::Module::inst_version()

Returns the version number of the installed module in readable format.

CPAN::Module::available_version()

Returns the version number of the available module in readable format.

CPAN::Module::install()

Runs an `install` on the distribution associated with this module.

CPAN::Module::look()

Changes to the directory where the distribution associated with this module has been unpacked and opens a subshell there. Exiting the subshell returns.

CPAN::Module::make()

Runs a `make` on the distribution associated with this module.

CPAN::Module::manpage_headline()

If the module is installed, peeks into the module's manpage, reads the headline, and returns it. Moreover, if the module has been downloaded within this session, does the equivalent on the downloaded module even if it hasn't been installed yet.

CPAN::Module::perldoc()

Runs a `perldoc` on this module.

CPAN::Module::readme()

Runs a `readme` on the distribution associated with this module.

CPAN::Module::reports()

Calls the reports() method on the associated distribution object.

CPAN::Module::test()

Runs a `test` on the distribution associated with this module.

CPAN::Module::uptodate()

Returns 1 if the module is installed and up-to-date.

CPAN::Module::userid()

Returns the author's ID of the module.

Cache Manager
---

Currently the cache manager only keeps track of the build directory ($CPAN::Config->{build_dir}). It is a simple FIFO mechanism that deletes complete directories below `build_dir` as soon as the size of all directories there gets bigger than $CPAN::Config->{build_cache} (in MB). The contents of this cache may be used for later re-installations that you intend to do manually, but will never be trusted by CPAN itself. This is due to the fact that the user might use these directories for building modules on different architectures.

There is another directory ($CPAN::Config->{keep_source_where}) where the original distribution files are kept. This directory is not covered by the cache manager and must be controlled by the user. If you choose to have the same directory as build_dir and as keep_source_where directory, then your sources will be deleted with the same FIFO mechanism.

Bundles
---

A bundle is just a perl module in the namespace Bundle:: that does not define any functions or methods. It usually only contains documentation. It starts like a perl module with a package declaration and a $VERSION variable. After that the pod section looks like any other pod with the only difference being that *one special pod section* exists starting with (verbatim):

```
=head1 CONTENTS
```

In this pod section each line obeys the format

```
Module_Name [Version_String] [- optional text]
```

The only required part is the first field, the name of a module (e.g. Foo::Bar, i.e. *not* the name of the distribution file). The rest of the line is optional. The comment part is delimited by a dash just as in the man page header.

The distribution of a bundle should follow the same convention as other distributions.

Bundles are treated specially in the CPAN package. If you say 'install Bundle::Tkkit' (assuming such a bundle exists), CPAN will install all the modules in the CONTENTS section of the pod. You can install your own Bundles locally by placing a conformant Bundle file somewhere into your @INC path.
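For instance, a minimal hand-written bundle could look like the following sketch (the name and the CONTENTS entries are invented for illustration):

```
# Bundle/MyFavorites.pm -- a hypothetical private bundle.
package Bundle::MyFavorites;

our $VERSION = '0.01';

1;

__END__

=head1 NAME

Bundle::MyFavorites - modules I want on every machine

=head1 CONTENTS

Net::FTP - used by CPAN.pm for downloads

Digest::SHA 5.40 - with a minimum version string

Data::Dumper

=cut
```

Placed somewhere into @INC as *Bundle/MyFavorites.pm*, it can then be installed with `install Bundle::MyFavorites` from the CPAN shell.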
The autobundle() command available in the shell interface writes such a file for you by including all currently installed modules in a snapshot bundle file.

PREREQUISITES
===

The CPAN program tries to depend on as little as possible, so the user can use it in a hostile environment. It works better the more goodies the environment provides. For example if you try in the CPAN shell

```
install Bundle::CPAN
```

or

```
install Bundle::CPANxxl
```

you will find the shell more convenient than the bare shell you had before.

If you have a local mirror of CPAN and can access all files with "file:" URLs, then you only need a perl later than perl5.003 to run this module. Otherwise Net::FTP is strongly recommended. LWP may be required for non-UNIX systems, or if your nearest CPAN site is associated with a URL that is not `ftp:`.

If you have neither Net::FTP nor LWP, there is a fallback mechanism implemented for an external ftp command or for an external lynx command.

UTILITIES
===

Finding packages and VERSION
---

This module presumes that all packages on CPAN

* declare their $VERSION variable in an easy-to-parse manner. This prerequisite can hardly be relaxed because it consumes far too much memory to load all packages into the running program just to determine the $VERSION variable. Currently all programs that are dealing with versions use something like this

```
perl -MExtUtils::MakeMaker -le \
    'print MM->parse_version(shift)' filename
```

If you are the author of a package and wonder if your $VERSION can be parsed, please try the above method.

* come as compressed or gzipped tarfiles or as zip files and contain a `Makefile.PL` or `Build.PL` (well, we try to handle a bit more, but with little enthusiasm).

Debugging
---

Debugging this module is more than a bit complex due to interference from the software producing the indices on CPAN, the mirroring process on CPAN, packaging, configuration, synchronicity, and even (gasp!) bugs within the CPAN.pm module itself.

For debugging the code of CPAN.pm itself in interactive mode, some debugging aid can be turned on for most packages within CPAN.pm with one of

o debug package...

sets debug mode for packages.

o debug -package...

unsets debug mode for packages.

o debug all

turns debugging on for all packages.

o debug number

sets the debugging packages directly.

Note that `o debug 0` turns debugging off.

What seems a successful strategy is the combination of `reload cpan` and the debugging switches. Add a new debug statement while running in the shell and then issue a `reload cpan` and see the new debugging messages immediately without losing the current context.

`o debug` without an argument lists the valid package names and the current set of packages in debugging mode. `o debug` has built-in completion support.

For debugging of CPAN data there is the `dump` command which takes the same arguments as make/test/install and outputs each object's Data::Dumper dump. If an argument looks like a perl variable and contains one of `$`, `@` or `%`, it is eval()ed and fed to Data::Dumper directly.

Floppy, Zip, Offline Mode
---

CPAN.pm works nicely without network access, too. If you maintain machines that are not networked at all, you should consider working with `file:` URLs. You'll have to collect your modules somewhere first. So you might use CPAN.pm to put together all you need on a networked machine. Then copy the $CPAN::Config->{keep_source_where} (but not $CPAN::Config->{build_dir}) directory onto a floppy. This floppy is kind of a personal CPAN.
CPAN.pm on the non-networked machines works nicely with this floppy. See also below the paragraph about CD-ROM support.

Basic Utilities for Programmers
---

has_inst($module)

Returns true if the module is installed. Used to load all modules into the running CPAN.pm that are considered optional. The config variable `dontload_list` intercepts the `has_inst()` call such that an optional module is not loaded despite being available. For example, the following command will prevent `YAML.pm` from being loaded:

```
cpan> o conf dontload_list push YAML
```

See the source for details.

use_inst($module)

Similarly to `has_inst()`, tries to load an optional library, but dies if the library is not available.

has_usable($module)

Returns true if the module is installed and in a usable state. Only useful for a handful of modules that are used internally. See the source for details.

instance($module)

The constructor for all the singletons used to represent modules, distributions, authors, and bundles. If the object already exists, this method returns the object; otherwise, it calls the constructor.

frontend() frontend($new_frontend)

Getter/setter for the frontend object. This method merely exists to allow subclassing of CPAN.pm.

SECURITY
===

There's no strong security layer in CPAN.pm. CPAN.pm helps you to install foreign, unmasked, unsigned code on your machine. We do compare the download with a checksum, but the checksum comes from the net just as the distribution file itself does. But we try to make it easy to add security on demand:

Cryptographically signed modules
---

Since release 1.77, CPAN.pm has been able to verify cryptographically signed module distributions using Module::Signature. The CPAN modules can be signed by their authors, thus giving more security. The simple unsigned MD5 checksums that were used before by CPAN protect mainly against accidental file corruption.

You will need to have Module::Signature installed, which in turn requires that you have at least one of the Crypt::OpenPGP module or the command-line *gpg* tool installed. You will also need to be able to connect over the Internet to the public key servers, like pgp.mit.edu, on port 11371 (the HKP protocol).

The configuration parameter check_sigs is there to turn signature checking on or off.

EXPORT
===

Most functions in package CPAN are exported by default. The reason for this is that the primary use is intended for the cpan shell or for one-liners.

ENVIRONMENT
===

When the CPAN shell enters a subshell via the look command, it sets the environment variable CPAN_SHELL_LEVEL to 1, or increments that variable if it is already set.

When CPAN runs, it sets the environment variable PERL5_CPAN_IS_RUNNING to the ID of the running process. It also sets PERL5_CPANPLUS_IS_RUNNING to prevent runaway processes which could happen with older versions of Module::Install.

When running `perl Makefile.PL`, the environment variable `PERL5_CPAN_IS_EXECUTING` is set to the full path of the `Makefile.PL` that is being executed. This prevents runaway processes with newer versions of Module::Install.

When the config variable ftp_passive is set, all downloads will be run with the environment variable FTP_PASSIVE set to this value. This is in general a good idea as it influences both Net::FTP and LWP based connections. The same effect can be achieved by starting the cpan shell with this environment variable set. For Net::FTP alone, one can also always set passive mode by running libnetcfg.
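For illustration, a Makefile.PL can inspect `PERL5_CPAN_IS_EXECUTING` to avoid blocking on questions when CPAN.pm is driving the build. The following fragment is only a sketch under that assumption; the module name and the question are invented, and `prompt()` and PERL_MM_USE_DEFAULT are the ExtUtils::MakeMaker facilities mentioned in the FAQ below:

```
# Fragment of a hypothetical Makefile.PL: CPAN.pm sets
# PERL5_CPAN_IS_EXECUTING to the full path of the Makefile.PL
# being executed, so we can detect a non-interactive build.
use strict;
use warnings;
use ExtUtils::MakeMaker;

my $noninteractive = $ENV{PERL5_CPAN_IS_EXECUTING}
                  || $ENV{PERL_MM_USE_DEFAULT};
my $answer = $noninteractive
    ? 'y'                                           # take the default
    : prompt("Enable the optional XS backend?", 'y');

WriteMakefile(
    NAME    => 'My::Module',
    VERSION => '0.01',
    DEFINE  => ($answer =~ /^y/i ? '-DUSE_XS' : ''),
);
```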
POPULATE AN INSTALLATION WITH LOTS OF MODULES
===

Populating a freshly installed perl with one's favorite modules is pretty easy if you maintain a private bundle definition file. To get a useful blueprint of a bundle definition file, the command autobundle can be used on the CPAN shell command line. This command writes a bundle definition file for all modules installed for the current perl interpreter. It's recommended to run this command once only, and from then on maintain the file manually under a private name, say Bundle/my_bundle.pm. With a clever bundle file you can then simply say

```
cpan> install Bundle::my_bundle
```

then answer a few questions and go out for coffee (possibly even in a different city).

Maintaining a bundle definition file means keeping track of two things: dependencies and interactivity. CPAN.pm sometimes fails to calculate dependencies because not all modules define all MakeMaker attributes correctly, so a bundle definition file should specify prerequisites as early as possible. On the other hand, it's annoying that so many distributions need some interactive configuring. So what you can try to accomplish in your private bundle file is to have the packages that need to be configured early in the file and the gentle ones later, so you can go out for coffee after a few minutes and leave CPAN.pm to churn away unattended.

WORKING WITH CPAN.pm BEHIND FIREWALLS
===

Thanks to <NAME> for contributing the following paragraphs about the interaction between perl and various firewall configurations. For further information on firewalls, it is recommended to consult the documentation that comes with the *ncftp* program. If you are unable to go through the firewall with a simple Perl setup, it is likely that you can configure *ncftp* so that it works through your firewall.

Three basic types of firewalls
---

Firewalls can be categorized into three basic types.

http firewall

This is when the firewall machine runs a web server, and to access the outside world, you must do so via that web server. If you set environment variables like http_proxy or ftp_proxy to values beginning with http://, or if you have proxy information set in your web browser, then you know you are running behind an http firewall.

To access servers outside these types of firewalls with perl (even for ftp), you need LWP or HTTP::Tiny.

ftp firewall

This is where the firewall machine runs an ftp server. This kind of firewall will only let you access ftp servers outside the firewall. This is usually done by connecting to the firewall with ftp, then entering a username like "<EMAIL>".

To access servers outside these types of firewalls with perl, you need Net::FTP.

One-way visibility

One-way visibility means these firewalls try to make themselves invisible to users inside the firewall. An FTP data connection is normally created by sending your IP address to the remote server and then listening for the return connection. But the remote server will not be able to connect to you because of the firewall. For these types of firewall, FTP connections need to be done in a passive mode.

There are two such setups that I can think of.

SOCKS

If you are using a SOCKS firewall, you will need to compile perl and link it with the SOCKS library. This is what is normally called a 'socksified' perl. With this executable you will be able to connect to servers outside the firewall as if it were not there.
IP Masquerade

This is when the firewall is implemented in the kernel (via NAT, network address translation); it allows you to hide a complete network behind one IP address. With this firewall no special compiling is needed, as you can access hosts directly.

For accessing ftp servers behind such firewalls you usually need to set the environment variable `FTP_PASSIVE` or the config variable ftp_passive to a true value.

Configuring lynx or ncftp for going through a firewall
---

If you can go through your firewall with e.g. lynx, presumably with a command such as

```
/usr/local/bin/lynx -pscott:tiger
```

then you would configure CPAN.pm with the command

```
o conf lynx "/usr/local/bin/lynx -pscott:tiger"
```

That's all. Similarly for ncftp or ftp, you would configure something like

```
o conf ncftp "/usr/bin/ncftp -f /home/scott/ncftplogin.cfg"
```

Your mileage may vary...

FAQ
===

1) I installed a new version of module X but CPAN keeps saying I have the old version installed.

Probably you **do** have the old version installed. This can happen if a module installs itself into a different directory in the @INC path than the one it was previously installed in. This is not really a CPAN.pm problem; you would have the same problem when installing the module manually. The easiest way to prevent this behaviour is to add the argument `UNINST=1` to the `make install` call, and that is why many people add this argument permanently by configuring

```
o conf make_install_arg UNINST=1
```

2) So why is UNINST=1 not the default?

Because there are people who have their precise expectations about who may install where in the @INC path and who uses which @INC array. In fine-tuned environments `UNINST=1` can cause damage.

3) I want to clean up my mess, and install a new perl along with all modules I have. How do I go about it?

Run the autobundle command for your old perl and optionally rename the resulting bundle file (e.g. Bundle/mybundle.pm), then install the new perl with the Configure option prefix, e.g.

```
./Configure -Dprefix=/usr/local/perl-5.6.78.9
```

Install the bundle file you produced in the first step with something like

```
cpan> install Bundle::mybundle
```

and you're done.

4) When I install bundles or multiple modules with one command there is too much output to keep track of.

You may want to configure something like

```
o conf make_arg "| tee -ai /root/.cpan/logs/make.out"
o conf make_install_arg "| tee -ai /root/.cpan/logs/make_install.out"
```

so that STDOUT is captured in a file for later inspection.

5) I am not root, how can I install a module in a personal directory?

As of CPAN 1.9463, if you do not have permission to write the default perl library directories, CPAN's configuration process will ask you whether you want to bootstrap local::lib, which makes keeping a personal perl library directory easy.

Another thing you should bear in mind is that the UNINST parameter can be dangerous when you are installing into a private area, because you might accidentally remove modules that other people depend on that are not using the private area.

6) How to get a package, unwrap it, and make a change before building it?

Have a look at the `look` (!) command.

7) I installed a Bundle and had a couple of fails. When I retried, everything resolved nicely. Can this be fixed to work on first try?

The reason for this is that CPAN does not know the dependencies of all modules when it starts out. To decide about the additional items to install, it just uses data found in the META.yml file or the generated Makefile.
An undetected missing piece breaks the process. But it may well be that your Bundle installs some prerequisite later than some depending item, and thus your second try is able to resolve everything. Please note, CPAN.pm does not know the dependency tree in advance and cannot sort the queue of things to install in a topologically correct order. It resolves perfectly well **if** all modules declare the prerequisites correctly with the PREREQ_PM attribute to MakeMaker or the `requires` stanza of Module::Build. For bundles which fail and you need to install often, it is recommended to sort the Bundle definition file manually.

8) In our intranet, we have many modules for internal use. How can I integrate these modules with CPAN.pm but without uploading the modules to CPAN?

Have a look at the CPAN::Site module.

9) When I run CPAN's shell, I get an error message about things in my `/etc/inputrc` (or `~/.inputrc`) file.

These are readline issues and can only be fixed by studying readline configuration on your architecture and adjusting the referenced file accordingly. Please make a backup of the `/etc/inputrc` or `~/.inputrc` and edit them. Quite often harmless changes like uppercasing or lowercasing some arguments solve the problem.

10) Some authors have strange characters in their names.

Internally CPAN.pm uses the UTF-8 charset. If your terminal is expecting ISO-8859-1 charset, a converter can be activated by setting term_is_latin to a true value in your config file. One way of doing so would be

```
cpan> o conf term_is_latin 1
```

If other charset support is needed, please file a bug report against CPAN.pm at rt.cpan.org and describe your needs. Maybe we can extend the support, or maybe UTF-8 terminals become widely available.

Note: this config variable is deprecated and will be removed in a future version of CPAN.pm. It will be replaced with the conventions around the family of $LANG and $LC_* environment variables.

11) When an install fails for some reason and then I correct the error condition and retry, CPAN.pm refuses to install the module, saying `Already tried without success`.

Use the force pragma like so

```
force install Foo::Bar
```

Or you can use

```
look Foo::Bar
```

and then `make install` directly in the subshell.

12) How do I install a "DEVELOPER RELEASE" of a module?

By default, CPAN will install the latest non-developer release of a module. If you want to install a dev release, you have to specify the partial path, starting with the author id, to the tarball you wish to install, like so:

```
cpan> install KWILLIAMS/Module-Build-0.27_07.tar.gz
```

Note that you can use the `ls` command to get this path listed.

13) How do I install a module and all its dependencies from the commandline, without being prompted for anything, despite my CPAN configuration (or lack thereof)?

CPAN uses ExtUtils::MakeMaker's prompt() function to ask its questions, so if you set the PERL_MM_USE_DEFAULT environment variable, you shouldn't be asked any questions at all (assuming the modules you are installing are nice about obeying that variable as well):

```
% PERL_MM_USE_DEFAULT=1 perl -MCPAN -e 'install My::Module'
```

14) How do I create a Module::Build based Build.PL derived from an ExtUtils::MakeMaker focused Makefile.PL?

http://search.cpan.org/dist/Module-Build-Convert/

15) I'm frequently irritated with the CPAN shell's inability to help me select a good mirror.

CPAN can now help you select a "good" mirror, based on which ones have the lowest 'ping' round-trip times.
From the shell, use the command 'o conf init urllist' and allow CPAN to automatically select mirrors for you.

Beyond that help, the urllist config parameter is yours. You can add and remove sites at will. You should find out which sites have the best up-to-dateness, bandwidth, reliability, etc. and are topologically close to you. Some people prefer fast downloads, others up-to-dateness, others reliability. You decide which to try in which order.

<NAME> maintains a site that collects data about CPAN sites:

```
http://mirrors.cpan.org/
```

Also, feel free to play with experimental features. Run

```
o conf init randomize_urllist ftpstats_period ftpstats_size
```

and choose your favorite parameters. After a few downloads, running the `hosts` command will probably assist you in choosing the best mirror sites.

16) Why do I get asked the same questions every time I start the shell?

You can make your configuration changes permanent by calling the command `o conf commit`. Alternatively, set the `auto_commit` variable to true by running `o conf init auto_commit` and answering the following question with yes.

17) Older versions of CPAN.pm had the original root directory of all tarballs in the build directory. Now there are always random characters appended to these directory names. Why was this done?

The random characters are provided by File::Temp and ensure that each module's individual build directory is unique. This makes running CPAN.pm in concurrent processes simultaneously safe.

18) Speaking of the build directory. Do I have to clean it up myself?

You have the choice to set the config variable `scan_cache` to `never`. Then you must clean it up yourself. The other possible values, `atstart` and `atexit`, clean up the build directory when you start (or, more precisely, after the first extraction into the build directory) or exit the CPAN shell, respectively. If you never start up the CPAN shell, you probably also have to clean up the build directory yourself.

19) How can I switch to sudo instead of local::lib?

The following five environment variables need to be reset to their previous values: PATH, PERL5LIB, PERL_LOCAL_LIB_ROOT, PERL_MB_OPT, PERL_MM_OPT; and these two CPAN.pm config variables must be reconfigured: make_install_make_command and mbuild_install_build_command. The five environment variables have probably been overwritten in your $HOME/.bashrc or some equivalent. You can either find them there, delete their traces, and log out and in again, or override them temporarily, depending on your exact needs. The two CPAN.pm config variables can be set with:

```
o conf init /install_.*_command/
```

probably followed by

```
o conf commit
```

COMPATIBILITY
===

OLD PERL VERSIONS
---

CPAN.pm is regularly tested to run under 5.005 and assorted newer versions. It is getting more and more difficult to get the minimal prerequisites working on older perls. It is close to impossible to get the whole Bundle::CPAN working there. If you're in the position to have only these old versions, be advised that CPAN is designed to work fine without the Bundle::CPAN installed.

To get things going, note that GBARR/Scalar-List-Utils-1.18.tar.gz is compatible with ancient perls and that File::Temp is listed as a prerequisite, but CPAN has reasonable workarounds if it is missing.

CPANPLUS
---

This module and its competitor, the CPANPLUS module, are both much cooler than the other. CPAN.pm is older. CPANPLUS was designed to be more modular, but it was never intended to be compatible with CPAN.pm.
CPANMINUS
---

In the year 2010 App::cpanminus was launched as a new approach to a cpan shell with a considerably smaller footprint. Very cool stuff.

SECURITY ADVICE
===

This software enables you to upgrade software on your computer and so is inherently dangerous because the newly installed software may contain bugs and may alter the way your computer works or even make it unusable. Please consider backing up your data before every upgrade.

BUGS
===

Please report bugs via <http://rt.cpan.org/>. Before submitting a bug, please make sure that the traditional method of building a Perl module package from a shell by following the installation instructions of that package still works in your environment.

AUTHOR
===

<NAME> `<<EMAIL>>`

LICENSE
===

This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself. See <http://www.perl.com/perl/misc/Artistic.html>.

TRANSLATIONS
===

Kawai, Takanori provides a Japanese translation of a very old version of this manpage at <http://homepage3.nifty.com/hippo2000/perltips/CPAN.htm>.

SEE ALSO
===

Many people enter the CPAN shell by running the [cpan](/pod/distribution/CPAN/scripts/cpan) utility program, which is installed in the same directory as perl itself. So if you have this directory in your PATH variable (or some equivalent in your operating system), then typing `cpan` in a console window will work for you as well. Beyond that, the utility provides several commandline shortcuts.

melezhik (Alexey) sent me a link where he published a chef recipe to work with CPAN.pm: http://community.opscode.com/cookbooks/cpan.
web3_rust_wrapper (Rust)
Macro web3_rust_wrapper::switch
===

```
macro_rules! switch {
    ($v:expr; $($a:expr => $b:expr,)* _ => $e:expr $(,)?) => { ... };
}
```

Emulates a `switch` statement. The syntax is similar to `match`, except that every left-side expression is interpreted as an expression rather than a pattern. The expression to compare against must come first, followed by a semicolon. A default case is required at the end with a `_`, similar to `match`.

Example:

```
use web3_rust_wrapper::switch;

const A: u32 = 1 << 0;
const B: u32 = 1 << 1;

let n = 3;
let val = switch! { n;
    A => false,
    // this is a bitwise OR
    A | B => true,
    _ => false,
};
assert!(val);
```

Struct web3_rust_wrapper::EVMNetwork
===

```
pub struct EVMNetwork {
    pub http_url: String,
    pub ws_url: String,
    pub chain_id: Option<u64>,
}
```

Fields
---

`http_url: String`, `ws_url: String`, `chain_id: Option<u64>`

Implementations
---

### impl EVMNetwork

#### pub fn new(network_id: Network) -> EVMNetwork

Trait Implementations
---

### impl Clone for EVMNetwork

#### fn clone(&self) -> EVMNetwork

Returns a copy of the value.

#### fn clone_from(&mut self, source: &Self)

Performs copy-assignment from `source`.

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

Auto Trait Implementations
---

RefUnwindSafe, Send, Sync, Unpin, UnwindSafe

Blanket Implementations
---

Any, Borrow<T>, BorrowMut<T>, From<T>, Instrument, Into<U>, Pointable, ToOwned, TryFrom<U>, TryInto<U>, VZip<V>, WithSubscriber
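As a quick illustration of the struct above, a minimal sketch (assuming the crate's `Network` enum, documented further below, is in scope):

```
use web3_rust_wrapper::{EVMNetwork, Network};

fn main() {
    // new() fills in the RPC endpoints for the chosen network.
    let network = EVMNetwork::new(Network::BSCTestnet);
    println!("http: {}, chain id: {:?}", network.http_url, network.chain_id);
}
```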
Struct web3_rust_wrapper::KeyPair
===

```
pub struct KeyPair {
    pub secret_key: String,
    pub public_key: String,
}
```

Fields
---

`secret_key: String`, `public_key: String`

Trait Implementations
---

### impl Clone for KeyPair

#### fn clone(&self) -> KeyPair

Returns a copy of the value.

#### fn clone_from(&mut self, source: &Self)

Performs copy-assignment from `source`.

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>

Deserialize this value from the given Serde deserializer.

#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer

Serialize this value into the given Serde serializer.

Auto Trait Implementations
---

RefUnwindSafe, Send, Sync, Unpin, UnwindSafe

Blanket Implementations
---

Any, Borrow<T>, BorrowMut<T>, From<T>, Instrument, Into<U>, Pointable, ToOwned, TryFrom<U>, TryInto<U>, VZip<V>, WithSubscriber
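Since `KeyPair` implements Serde's `Serialize` and `Deserialize`, it can be round-tripped through JSON. A minimal sketch (assuming `serde_json` is available as a dependency; the key values are placeholders):

```
use web3_rust_wrapper::KeyPair;

fn main() {
    let kp = KeyPair {
        secret_key: "0x...".to_string(),
        public_key: "0x...".to_string(),
    };
    // Round-trip through JSON using the derived serde impls.
    let json = serde_json::to_string(&kp).unwrap();
    let back: KeyPair = serde_json::from_str(&json).unwrap();
    assert_eq!(back.public_key, kp.public_key);
}
```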
Struct web3_rust_wrapper::Router
===

```
pub struct Router {
    pub address: String,
    pub factory: String,
}
```

Fields
---

`address: String`, `factory: String`

Implementations
---

### impl Router

#### pub async fn new(network_id: Network)

Trait Implementations
---

### impl Clone for Router

#### fn clone(&self) -> Router

Returns a copy of the value.

#### fn clone_from(&mut self, source: &Self)

Performs copy-assignment from `source`.

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

Auto Trait Implementations
---

RefUnwindSafe, Send, Sync, Unpin, UnwindSafe

Blanket Implementations
---

Any, Borrow<T>, BorrowMut<T>, From<T>, Instrument, Into<U>, Pointable, ToOwned, TryFrom<U>, TryInto<U>, VZip<V>, WithSubscriber
Struct web3_rust_wrapper::Web3Manager
===

```
pub struct Web3Manager {
    pub accounts: Vec<H160>,
    pub web3http: Web3<Http>,
    pub web3web_socket: Web3<WebSocket>,
    /* private fields */
}
```

Fields
---

`accounts: Vec<H160>`, `web3http: Web3<Http>`, `web3web_socket: Web3<WebSocket>`

Implementations
---

### impl Web3Manager

#### pub fn get_current_nonce(&self) -> U256

#### pub fn set_current_nonce(&mut self, new_nonce: U256)

#### pub async fn instance_contract(&self, plain_contract_address: &str, abi_path: &[u8]) -> Result<Contract<Http>, Box<dyn Error>>

#### pub fn generate_keypair() -> (SecretKey, PublicKey)

#### pub fn public_key_address(public_key: &PublicKey) -> Address

#### pub fn generate_keypairs(n: u8) -> Vec<(SecretKey, PublicKey)>

#### pub async fn get_token_balance(&self, token_address: &str, account: H160) -> U256

#### pub fn generate_deadline(&self) -> U256

#### pub async fn swap_tokens_for_exact_tokens(&mut self, account: H160, router_address: &str, token_amount: U256, pairs: &[&str], slippage: usize) -> Result<H256, Error>

#### pub async fn get_token_price(&mut self, router_address: &str, pairs: Vec<H160>) -> U256

#### pub async fn swap_exact_tokens_for_tokens_supporting_fee_on_transfer_tokens(&mut self, account: H160, router_address: &str, token_amount: U256, pairs: &[&str]) -> Result<H256, Error>

#### pub async fn swap_eth_for_exact_tokens(&mut self, account: H160, router_address: &str, token_address: &str, eth_amount: U256, slippage: usize) -> Result<H256, Error>

#### pub async fn get_out_estimated_tokens_for_tokens(&mut self, contract_instance: &Contract<Http>, pair_a: &str, pair_b: &str, amount: &str) -> Result<U256, Error>

#### pub async fn get_eth_balance(&mut self, account: H160) -> U256

#### pub async fn last_nonce(&self, account: H160) -> U256

#### pub async fn load_account(&mut self, plain_address: &str, plain_private_key: &str) -> &mut Web3Manager

#### pub async fn new_from_rpc_url(http_url: &str, websocket_url: &str, u64chain_id: u64) -> Web3Manager

#### pub async fn new(network_id: Network) -> Web3Manager

#### pub async fn gas_price(&self) -> Result<U256, Error>

#### pub async fn get_block(&self) -> Result<U64, Error>

#### pub async fn query_contract<P, T>(&self, contract_instance: &Contract<Http>, func: &str, params: P) -> Result<T, Error> where P: Tokenize, T: Detokenize

#### pub async fn sign_transaction(&mut self, account: H160, transact_obj: TransactionParameters) -> SignedTransaction

#### pub fn encode_tx_parameters(&mut self, nonce: U256, to: Address, value: U256, gas: U256, gas_price: U256, data: Bytes) -> TransactionParameters
#### pub fn encode_tx_data<P>(&mut self, contract: &Contract<Http>, func: &str, params: P) -> Bytes where P: Tokenize

#### pub async fn estimate_tx_gasV1<P>(&mut self, contract: &Contract<Http>, func: &str, params: P, value: &str) -> U256 where P: Tokenize

#### pub fn first_loaded_account(&self) -> H160

#### pub async fn approve_erc20_token(&mut self, account: H160, token_address: &str, spender: &str, value: &str) -> Result<H256, Error>

#### pub async fn sign_and_send_tx<P: Clone>(&mut self, account: H160, contract_instance: &Contract<Http>, func: &str, params: &P, value: U256) -> Result<H256, Error> where P: Tokenize

#### pub async fn sent_eth(&mut self, account: H160, to: H160, amount: &str)

#### pub async fn sent_erc20_token(&mut self, account: H160, contract_instance: Contract<Http>, to: &str, token_amount: &str) -> H256

#### pub async fn get_latest_price(&mut self, network: impl GetAddress, pair_address: &str) -> Int

#### pub async fn listen_contract_events(&mut self, contract_address: &str)

#### pub async fn build_contract_events(&mut self, contract_address: &str) -> SubscriptionStream<WebSocket, Log>

#### pub async fn init_pair(&self, lp_address: &str) -> Contract<Http>

#### pub async fn init_router_factory(&mut self, factory_address: &str) -> Contract<Http>

#### pub async fn init_router(&mut self, router_address: &str) -> Contract<Http>

#### pub async fn token_has_liquidity(&self, lp_pair_factory_instance: Contract<Http>) -> bool

#### pub async fn find_lp_pair(&mut self, factory_address: &str, token_address: &str) -> String

#### pub async fn get_token_reserves(&mut self, lp_pair_factory_instance: Contract<Http>) -> (U256, U256, U256)

Trait Implementations
---

### impl Clone for Web3Manager

#### fn clone(&self) -> Web3Manager

Returns a copy of the value.

#### fn clone_from(&mut self, source: &Self)

Performs copy-assignment from `source`.

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

Auto Trait Implementations
---

!RefUnwindSafe, Send, Sync, Unpin, !UnwindSafe

Blanket Implementations
---

Any, Borrow<T>, BorrowMut<T>, From<T>, Instrument, Into<U>, Pointable, ToOwned, TryFrom<U>, TryInto<U>, VZip<V>, WithSubscriber
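Putting a few of these methods together, a minimal usage sketch (assuming a Tokio runtime; the address and private key are placeholders):

```
use web3_rust_wrapper::{Network, Web3Manager};

#[tokio::main]
async fn main() {
    // Connect using the predefined endpoints for the chosen network.
    let mut manager = Web3Manager::new(Network::BSCTestnet).await;

    // Load an account from its address and private key.
    manager.load_account("0xYourAddress", "0xYourPrivateKey").await;

    // Query the native coin balance of the first loaded account.
    let account = manager.first_loaded_account();
    let balance = manager.get_eth_balance(account).await;
    println!("balance: {}", balance);
}
```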
Enum web3_rust_wrapper::Network
===

```
pub enum Network {
    EthereumMainnet,
    EthereumGoerli,
    EthereumSepolia,
    BSCMainnet,
    BSCTestnet,
    AvalancheMainnet,
    AvalancheTestnet,
}
```

Variants
---

`EthereumMainnet`, `EthereumGoerli`, `EthereumSepolia`, `BSCMainnet`, `BSCTestnet`, `AvalancheMainnet`, `AvalancheTestnet`

Auto Trait Implementations
---

RefUnwindSafe, Send, Sync, Unpin, UnwindSafe

Blanket Implementations
---

Any, Borrow<T>, BorrowMut<T>, From<T>, Instrument, Into<U>, Pointable, TryFrom<U>, TryInto<U>, VZip<V>, WithSubscriber
task_bunny (Hex, Elixir)
TaskBunny
===

[![Hex.pm](https://img.shields.io/hexpm/v/task_bunny.svg "Hex")](https://hex.pm/packages/task_bunny) [![Build Status](https://travis-ci.org/shinyscorpion/task_bunny.svg?branch=master)](https://travis-ci.org/shinyscorpion/task_bunny) [![Inline docs](http://inch-ci.org/github/shinyscorpion/task_bunny.svg?branch=master)](http://inch-ci.org/github/shinyscorpion/task_bunny) [![Deps Status](https://beta.hexfaktor.org/badge/all/github/shinyscorpion/task_bunny.svg)](https://beta.hexfaktor.org/github/shinyscorpion/task_bunny) [![Hex.pm](https://img.shields.io/hexpm/l/task_bunny.svg "License")](LICENSE.md)

TaskBunny is a background processing application written in Elixir that uses RabbitMQ as a messaging backend.

[API Reference](https://hexdocs.pm/task_bunny/)

Use cases
---

Although TaskBunny provides features similar to popular background processing libraries in other languages, such as Resque, Sidekiq, and RQ, you might not need it for the same reasons: an Erlang process or GenServer would always be your first choice for background processing in Elixir. However, you might want to try out TaskBunny in the following cases:

* You want to separate a background processing concern from your Phoenix application
* You use container-based deployment such as Heroku, Docker etc., where each deploy is immutable and disposable
* You want control over retries and their intervals for background job processing
* You want to schedule the job execution time
* You want to use part of the functionality TaskBunny provides to talk to RabbitMQ
* You want to enqueue jobs from other systems via RabbitMQ
* You want to control the concurrency to avoid generating too much traffic

Getting started
---

### 1. Check requirements

* Elixir 1.4+
* RabbitMQ 3.6.0 or greater

### 2. Install TaskBunny

Edit `mix.exs` and add `task_bunny` to your list of dependencies and applications:

```
def deps do
  [{:task_bunny, "~> 0.3.2"}]
end

def application do
  [applications: [:task_bunny]]
end
```

Then run `mix deps.get`.

### 3. Configure TaskBunny

Configure hosts and queues:

```
config :task_bunny, hosts: [
  default: [connect_options: "amqp://localhost?heartbeat=30"]
]

config :task_bunny, queue: [
  namespace: "task_bunny.",
  queues: [[name: "normal", jobs: :default]]
]
```

### 4. Define a TaskBunny job

Use the [`TaskBunny.Job`](TaskBunny.Job.html) module in your job module and define `perform/1`, which takes a map as an argument.

```
defmodule HelloJob do
  use TaskBunny.Job
  require Logger

  def perform(%{"name" => name}) do
    Logger.info("Hello #{name}")
    :ok
  end
end
```

Make sure you return `:ok` or `{:ok, something}` when the job was processed successfully. Otherwise TaskBunny treats the job as failed and moves it to the retry queue.

### 5. Enqueue a TaskBunny job

Then enqueue a job:

```
HelloJob.enqueue!(%{"name" => "Cloud"})
```

The worker invokes the job and `Hello Cloud` appears in your logger output.

Queues
---

#### Worker queue and sub queues

TaskBunny declares four queues for each worker queue on RabbitMQ.

```
config :task_bunny, queue: [
  namespace: "task_bunny.",
  queues: [
    [name: "normal", jobs: :default]
  ]
]
```

If you have a config like the above, TaskBunny will define these four queues on RabbitMQ:

* task_bunny.normal: main worker queue
* task_bunny.normal.retry: queue for retry
* task_bunny.normal.rejected: queue that stores jobs that failed more than the allowed number of times
* task_bunny.normal.delay: queue that stores jobs to be performed in the future

#### Reset queues

TaskBunny provides a mix task to reset queues.
This task deletes the queues and creates them again. Existing messages in the queues will be lost, so please be aware of this.

```
% mix task_bunny.queue.reset
```

You need to redefine a queue when you want to change the retry interval for a queue.

#### Umbrella app

When you use TaskBunny under an umbrella app and each app needs a different queue definition, you can prefix the config key as below so that it doesn't overwrite the other configuration.

```
config :task_bunny, app_a_queue: [
  namespace: "app_a.",
  queues: [
    [name: "normal", jobs: "AppA.*"]
  ]
]

config :task_bunny, app_b_queue: [
  namespace: "app_b.",
  queues: [
    [name: "normal", jobs: "AppB.*"]
  ]
]
```

Enqueue job
---

#### Enqueue

[`TaskBunny.Job`](TaskBunny.Job.html) defines `enqueue/1` and `enqueue!/1` on your job module. Like in other Elixir libraries, `enqueue!/1` is similar to `enqueue/1` but raises an exception when it gets an error during the enqueue.

You can also use [`TaskBunny.Job.enqueue/2`](TaskBunny.Job.html#enqueue/2), which takes a module as its first argument. The two examples below will give you the same result.

```
SampleJob.enqueue!()

TaskBunny.Job.enqueue!(SampleJob)
```

The first expression is more concise and preferred, but the latter lets you enqueue a job without defining the job module. TaskBunny takes the module just as an atom and doesn't check that the module exists when it enqueues. This is useful when you have separate applications for enqueueing and performing.

#### Schedule job

When you don't want to perform the job immediately, you can use the `delay` option when you enqueue the job.

```
SampleJob.enqueue!(delay: 10_000)
```

This will enqueue the job to the worker queue in 10 seconds. When you use the `delay` option, TaskBunny enqueues the job to the delay queue. The job will be moved to the worker queue after the specified time. The move between those queues is handled by RabbitMQ, so the job will be enqueued safely even if your application dies after the call.

#### Enqueue job from another system

The message should be encoded in JSON and contain `job` and `payload` (the argument). For example:

```
{
  "job": "YourApp.HelloJob",
  "payload": {"name": "Aerith"}
}
```

Then send the message to the worker queue and TaskBunny will process it.

#### Select queue

TaskBunny looks up the config and chooses the right queue for the job.

```
config :task_bunny, queue: [
  queues: [
    [name: "default", jobs: :default],
    [name: "fast_track", jobs: [YourApp.RushJob, YourApp.HurryJob]],
    [name: "analytics", jobs: "Analytics.*"]
  ]
]
```

You can configure it with a module name (atom), a string (wildcards are supported), or a list of them. If the job matches one of them, that queue is chosen. If the job doesn't match any of them, the queue with `:default` is chosen.

```
YourApp.RushJob.enqueue(payload) #=> "fast_track"
Analytics.MiningJob.enqueue(payload) #=> "analytics"
YourApp.HelloJob.enqueue(payload) #=> "default"
```

If you pass the queue option, TaskBunny will use it.

```
YourApp.HelloJob.enqueue(payload, queue: "fast_track") #=> "fast_track"
```

Workers
---

#### What is a worker?

A TaskBunny worker is a GenServer that processes jobs and handles errors. A worker listens to a single queue, receives messages (jobs) from it, and invokes the jobs to perform.

#### Concurrency

By default a TaskBunny worker runs two jobs concurrently. You can change the concurrency with the config.

```
config :task_bunny, queue: [
  namespace: "task_bunny.",
  queues: [
    [name: "default", jobs: :default, worker: [concurrency: 1]],
    [name: "analytics", jobs: "Analytics.*", worker: [concurrency: 10]]
  ]
]
```

The concurrency is set per application. If you run your application on five different hosts with the above configuration, there can be 55 jobs performing simultaneously in total.

#### Disable worker

You can prevent workers from starting with your application by setting `1`, `TRUE` or `YES` in the `TASK_BUNNY_DISABLE_WORKER` environment variable.

```
% TASK_BUNNY_DISABLE_WORKER=1 mix phoenix.server
```

You can also disable workers in the config.

```
config :task_bunny, disable_worker: true
```

You can also disable workers for a specific queue with the config.

```
config :task_bunny, queue: [
  namespace: "task_bunny.",
  queues: [
    [name: "default", jobs: :default],
    [name: "analytics", jobs: "Analytics.*", worker: false]
  ]
]
```

With the above, TaskBunny starts a worker only for the default queue.

Control job execution
---

#### Retry

TaskBunny marks the job as failed when:

* the job raises an exception or exits during `perform`
* `perform` doesn't return `:ok` or `{:ok, something}`
* `perform` times out.

TaskBunny retries the job automatically if the job has failed. By default, it retries 10 times, once every 5 minutes. If you want to change this, you can override the values in your job module.

```
defmodule FlakyJob do
  use TaskBunny.Job
  require Logger

  def max_retry, do: 100
  def retry_interval(_), do: 10_000

  ...
end
```

In this example, it will retry 100 times, once every 10 seconds. You can also vary the retry_interval by the number of failures.

```
def max_retry, do: 5

def retry_interval(failed_count) do
  # failed_count will be between 1 and 5.
  # Gradually have longer retry intervals
  [10, 60, 300, 3_600, 7_200]
  |> Enum.map(&(&1 * 1000))
  |> Enum.at(failed_count - 1, 1000)
end
```

If a job fails more than `max_retry` times, the payload is sent to the `jobs.[job_name].rejected` queue. When a job gets rejected, the `on_reject` callback is called. By default it does nothing, but you can override it. This is useful for executing recovery actions when a job fails (like sending an email to a customer, for instance). It receives the body containing the payload of the rejected job plus the full error trace, and must return `:ok`.

```
defmodule FlakyJob do
  use TaskBunny.Job
  require Logger

  def on_reject(_body) do
    ...
    :ok
  end

  ...
end
```

#### Immediately reject

TaskBunny can mark a job as rejected without retrying when `perform` returns `:reject` or `{:reject, something}`. In this case any `max_retry` config is ignored.

#### Timeout

By default, jobs time out after 2 minutes. If a job doesn't respond for more than 2 minutes, the worker kills the process and moves the job to the retry queue. You can change the timeout by overriding `timeout/0` in your job.

```
defmodule SlowJob do
  use TaskBunny.Job

  def timeout, do: 300_000

  ...
end
```

Connection management
---

TaskBunny provides an extra layer on top of the [amqp](https://github.com/pma/amqp) connection module.

#### Configuration

TaskBunny automatically connects to the RabbitMQ hosts in the config at the start of the application. TaskBunny forwards `connect_options` to [AMQP.Connection.open/1](https://hexdocs.pm/amqp/AMQP.Connection.html#open/1).
```
config :task_bunny, hosts: [
  default: [
    connect_options: "amqp://rabbitmq.example.com?heartbeat=30"
  ],
  legacy: [
    connect_options: [
      host: "legacy.example.com",
      port: 15672,
      username: "guest",
      password: "bunny"
    ]
  ]
]
```

The `:default` host has a special meaning in TaskBunny: TaskBunny selects the `:default` host when you don't specify a host.

```
assert TaskBunny.Connection.get_connection() == TaskBunny.Connection.get_connection(:default)
```

You can specify the host per queue:

```
config :task_bunny, queue: [
  queues: [
    [name: "normal", jobs: "MainApp.*"], # => :default host
    [name: "normal", jobs: "Legacy.*", host: :legacy]
  ]
]
```

If you don't want to start TaskBunny automatically in a specific environment, set `disable_auto_start` to `true` in the config:

```
config :task_bunny, disable_auto_start: true
```

#### Get connection

TaskBunny provides two ways to access the connections. Most of the time you will want to use `Connection.get_connection/1` or `Connection.get_connection!/1`, which return the connection synchronously.

```
conn = TaskBunny.Connection.get_connection()
legacy = TaskBunny.Connection.get_connection(:legacy)
```

TaskBunny also provides the asynchronous API `Connection.subscribe_connection/1`. See the [API documentation](https://hexdocs.pm/task_bunny/TaskBunny.Connection.html) for more details.

#### Reconnection

TaskBunny automatically tries reconnecting to RabbitMQ if the connection is gone. All workers restart automatically once the new connection is established. TaskBunny aims to provide zero hassle and to recover automatically, regardless of how long the host takes to come back and become accessible.

Failure backends
---

By default, when an error occurs during job execution, TaskBunny reports it to Logger. If you want to report errors to different services, you can configure your own custom failure backend.

```
config :task_bunny, failure_backend: [YourApp.CustomFailureBackend]
```

You can also report errors to multiple backends. For example, if you want to use the default Logger backend together with your custom backend, you can configure it like below:

```
config :task_bunny, failure_backend: [
  TaskBunny.FailureBackend.Logger,
  YourApp.CustomFailureBackend
]
```

Check out the implementation of [TaskBunny.FailureBackend.Logger](https://github.com/shinyscorpion/task_bunny/blob/master/lib/task_bunny/failure_backend/logger.ex) to learn how to write your own custom failure backend.

#### Implementations

* [Rollbar backend](https://github.com/shinyscorpion/task_bunny_rollbar)
* [Sentry backend](https://github.com/Homepolish/task_bunny_sentry)

(Send us a pull request if you want to add other implementations.)

Monitoring
---

#### RabbitMQ plugins

RabbitMQ supports a variety of [plugins](http://www.rabbitmq.com/plugins.html). If you are not familiar with them, we recommend looking into them. The following plugins will help you use RabbitMQ with TaskBunny.

* [Management Plugin](http://www.rabbitmq.com/management.html): provides an HTTP-based API for management and monitoring of your RabbitMQ server, along with a browser-based UI and a command-line tool, rabbitmqadmin.
* [Shovel Plugin](http://www.rabbitmq.com/shovel.html): helps you to move messages (jobs) from one queue to another.

#### Wobserver integration

TaskBunny automatically integrates with [Wobserver](https://github.com/shinyscorpion/wobserver). All worker and connection information will be added as a page on the web interface.
The current number of job runners and the job success, failure, and reject totals are added to the `/metrics` endpoint.

Copyright and License
---

Copyright (c) 2017, SQUARE ENIX LTD. TaskBunny code is licensed under the [MIT License](LICENSE.md).

TaskBunny v0.3.4
TaskBunny.Config
===

Handles TaskBunny configuration.

Summary
===

Functions
---

[auto_start?()](#auto_start?/0) Returns true if auto start is enabled

[connect_options(host)](#connect_options/1) Returns connect options for the host

[disable_auto_start()](#disable_auto_start/0) Disables auto start manually

[disable_worker?()](#disable_worker?/0) Returns true if the worker is disabled

[failure_backend()](#failure_backend/0) Returns the list of failure backends

[host_config(host)](#host_config/1) Returns the configuration for the host

[hosts()](#hosts/0) Returns the list of hosts

[publisher_max_overflow()](#publisher_max_overflow/0) Returns the max overflow for the publisher poolboy. 0 by default

[publisher_pool_size()](#publisher_pool_size/0) Returns the publisher pool size for poolboy. 15 by default

[queue_for_job(job)](#queue_for_job/1) Returns a queue for the given job

[queues()](#queues/0) Returns the list of queues

[workers()](#workers/0) Transforms the queue configuration into a list of workers for the application to run

Functions
===

auto_start?()

```
auto_start?() :: boolean()
```

Returns true if auto start is enabled.

connect_options(host)

```
connect_options(host :: atom()) :: list() | String.t()
```

Returns connect options for the host.

disable_auto_start()

```
disable_auto_start() :: :ok
```

Disables auto start manually.

disable_worker?()

```
disable_worker?() :: boolean()
```

Returns true if the worker is disabled.

failure_backend()

```
failure_backend() :: [atom()]
```

Returns the list of failure backends. It returns [`TaskBunny.FailureBackend.Logger`](TaskBunny.FailureBackend.Logger.html) by default.

host_config(host)

```
host_config(atom()) :: keyword() | nil
```

Returns the configuration for the host.

Examples
---

```
iex> host_config(:default)
[connect_options: "amqp://localhost?heartbeat=30"]
```

hosts()

```
hosts() :: [atom()]
```

Returns the list of hosts.

publisher_max_overflow()

```
publisher_max_overflow() :: integer()
```

Returns the max overflow for the publisher poolboy.
0 by default.

publisher_pool_size()

```
publisher_pool_size() :: integer()
```

Returns the publisher pool size for poolboy. 15 by default.

queue_for_job(job)

```
queue_for_job(atom()) :: keyword() | nil
```

Returns a queue for the given job.

queues()

```
queues() :: [keyword()]
```

Returns the list of queues.

workers()

```
workers() :: [keyword()]
```

Transforms the queue configuration into a list of workers for the application to run.

TaskBunny v0.3.4
TaskBunny.Connection
===

A GenServer that handles the RabbitMQ connection. It provides convenience functions to access RabbitMQ through the GenServer.

GenServer
---

TaskBunny loads the configuration and automatically starts a GenServer for each host definition. They are supervised by TaskBunny, so you don't have to look after them.

Disconnect/Reconnect
---

TaskBunny handles disconnection and reconnection. Once the GenServer retrieves the RabbitMQ connection, it monitors it. When the connection disconnects or dies, the GenServer terminates itself. The supervisor restarts the GenServer and it tries to reconnect to the host. If it fails to connect, it retries every five seconds.

Access to RabbitMQ connections
---

The module provides two ways to retrieve a RabbitMQ connection:

1. Use [`get_connection/1`](#get_connection/1), which returns the connection synchronously. This will succeed in most cases, since TaskBunny tries to establish a connection as soon as the application starts.
2. Use [`subscribe_connection/1`](#subscribe_connection/1), which sends the connection back asynchronously once the connection is ready. This can be useful when you can't ensure that the caller starts after the connection has been established.

Check out the function documentation for more details.

Summary
===

Types
---

[state()](#t:state/0) Represents the state of a connection GenServer

Functions
---

[child_spec(arg)](#child_spec/1) Returns a specification to start this module under a supervisor

[get_connection!(host \\ :default)](#get_connection!/1) Similar to get_connection/1 but raises an exception when the connection is not ready

[get_connection(host \\ :default)](#get_connection/1) Returns the RabbitMQ connection for the given host. When the host argument is not passed, it returns the connection for the default host

[init(state)](#init/1) Initialises the GenServer and sends a request to establish a connection

[subscribe_connection!(host \\ :default, listener_pid)](#subscribe_connection!/2) Similar to subscribe_connection/2 but raises an exception when the process is not ready

[subscribe_connection(host \\ :default, listener_pid)](#subscribe_connection/2) Requests the GenServer to send the connection back asynchronously.
Once the connection has been established, it will send a message with {:connected, connection} to the given process.

Types
===

state()

```
state() :: {atom(), %AMQP.Connection{pid: term()} | nil, [pid()]}
```

Represents the state of a connection GenServer. It's a tuple containing `{host, connection, subscribers}`.

Functions
===

child_spec(arg)

Returns a specification to start this module under a supervisor. See [`Supervisor`](https://hexdocs.pm/elixir/Supervisor.html).

get_connection!(host \\ :default)

```
get_connection!(atom()) :: AMQP.Connection.t()
```

Similar to get_connection/1 but raises an exception when the connection is not ready.

Examples
---

```
iex> conn = get_connection!()
%AMQP.Connection{}
```

get_connection(host \\ :default)

```
get_connection(atom()) :: {:ok, AMQP.Connection.t()} | {:error, atom()}
```

Returns the RabbitMQ connection for the given host. When the host argument is not passed, it returns the connection for the default host.

Examples
---

```
case get_connection() do
  {:ok, conn} -> do_something(conn)
  {:error, _} -> cry()
end
```

init(state)

```
init(tuple()) :: {:ok, any()}
```

Initialises the GenServer and sends a request to establish a connection.

subscribe_connection!(host \\ :default, listener_pid)

```
subscribe_connection!(atom(), pid()) :: :ok
```

Similar to subscribe_connection/2 but raises an exception when the process is not ready.

Examples
---

```
subscribe_connection!(self())

receive do
  {:connected, conn = %AMQP.Connection{}} -> do_something(conn)
end
```

subscribe_connection(host \\ :default, listener_pid)

```
subscribe_connection(atom(), pid()) :: :ok | {:error, atom()}
```

Requests the GenServer to send the connection back asynchronously. Once the connection has been established, it will send a message with {:connected, connection} to the given process.

Examples
---

```
:ok = subscribe_connection(self())

receive do
  {:connected, conn = %AMQP.Connection{}} -> do_something(conn)
end
```

TaskBunny v0.3.4
TaskBunny.FailureBackend behaviour
===

A behaviour module for implementing your own failure backend. Note that the backend is called only for errors caught during job processing.
Any other errors won’t be reported to the backend. Configuration --- By default, TaskBunny reports the job failures to Logger. If you want to report the errors to different services, you can configure your custom failure backend. ``` config :task_bunny, failure_backend: [YourApp.CustomFailureBackend] ``` You can also report the errors to multiple backends. For example, if you want to use our default Logger backend together with your custom backend, you can configure it like below: ``` config :task_bunny, failure_backend: [ TaskBunny.FailureBackend.Logger, YourApp.CustomFailureBackend ] ``` Example --- See the implementation of [`TaskBunny.FailureBackend.Logger`](TaskBunny.FailureBackend.Logger.html). Argument --- See [`TaskBunny.JobError`](TaskBunny.JobError.html) for the details. [Link to this section](#summary) Summary === [Callbacks](#callbacks) --- [report_job_error(arg0)](#c:report_job_error/1) Callback to report a job error [Link to this section](#callbacks) Callbacks === [Link to this callback](#c:report_job_error/1 "Link to this callback") report_job_error(arg0) ``` report_job_error([TaskBunny.JobError.t](TaskBunny.JobError.html#t:t/0)()) :: [any](https://hexdocs.pm/elixir/typespecs.html#basic-types)() ``` Callback to report a job error.
TaskBunny.FailureBackend.Logger === Default failure backend that reports job errors to Logger. [Link to this section](#summary) Summary === [Functions](#functions) --- [get_job_error_message(error)](#get_job_error_message/1) Returns the message content for the job error [report_job_error(error)](#report_job_error/1) Callback to report a job error [Link to this section](#functions) Functions === [Link to this function](#get_job_error_message/1 "Link to this function") get_job_error_message(error) ``` get_job_error_message([TaskBunny.JobError.t](TaskBunny.JobError.html#t:t/0)()) :: [String.t](https://hexdocs.pm/elixir/String.html#t:t/0)() ``` Returns the message content for the job error. [Link to this function](#report_job_error/1 "Link to this function") report_job_error(error) Callback to report a job error. Callback implementation for [`TaskBunny.FailureBackend.report_job_error/1`](TaskBunny.FailureBackend.html#c:report_job_error/1).
TaskBunny.Job behaviour === Behaviour module for implementing a TaskBunny job. A TaskBunny job is an asynchronous background job whose execution request is enqueued to RabbitMQ and performed in a worker process. ``` defmodule HelloJob do use TaskBunny.Job def perform(%{"name" => name}) do IO.puts "Hello " <> name :ok end end HelloJob.enqueue(%{"name" => "Cloud"}) ``` Failing --- TaskBunny treats the job as failed when… * the return value of perform is not `:ok` or `{:ok, something}` * the perform times out * the perform raises an exception while being executed * the perform throws an :exit signal while being executed. TaskBunny will retry the failed job later. Timeout --- By default TaskBunny terminates the job when it takes more than 2 minutes. This prevents messages from blocking a worker. If your job is expected to take longer than 2 minutes or you want to terminate the job earlier, override `timeout/0`. ``` defmodule SlowJob do use TaskBunny.Job def timeout, do: 300_000 def perform(_) do slow_work() :ok end end ``` Retry --- By default TaskBunny retries a failed job 10 times, once every five minutes. You can change this by overriding `max_retry/0` and `retry_interval/1`.
For example, if you want the job to be retried five times and to gradually increase the interval based on the number of failures, you can write logic like the following: ``` defmodule HttpSyncJob do def max_retry, do: 5 def retry_interval(failed_count) do [1, 5, 10, 30, 60] |> Enum.map(&(&1 * 60_000)) |> Enum.at(failed_count - 1, 1000) end ... end ``` [Link to this section](#summary) Summary === [Functions](#functions) --- [enqueue!(job, payload, options \\ [])](#enqueue!/3) Similar to enqueue/3 but raises an exception on error [enqueue(job, payload, options \\ [])](#enqueue/3) Enqueues a job with payload [Callbacks](#callbacks) --- [max_retry()](#c:max_retry/0) Callback for the max number of retries TaskBunny can make for a failed job [on_reject(any)](#c:on_reject/1) Callback executed when a process gets rejected [perform(any)](#c:perform/1) Callback to process a job [retry_interval(integer)](#c:retry_interval/1) Callback for the retry interval in milliseconds [timeout()](#c:timeout/0) Callback for the timeout in milliseconds for a job execution [Link to this section](#functions) Functions === [Link to this function](#enqueue!/3 "Link to this function") enqueue!(job, payload, options \\ []) ``` enqueue!([atom](https://hexdocs.pm/elixir/typespecs.html#basic-types)(), [any](https://hexdocs.pm/elixir/typespecs.html#basic-types)(), [keyword](https://hexdocs.pm/elixir/typespecs.html#built-in-types)()) :: :ok ``` Similar to enqueue/3 but raises an exception on error. [Link to this function](#enqueue/3 "Link to this function") enqueue(job, payload, options \\ []) ``` enqueue([atom](https://hexdocs.pm/elixir/typespecs.html#basic-types)(), [any](https://hexdocs.pm/elixir/typespecs.html#basic-types)(), [keyword](https://hexdocs.pm/elixir/typespecs.html#built-in-types)()) :: :ok | {:error, [any](https://hexdocs.pm/elixir/typespecs.html#basic-types)()} ``` Enqueues a job with payload. You might want to use the shorter version if you can access the job. ``` # The following two calls are exactly the same. RegistrationJob.enqueue(payload) TaskBunny.enqueue(RegistrationJob, payload) ``` Options --- * delay: Time in milliseconds to delay the job enqueue. * host: RabbitMQ host. By default it is automatically selected from the configuration. * queue: RabbitMQ queue. By default it is automatically selected from the configuration. [Link to this section](#callbacks) Callbacks === [Link to this callback](#c:max_retry/0 "Link to this callback") max_retry() ``` max_retry() :: [integer](https://hexdocs.pm/elixir/typespecs.html#basic-types)() ``` Callback for the max number of retries TaskBunny can make for a failed job. The default value is 10. Override the function if you want to change the value. [Link to this callback](#c:on_reject/1 "Link to this callback") on_reject(any) ``` on_reject([any](https://hexdocs.pm/elixir/typespecs.html#basic-types)()) :: :ok ``` Callback executed when a process gets rejected. It receives as input the whole error trace structure plus the original payload, for inspection and recovery actions. [Link to this callback](#c:perform/1 "Link to this callback") perform(any) ``` perform([any](https://hexdocs.pm/elixir/typespecs.html#basic-types)()) :: :ok | {:ok, [any](https://hexdocs.pm/elixir/typespecs.html#basic-types)()} | {:error, [term](https://hexdocs.pm/elixir/typespecs.html#built-in-types)()} ``` Callback to process a job. It can take any type of argument as long as it can be serialized with Poison, but we recommend using a map with string keys for consistency.
``` def perform(name) do IO.puts name <> ", it's not a preferred way" end def perform(%{"name" => name}) do IO.puts name <> ", it's a preferred way :)" end ``` [Link to this callback](#c:retry_interval/1 "Link to this callback") retry_interval(integer) ``` retry_interval([integer](https://hexdocs.pm/elixir/typespecs.html#basic-types)()) :: [integer](https://hexdocs.pm/elixir/typespecs.html#basic-types)() ``` Callback for the retry interval in milliseconds. The default value is 300_000 = 5 minutes. Override the function if you want to change the value. TaskBunny passes the failed count as the argument. The value will be greater than or equal to 1 and less than or equal to max_retry. [Link to this callback](#c:timeout/0 "Link to this callback") timeout() ``` timeout() :: [integer](https://hexdocs.pm/elixir/typespecs.html#basic-types)() ``` Callback for the timeout in milliseconds for a job execution. The default value is 120_000 = 2 minutes. Override the function if you want to change the value.
TaskBunny.JobError === A struct that holds error information that occurred during job processing. Attributes --- * job: the job module that failed * payload: the payload (arguments) for the job execution * error_type: the type of the error: :exception, :return_value, :timeout or :exit * exception: the inner exception (optional) * stacktrace: the stacktrace (only available for the exception) * return_value: the return value from the job (only available for the return value error) * reason: the reason information passed with the EXIT signal (only available for the exit error) * raw_body: the raw body of the message * meta: the metadata given by RabbitMQ * failed_count: the number of failures for the job processing request * queue: the name of the queue * concurrency: the number of jobs the worker processes concurrently * pid: the process ID of the worker * reject: set to true if the job is rejected for the failure (meaning it won’t be retried again) [Link to this section](#summary) Summary === [Types](#types) --- [t()](#t:t/0) [Functions](#functions) --- [get_result_info(job_error)](#get_result_info/1) Takes information related to the result and makes some of it JSON-encode safe [Link to this section](#types) Types === [Link to this type](#t:t/0 "Link to this type") t() ``` t() :: %TaskBunny.JobError{ concurrency: [integer](https://hexdocs.pm/elixir/typespecs.html#basic-types)(), error_type: :exception | :return_value | :timeout | :exit | nil, exception: [struct](https://hexdocs.pm/elixir/typespecs.html#basic-types)() | nil, failed_count: [integer](https://hexdocs.pm/elixir/typespecs.html#basic-types)(), job: [atom](https://hexdocs.pm/elixir/typespecs.html#basic-types)() | nil, meta: [map](https://hexdocs.pm/elixir/typespecs.html#basic-types)(), payload: [any](https://hexdocs.pm/elixir/typespecs.html#basic-types)(), pid: [pid](https://hexdocs.pm/elixir/typespecs.html#basic-types)() | nil, queue: [String.t](https://hexdocs.pm/elixir/String.html#t:t/0)(), raw_body: [String.t](https://hexdocs.pm/elixir/String.html#t:t/0)(), reason: [any](https://hexdocs.pm/elixir/typespecs.html#basic-types)(), reject: [boolean](https://hexdocs.pm/elixir/typespecs.html#built-in-types)(), return_value: [any](https://hexdocs.pm/elixir/typespecs.html#basic-types)(), stacktrace: [[tuple](https://hexdocs.pm/elixir/typespecs.html#basic-types)()] | nil } ``` [Link to this section](#functions) Functions === [Link to this function](#get_result_info/1 "Link to this function") get_result_info(job_error) ```
get_result_info([t](#t:t/0)()) :: [map](https://hexdocs.pm/elixir/typespecs.html#basic-types)() ``` Takes information related to the result and makes some of it JSON-encode safe. Since the raw body can grow with each retry, you do not want to include it.
TaskBunny.Message === Functions that work on TaskBunny messages. It’s a semi-private module used by Job or Worker. You shouldn’t have to deal with it normally. However, in case you need to encode/decode TaskBunny messages, this module will help. [Link to this section](#summary) Summary === [Functions](#functions) --- [add_error_log(message, error)](#add_error_log/2) Adds an error log to the message body [decode!(message)](#decode!/1) Similar to decode/1 but raises an exception on error [decode(message)](#decode/1) Decodes a JSON message body to map data [encode!(job, payload)](#encode!/2) Similar to encode/2 but raises an exception on error [encode(job, payload)](#encode/2) Encodes a message body in JSON with the job and argument [failed_count(message)](#failed_count/1) Returns the number of errors that occurred for the message [Link to this section](#functions) Functions === [Link to this function](#add_error_log/2 "Link to this function") add_error_log(message, error) ``` add_error_log([String.t](https://hexdocs.pm/elixir/String.html#t:t/0)() | [map](https://hexdocs.pm/elixir/typespecs.html#basic-types)(), [TaskBunny.JobError.t](TaskBunny.JobError.html#t:t/0)()) :: [String.t](https://hexdocs.pm/elixir/String.html#t:t/0)() | [map](https://hexdocs.pm/elixir/typespecs.html#basic-types)() ``` Adds an error log to the message body. [Link to this function](#decode!/1 "Link to this function") decode!(message) ``` decode!([String.t](https://hexdocs.pm/elixir/String.html#t:t/0)()) :: [map](https://hexdocs.pm/elixir/typespecs.html#basic-types)() ``` Similar to decode/1 but raises an exception on error. [Link to this function](#decode/1 "Link to this function") decode(message) ``` decode([String.t](https://hexdocs.pm/elixir/String.html#t:t/0)()) :: {:ok, [map](https://hexdocs.pm/elixir/typespecs.html#basic-types)()} | {:error, [any](https://hexdocs.pm/elixir/typespecs.html#basic-types)()} ``` Decodes a JSON message body to map data. [Link to this function](#encode!/2 "Link to this function") encode!(job, payload) ``` encode!([atom](https://hexdocs.pm/elixir/typespecs.html#basic-types)(), [any](https://hexdocs.pm/elixir/typespecs.html#basic-types)()) :: [String.t](https://hexdocs.pm/elixir/String.html#t:t/0)() ``` Similar to encode/2 but raises an exception on error. [Link to this function](#encode/2 "Link to this function") encode(job, payload) ``` encode([atom](https://hexdocs.pm/elixir/typespecs.html#basic-types)(), [any](https://hexdocs.pm/elixir/typespecs.html#basic-types)()) :: {:ok, [String.t](https://hexdocs.pm/elixir/String.html#t:t/0)()} ``` Encodes a message body in JSON with the job and argument. [Link to this function](#failed_count/1 "Link to this function") failed_count(message) ``` failed_count([String.t](https://hexdocs.pm/elixir/String.html#t:t/0)() | [map](https://hexdocs.pm/elixir/typespecs.html#basic-types)()) :: [integer](https://hexdocs.pm/elixir/typespecs.html#basic-types)() ``` Returns the number of errors that occurred for the message.
TaskBunny.Publisher === Conveniences for publishing messages to a queue. It’s a semi-private module that provides lower-level functions. You should use Job.enqueue to enqueue a job from your application.
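A minimal sketch of this lower-level path, for illustration only (the queue name "hello_jobs" is hypothetical, and HelloJob is the example job from the TaskBunny.Job page above; Job.enqueue remains the recommended API):
```
# Encode the job and payload into a message, then publish it on the default host.
{:ok, message} = TaskBunny.Message.encode(HelloJob, %{"name" => "Cloud"})

case TaskBunny.Publisher.publish(:default, "hello_jobs", message) do
  :ok -> IO.puts("message published")
  {:error, reason} -> IO.inspect(reason)
end
```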
[Link to this section](#summary) Summary === [Functions](#functions) --- [publish!(host, queue, message, options \\ [])](#publish!/4) Similar to publish/4 but raises an exception on error. It calls the publisher worker to publish the message on the queue [publish(host, queue, message, options \\ [])](#publish/4) Publishes a message to the queue [Link to this section](#functions) Functions === [Link to this function](#publish!/4 "Link to this function") publish!(host, queue, message, options \\ []) ``` publish!([atom](https://hexdocs.pm/elixir/typespecs.html#basic-types)(), [String.t](https://hexdocs.pm/elixir/String.html#t:t/0)(), [String.t](https://hexdocs.pm/elixir/String.html#t:t/0)(), [keyword](https://hexdocs.pm/elixir/typespecs.html#built-in-types)()) :: :ok ``` Similar to publish/4 but raises an exception on error. It calls the publisher worker to publish the message on the queue. [Link to this function](#publish/4 "Link to this function") publish(host, queue, message, options \\ []) ``` publish([atom](https://hexdocs.pm/elixir/typespecs.html#basic-types)(), [String.t](https://hexdocs.pm/elixir/String.html#t:t/0)(), [String.t](https://hexdocs.pm/elixir/String.html#t:t/0)(), [keyword](https://hexdocs.pm/elixir/typespecs.html#built-in-types)()) :: :ok | {:error, [any](https://hexdocs.pm/elixir/typespecs.html#basic-types)()} ``` Publishes a message to the queue. Returns `:ok` when the message has been successfully sent to the server. Otherwise returns `{:error, detail}`.
TaskBunny.PublisherWorker === A GenServer worker that publishes messages on a queue. [Link to this section](#summary) Summary === [Functions](#functions) --- [child_spec(arg)](#child_spec/1) Returns a specification to start this module under a supervisor [handle_call(msg, from, state)](#handle_call/3) Attempts to get a channel for the current connection and publishes the message on the specified queue [init(_)](#init/1) Initializes the GenServer [start_link(_)](#start_link/1) Starts the publisher [terminate(arg1, state)](#terminate/2) Closes the AMQP channels opened for publishing [Link to this section](#functions) Functions === [Link to this function](#child_spec/1 "Link to this function") child_spec(arg) Returns a specification to start this module under a supervisor. See [`Supervisor`](https://hexdocs.pm/elixir/Supervisor.html).
[Link to this function](#handle_call/3 "Link to this function") handle_call(msg, from, state) ``` handle_call( {:publish, [atom](https://hexdocs.pm/elixir/typespecs.html#basic-types)(), [String.t](https://hexdocs.pm/elixir/String.html#t:t/0)(), [String.t](https://hexdocs.pm/elixir/String.html#t:t/0)(), [String.t](https://hexdocs.pm/elixir/String.html#t:t/0)(), [list](https://hexdocs.pm/elixir/typespecs.html#built-in-types)()}, [any](https://hexdocs.pm/elixir/typespecs.html#basic-types)(), [map](https://hexdocs.pm/elixir/typespecs.html#basic-types)() ) :: {:reply, :ok, [map](https://hexdocs.pm/elixir/typespecs.html#basic-types)()} ``` Attempts to get a channel for the current connection and publishes the message on the specified queue. [Link to this function](#init/1 "Link to this function") init(_) ``` init([any](https://hexdocs.pm/elixir/typespecs.html#basic-types)()) :: {:ok, [map](https://hexdocs.pm/elixir/typespecs.html#basic-types)()} ``` Initializes the GenServer. [Link to this function](#start_link/1 "Link to this function") start_link(_) ``` start_link([list](https://hexdocs.pm/elixir/typespecs.html#built-in-types)()) :: [GenServer.on_start](https://hexdocs.pm/elixir/GenServer.html#t:on_start/0)() ``` Starts the publisher. [Link to this function](#terminate/2 "Link to this function") terminate(arg1, state) ``` terminate([any](https://hexdocs.pm/elixir/typespecs.html#basic-types)(), [map](https://hexdocs.pm/elixir/typespecs.html#basic-types)()) :: :ok ``` Closes the AMQP channels opened for publishing.
TaskBunny.Queue === Convenience functions for accessing TaskBunny queues. It’s a semi-private module normally wrapped by other modules. Sub Queues --- When TaskBunny creates (declares) a queue on RabbitMQ, it also creates the following sub queues.
* [queue-name].scheduled: holds jobs to be executed in the future * [queue-name].retry: holds jobs to be retried * [queue-name].rejected: holds jobs that were rejected (failed more than max retry times or wrong message format) [Link to this section](#summary) Summary === [Functions](#functions) --- [declare_with_subqueues(host, work_queue)](#declare_with_subqueues/2) Declares a queue with sub queues [delete_with_subqueues(host, work_queue)](#delete_with_subqueues/2) Deletes the queue and its subqueues [queue_with_subqueues(work_queue)](#queue_with_subqueues/1) Returns a list that contains the queue and its subqueues [rejected_queue(work_queue)](#rejected_queue/1) Returns the name of the rejected queue [retry_queue(work_queue)](#retry_queue/1) Returns the name of the retry queue [scheduled_queue(work_queue)](#scheduled_queue/1) Returns the name of the scheduled queue [state(host_or_conn \\ :default, queue)](#state/2) Returns the message count and consumer count for the given queue [subqueues(work_queue)](#subqueues/1) Returns all subqueues for the work queue [Link to this section](#functions) Functions === [Link to this function](#declare_with_subqueues/2 "Link to this function") declare_with_subqueues(host, work_queue) ``` declare_with_subqueues(%AMQP.Connection{pid: [term](https://hexdocs.pm/elixir/typespecs.html#built-in-types)()} | [atom](https://hexdocs.pm/elixir/typespecs.html#basic-types)(), [String.t](https://hexdocs.pm/elixir/String.html#t:t/0)()) :: {[map](https://hexdocs.pm/elixir/typespecs.html#basic-types)(), [map](https://hexdocs.pm/elixir/typespecs.html#basic-types)(), [map](https://hexdocs.pm/elixir/typespecs.html#basic-types)(), [map](https://hexdocs.pm/elixir/typespecs.html#basic-types)()} ``` Declares a queue with sub queues. ``` Queue.declare_with_subqueues(:default, "normal_jobs") ``` For this call, the function creates (declares) four queues: * normal_jobs: a queue that holds jobs to process * normal_jobs.scheduled: a queue that holds jobs to process in the future * normal_jobs.retry: a queue that holds jobs that failed and are waiting to retry * normal_jobs.rejected: a queue that holds jobs that failed and won’t be retried [Link to this function](#delete_with_subqueues/2 "Link to this function") delete_with_subqueues(host, work_queue) ``` delete_with_subqueues(%AMQP.Connection{pid: [term](https://hexdocs.pm/elixir/typespecs.html#built-in-types)()} | [atom](https://hexdocs.pm/elixir/typespecs.html#basic-types)(), [String.t](https://hexdocs.pm/elixir/String.html#t:t/0)()) :: :ok ``` Deletes the queue and its subqueues. [Link to this function](#queue_with_subqueues/1 "Link to this function") queue_with_subqueues(work_queue) ``` queue_with_subqueues([String.t](https://hexdocs.pm/elixir/String.html#t:t/0)()) :: [[String.t](https://hexdocs.pm/elixir/String.html#t:t/0)()] ``` Returns a list that contains the queue and its subqueues. [Link to this function](#rejected_queue/1 "Link to this function") rejected_queue(work_queue) ``` rejected_queue([String.t](https://hexdocs.pm/elixir/String.html#t:t/0)()) :: [String.t](https://hexdocs.pm/elixir/String.html#t:t/0)() ``` Returns the name of the rejected queue. [Link to this function](#retry_queue/1 "Link to this function") retry_queue(work_queue) ``` retry_queue([String.t](https://hexdocs.pm/elixir/String.html#t:t/0)()) :: [String.t](https://hexdocs.pm/elixir/String.html#t:t/0)() ``` Returns the name of the retry queue.
[Link to this function](#scheduled_queue/1 "Link to this function") scheduled_queue(work_queue) ``` scheduled_queue([String.t](https://hexdocs.pm/elixir/String.html#t:t/0)()) :: [String.t](https://hexdocs.pm/elixir/String.html#t:t/0)() ``` Returns the name of the scheduled queue. [Link to this function](#state/2 "Link to this function") state(host_or_conn \\ :default, queue) ``` state(%AMQP.Connection{pid: [term](https://hexdocs.pm/elixir/typespecs.html#built-in-types)()} | [atom](https://hexdocs.pm/elixir/typespecs.html#basic-types)(), [String.t](https://hexdocs.pm/elixir/String.html#t:t/0)()) :: [map](https://hexdocs.pm/elixir/typespecs.html#basic-types)() ``` Returns the message count and consumer count for the given queue. [Link to this function](#subqueues/1 "Link to this function") subqueues(work_queue) ``` subqueues([String.t](https://hexdocs.pm/elixir/String.html#t:t/0)()) :: [[String.t](https://hexdocs.pm/elixir/String.html#t:t/0)()] ``` Returns all subqueues for the work queue.
TaskBunny.Supervisor === Main supervisor for TaskBunny. It supervises Connection and WorkerSupervisor with the one_for_all strategy. When Connection crashes, it restarts all Worker processes through WorkerSupervisor so workers can always use a re-established connection. You don’t have to call or start the Supervisor explicitly. It will be automatically started by the application and will configure child processes based on the configuration file. [Link to this section](#summary) Summary === [Functions](#functions) --- [child_spec(arg)](#child_spec/1) Returns a specification to start this module under a supervisor [Link to this section](#functions) Functions === [Link to this function](#child_spec/1 "Link to this function") child_spec(arg) Returns a specification to start this module under a supervisor. See [`Supervisor`](https://hexdocs.pm/elixir/Supervisor.html).
TaskBunny.Worker === A GenServer that listens to a queue and consumes messages. You don’t have to call or start a worker explicitly. TaskBunny loads the config and starts workers automatically for you. [Link to this section](#summary) Summary === [Types](#types) --- [t()](#t:t/0) Struct that represents a state of the worker GenServer [Functions](#functions) --- [child_spec(arg)](#child_spec/1) Returns a specification to start this module under a supervisor [stop_consumer(pid)](#stop_consumer/1) Stops consuming messages from the queue. Note this doesn’t terminate the process, and the jobs currently running will continue [Link to this section](#types) Types === [Link to this type](#t:t/0 "Link to this type") t() ``` t() :: %TaskBunny.Worker{ channel: [AMQP.Channel.t](https://hexdocs.pm/amqp/0.3.1/AMQP.Channel.html#t:t/0)() | nil, concurrency: [integer](https://hexdocs.pm/elixir/typespecs.html#basic-types)(), consumer_tag: [String.t](https://hexdocs.pm/elixir/String.html#t:t/0)() | nil, host: [atom](https://hexdocs.pm/elixir/typespecs.html#basic-types)(), job_stats: %{failed: [integer](https://hexdocs.pm/elixir/typespecs.html#basic-types)(), succeeded: [integer](https://hexdocs.pm/elixir/typespecs.html#basic-types)(), rejected: [integer](https://hexdocs.pm/elixir/typespecs.html#basic-types)()}, queue: [String.t](https://hexdocs.pm/elixir/String.html#t:t/0)(), runners: [integer](https://hexdocs.pm/elixir/typespecs.html#basic-types)() } ``` Struct that represents a state of the worker GenServer.
[Link to this section](#functions) Functions === [Link to this function](#child_spec/1 "Link to this function") child_spec(arg) Returns a specification to start this module under a supervisor. See [`Supervisor`](https://hexdocs.pm/elixir/Supervisor.html). [Link to this function](#stop_consumer/1 "Link to this function") stop_consumer(pid) ``` stop_consumer([pid](https://hexdocs.pm/elixir/typespecs.html#basic-types)()) :: :ok ``` Stops consuming messages from the queue. Note this doesn’t terminate the process, and the jobs currently running will continue.
TaskBunny.WorkerSupervisor === Supervises all TaskBunny workers. You don’t have to call or start the Supervisor explicitly. It will be automatically started by the application and will configure child workers based on the configuration file. It also provides [`graceful_halt/1`](#graceful_halt/1) and [`graceful_halt/2`](#graceful_halt/2) that allow you to shut down the worker processes safely. [Link to this section](#summary) Summary === [Functions](#functions) --- [child_spec(arg)](#child_spec/1) Returns a specification to start this module under a supervisor [graceful_halt(timeout)](#graceful_halt/1) Similar to graceful_halt/2 but gets the pid from the module name [graceful_halt(pid, timeout)](#graceful_halt/2) Halts the job processing on workers gracefully. It makes workers stop processing new jobs and waits for jobs currently running to finish [Link to this section](#functions) Functions === [Link to this function](#child_spec/1 "Link to this function") child_spec(arg) Returns a specification to start this module under a supervisor. See [`Supervisor`](https://hexdocs.pm/elixir/Supervisor.html). [Link to this function](#graceful_halt/1 "Link to this function") graceful_halt(timeout) ``` graceful_halt([integer](https://hexdocs.pm/elixir/typespecs.html#basic-types)()) :: :ok | {:error, [any](https://hexdocs.pm/elixir/typespecs.html#basic-types)()} ``` Similar to graceful_halt/2 but gets the pid from the module name. [Link to this function](#graceful_halt/2 "Link to this function") graceful_halt(pid, timeout) ``` graceful_halt([pid](https://hexdocs.pm/elixir/typespecs.html#basic-types)() | nil, [integer](https://hexdocs.pm/elixir/typespecs.html#basic-types)()) :: :ok | {:error, [any](https://hexdocs.pm/elixir/typespecs.html#basic-types)()} ``` Halts the job processing on workers gracefully. It makes workers stop processing new jobs and waits for jobs currently running to finish. Note: It doesn’t terminate any worker processes. The worker and worker supervisor processes will continue existing but won’t consume any new messages. To resume, terminate the worker supervisor; the main supervisor will then start new processes.
TaskBunny.ConfigError exception === Raised when an error is found in the TaskBunny config.
TaskBunny.Connection.ConnectError exception === Raised when TaskBunny fails to retain a connection.
TaskBunny.Job.QueueNotFoundError exception === Raised when a queue cannot be found for the job.
TaskBunny.Message.DecodeError exception === Raised when the message cannot be decoded.
TaskBunny.Publisher.PublishError exception === Raised when the message cannot be published.
mix task_bunny.queue.reset === Mix task to reset all queues. It deletes the queues and creates them again. Therefore all messages in the queues are removed.
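A sketch of invoking the task from the project root (standard Mix task invocation; this assumes RabbitMQ is reachable with your configured credentials, which is not stated in the task description above):
```
mix task_bunny.queue.reset
```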
LinkedGASP
cran
R
Package ‘LinkedGASP’ October 12, 2022 Type Package Title Linked Emulator of a Coupled System of Simulators Version 1.0 Date 2018-11-24 Author <NAME>, k<EMAIL> Maintainer <NAME> <<EMAIL>> Depends nloptr, spBayes Suggests MASS Description Prototypes for construction of a Gaussian Stochastic Process emulator (GASP) of a computer model. This is done within the objective Bayesian implementation of the GASP. The package allows for construction of a linked GASP of the composite computer model. Computational implementation follows the mathematical exposition given in publication: <NAME>, <NAME>, <NAME>. Coupling computer models through linking their statistical emulators. SIAM/ASA Journal on Uncertainty Quantification, 6(3): 1151-1171, (2018). <DOI:10.1137/17M1157702>. License GPL (>= 3) NeedsCompilation no Repository CRAN Date/Publication 2018-12-09 16:50:03 UTC R topics documented: emp_GASP_plo... 2 eval_GASP_RF... 4 eval_TGAS... 6 eval_type1_GAS... 7 eval_type2_GAS... 8 GASP_plo... 9 lin... 10 NGASPmetric... 12 TGASPmetric... 14 TGASP_plo... 15
emp_GASP_plot Empirical linked GASP plot Description The function plots the empirical true linked emulator in the case of one-dimensional input. Usage emp_GASP_plot(em, fun, data, emul_type, exp.ql, exp.qu, labels, ylab, xlab, ylim, col_CI_area, col_points, col_fun, col_mean, points) Arguments em the returned output from the function eval_type1_GASP(...) or eval_type2_GASP(...). fun Simulator function. Currently only one-dimensional input is supported. data Training data and smoothness. The same as supplied to eval_GASP_RFP(...) for construction of the GASP. emul_type A text string which provides a description of an emulator. exp.ql Quantile 0.025. exp.qu Quantile 0.975. labels As in standard R plot. ylab As in standard R plot. xlab As in standard R plot. ylim As in standard R plot. col_CI_area Color of a credible area. col_points Color of the training points. col_fun Color of a simulator function. col_mean Color of the emulator of the GASP mean. points Whether to plot the training points. Default is FALSE. Value Plot Author(s) <NAME>, <EMAIL> Examples ## Function f1 is a simulator f1<-function(x){sin(pi*x)} ## Function f2 is a simulator f2<-function(x){cos(5*x)} ## Function f2(f1) is a simulator of a composite model f2f1 <- function(x){f2(f1(x))} ## One-dimensional inputs are x1 x1 <- seq(-1,1,.37) ## The following contains the list of data inputs (training) and outputs (fD) together with the ## assumed fixed smoothness of a computer model output.
data.f1 <- list(training = x1,fD = f1(x1), smooth = 1.99) ## Evaluation of GASP parameters f1_MLEs = eval_GASP_RFP(data.f1,list(function(x){x^0},function(x){x^1}),1,FALSE) ## Evaluate the emulator xn = seq(-1,1,.01) GASP_type2_f1 <- eval_type2_GASP(as.matrix(xn),f1_MLEs) par(mfrow = c(1,1)) par(mar = c(6.1, 6.1, 5.1, 2.1)) ylim = c(-1.5,1.5) GASP_plot(GASP_type2_f1,f1,data.f1,"Type 2 GASP",ylab = " f",xlab = "x", ylim = ylim, plot_training = TRUE) s = GASP_type2_f1$mu s.var = diag(GASP_type2_f1$var) x2 = seq(-0.95,0.95,length = 6)#f1(x1) data.f2 <- list(training = x2,fD = f2(x2), smooth = 2) # linking requires this emulator to have smoothness parameter equal to 2 f2_MLEs = eval_GASP_RFP(data.f2,list(function(x){x^0},function(x){x^1}),1,FALSE) GASP_type1_f2 <- eval_type1_GASP(as.matrix(seq(-3.5,3.5,.01)),f2_MLEs) GASP_type2_f2 <- eval_type2_GASP(as.matrix(seq(-1,1,.01)),f2_MLEs) TGASP_f2 <- eval_TGASP(as.matrix(seq(-1,1,.01)),f2_MLEs) ylim = c(-1.5,1.5) # labels = c(expression(phantom(x)*phantom(x)*phantom(x)*f(x[1])), # expression(f(x[2])*phantom(x)*phantom(x)*phantom(x)), # expression(f(x[3])),expression(f(x[4])), # expression(f(x[5])),expression(f(x[6]))) par(mar = c(6.1, 6.1, 5.1, 2.1)) GASP_plot(GASP_type2_f2,f2,data.f2, "Type 2 GASP",labels = x2,xlab= "z",ylab = " g", ylim = ylim,plot_training = TRUE) le <- link(f1_MLEs, f2_MLEs, as.matrix(xn)) ## Construct an empirical emulator n.samples = 100 em2.runs<-mat.or.vec(n.samples,length(s)) library(MASS) for(i in 1:n.samples) { GASP = eval_type2_GASP(as.matrix(mvrnorm(1,s,diag(s.var))),f2_MLEs) em2.runs[i,] <- mvrnorm(1,GASP$mu, GASP$var) } ## Plot the empirical GASP emulator data.f2f1 <- list(training = x1,fD = f2f1(x1), smooth = 2) par(mar = c(6.1, 6.1, 5.1, 2.1)) emp_GASP_plot(le$em2,f2f1,data.f2f1,"Linked",apply(em2.runs,2,quantile,probs = 0.025), apply(em2.runs,2,quantile,probs = 0.975), ylab = expression("g" ~ scriptscriptstyle(O) ~ "f"),xlab = "x, input",ylim = ylim)
eval_GASP_RFP Evaluation of parameters of a Gaussian stochastic process emulator of a computer model. Description This function evaluates parameters of a Gaussian stochastic process emulator of a computer model, based on a few observations available from the simulator of the computer model. Usage eval_GASP_RFP(data, basis, corr.cols, nugget) Arguments data list which consists of three objects: training input values (which may be multivariate, along several dimensions), corresponding output values of a simulator (scalar) and a vector of smoothness parameter(s) along each input direction. basis A set of functions in the mean of a Gaussian process. Typically assumed to be linear in one or several dimensions. corr.cols specifies which input directions must be included in the specification of a correlation function. nugget Parameter which accounts for possible small stochasticity in the output of a computer model. Default is FALSE. Details See examples which illustrate inputs specification to the function. Value Function returns a list of objects, including estimates of parameters, which may subsequently be used for construction of a GASP approximation with the estimated parameters and the data involved. delta Estimates of range parameters in the correlation function. eta Estimates of a nugget. sigma.sq Estimates of variance. data Input parameter returned for convenience. nugget Input parameter returned for convenience. basis Input parameter returned for convenience. corr.cols Input parameter returned for convenience. Author(s) <NAME>, kseniak.ucoz.net.
References <NAME>, <NAME>, and <NAME>. Coupling computer models through linking their statistical emulators. SIAM/ASA Journal on Uncertainty Quantification, 6(3): 1151-1171, 2018. <NAME>., <NAME>., <NAME>. et al. (2018) Robust Gaussian stochastic process emulation. The Annals of Statistics, 46, 3038-3066. Examples ## Function f1 is a simulator f1<-function(x){sin(pi*x)} ## One-dimensional inputs are x1 x1 <- seq(-1,1,.37) ## data.f1 contains the list of data inputs (training) and outputs (fD) together with the assumed ## fixed smoothness of a computer model output. This corresponds to the smoothness in a product ## power exponential correlation function used for construction of the emulator. data.f1 <- list(training = x1,fD = f1(x1), smooth = 1.99) ## Evaluation of GASP parameters f1_MLEs = eval_GASP_RFP(data.f1,list(function(x){x^0},function(x){x^1}),1,FALSE)
eval_TGASP T-GASP emulator Description This function evaluates the third GASP of a computer model within the objective Bayesian (OB) implementation of the GASP, resulting in a T-GASP. Usage eval_TGASP(input, GASPparams) Arguments input Input values (the same dimension as the training input data in the next argument GASPparams) GASPparams The output of the function eval_GASP_RFP. Value Function returns a list of three objects x Inputs. mu Mean of an emulator. var Covariance matrix of an emulator. Author(s) <NAME>, kseniak.ucoz.net. Examples ## Function f2 is a simulator f2<-function(x){cos(5*x)} ## One-dimensional inputs x2 x2 = seq(-0.95,0.95,length = 6) data.f2 <- list(training = x2,fD = f2(x2), smooth = 2) ## Evaluation of GASP parameters f2_MLEs = eval_GASP_RFP(data.f2,list(function(x){x^0},function(x){x^1}),1,FALSE) ## Evaluation of a T-GASP emulator TGASP_f2 <- eval_TGASP(as.matrix(seq(-1,1,.01)),f2_MLEs)
eval_type1_GASP The first type of an emulator of a computer model Description This function evaluates the first GASP of a computer model using maximum a posteriori (MAP) estimates of parameters of the GASP. Usage eval_type1_GASP(input, GASPparams) Arguments input input values (the same dimension as the training input data in the next argument GASPparams) GASPparams The output of the function eval_GASP_RFP. Details See examples which illustrate inputs specification to the function. Value Function returns a list of three objects x Inputs. mu Mean of an emulator. var Covariance matrix of an emulator. Author(s) <NAME>, kseniak.ucoz.net. Examples ## Function f1 is a simulator f1<-function(x){sin(pi*x)} ## One-dimensional inputs are x1 x1 <- seq(-1,1,.37) ## The following contains the list of data inputs (training) and outputs (fD) together with the ## assumed fixed smoothness of a computer model output. data.f1 <- list(training = x1,fD = f1(x1), smooth = 1.99) ## Evaluation of GASP parameters f1_MLEs = eval_GASP_RFP(data.f1,list(function(x){x^0},function(x){x^1}),1,FALSE) ## Evaluate the emulator xn = seq(-1,1,.01) GASP_type1_f1 <- eval_type1_GASP(as.matrix(xn),f1_MLEs)
eval_type2_GASP The second type of an emulator of a computer model Description This function evaluates the second GASP of a computer model within the partial objective Bayesian (POB) implementation of the GASP. Usage eval_type2_GASP(input, GASPparams) Arguments input input values (the same dimension as the training input data in the next argument GASPparams) GASPparams The output of the function eval_GASP_RFP. Details See examples which illustrate inputs specification to the function. Value Function returns a list of three objects x Inputs. mu Mean of an emulator.
var Covariance matrix of an emulator. Author(s) <NAME>, kseniak.ucoz.net. Examples ## Function f2 is a simulator f2<-function(x){cos(5*x)} ## One-dimensional inputs x2 x2 = seq(-0.95,0.95,length = 6) data.f2 <- list(training = x2,fD = f2(x2), smooth = 2) ## Evaluation of GASP parameters f2_MLEs = eval_GASP_RFP(data.f2,list(function(x){x^0},function(x){x^1}),1,FALSE) ## Evaluation of a second type GASP emulator GASP_type2_f2 <- eval_type2_GASP(as.matrix(seq(-1,1,.01)),f2_MLEs)
GASP_plot Plot of the GASP Description The function plots the GASP in the case of one-dimensional input. Usage GASP_plot(em, fun, data, emul_type, labels, yax, ylab, xlab,ylim, col_CI_area,col_points,col_fun,col_mean,plot_training = FALSE, plot_fun = TRUE) Arguments em the returned output from the function eval_type1_GASP(...) or eval_type2_GASP(...). fun Simulator function. Currently only one-dimensional input is supported. data Training data and smoothness. The same as supplied to eval_GASP_RFP(...) for construction of the GASP. emul_type A text string which provides a description of an emulator. labels As in standard R plot. yax As in standard R plot. ylab As in standard R plot. xlab As in standard R plot. ylim As in standard R plot. col_CI_area Color of a credible area. col_points Color of the training points. col_fun Color of a simulator function. col_mean Color of the emulator of the GASP mean. plot_training Whether to plot the training points. Default is FALSE. plot_fun Whether to plot the simulator function. Default is TRUE. Value Plot Note The function requires further development to be automated for visualization along a single dimension out of multiple dimensions and along two dimensions out of multiple dimensions. Author(s) <NAME>, <EMAIL> Examples ## Function f1 is a simulator f1<-function(x){sin(pi*x)} ## One-dimensional inputs are x1 x1 <- seq(-1,1,.37) ## The following contains the list of data inputs (training) and outputs (fD) together with the ## assumed fixed smoothness of a computer model output. data.f1 <- list(training = x1,fD = f1(x1), smooth = 1.99) ## Evaluation of GASP parameters f1_MLEs = eval_GASP_RFP(data.f1,list(function(x){x^0},function(x){x^1}),1,FALSE) ## Evaluate the emulator xn = seq(-1,1,.01) GASP_type1_f1 <- eval_type1_GASP(as.matrix(xn),f1_MLEs) ## Plot the emulator par(mfrow = c(1,1)) par(mar = c(6.1, 6.1, 5.1, 2.1)) ylim = c(-1.5,1.5) GASP_plot(GASP_type1_f1,fun = f1,data = data.f1,"",ylim = ylim, plot_training = TRUE)
link Linking two emulators Description The function constructs a linked GASP emulator of a composite computer model f2(f1). Usage link(f1_MLEs, f2_MLEs, test_input) Arguments f1_MLEs Parameters of the emulator of a simulator f1. f2_MLEs Parameters of the emulator of a simulator f2. test_input Testing inputs. Details See examples which illustrate inputs specification to the function. Value Four types of the linked GASP. em1 Type 1 emulator, which uses MAP estimates of parameters. em2 Type 2 emulator within the partial objective Bayesian (POB) implementation. emT T-GASP emulator within the objective Bayesian (OB) implementation. em3 Approximated T-GASP emulator with the Gaussian distribution. Author(s) <NAME>, kseniak.ucoz.net References <NAME>, <NAME>, and <NAME>. Coupling computer models through linking their statistical emulators.
SIAM/ASA Journal on Uncertainty Quantification, 6(3): 1151-1171, 2018 Examples ## Function f1 is a simulator f1<-function(x){sin(pi*x)} ## Function f2 is a simulator f2<-function(x){cos(5*x)} ## Function f2(f1) is a simulator of a composite model f2f1 <- function(x){f2(f1(x))} ## One-dimensional inputs are x1 x1 <- seq(-1,1,.37) ## The following contains the list of data inputs (training) and outputs (fD) together with ## the assumed fixed smoothness of a computer model output. data.f1 <- list(training = x1,fD = f1(x1), smooth = 1.99) ## Evaluation of GASP parameters f1_MLEs = eval_GASP_RFP(data.f1,list(function(x){x^0},function(x){x^1}),1,FALSE) ## Evaluate the emulator xn = seq(-1,1,.01) GASP_type2_f1 <- eval_type2_GASP(as.matrix(xn),f1_MLEs) par(mfrow = c(1,1)) par(mar = c(6.1, 6.1, 5.1, 2.1)) ylim = c(-1.5,1.5) GASP_plot(GASP_type2_f1,f1,data.f1,"Type 2 GASP",ylab = " f",xlab = "x", ylim = ylim, plot_training = TRUE) s = GASP_type2_f1$mu s.var = diag(GASP_type2_f1$var) x2 = seq(-0.95,0.95,length = 6)#f1(x1) data.f2 <- list(training = x2,fD = f2(x2), smooth = 2) # linking requires this emulator to have smoothness parameter equal to 2 f2_MLEs = eval_GASP_RFP(data.f2,list(function(x){x^0},function(x){x^1}),1,FALSE) GASP_type1_f2 <- eval_type1_GASP(as.matrix(seq(-3.5,3.5,.01)),f2_MLEs) GASP_type2_f2 <- eval_type2_GASP(as.matrix(seq(-1,1,.01)),f2_MLEs) TGASP_f2 <- eval_TGASP(as.matrix(seq(-1,1,.01)),f2_MLEs) ylim = c(-1.5,1.5) # labels = c(expression(phantom(x)*phantom(x)*phantom(x)*f(x[1])), # expression(f(x[2])*phantom(x)*phantom(x)*phantom(x)), # expression(f(x[3])),expression(f(x[4])), # expression(f(x[5])),expression(f(x[6]))) par(mar = c(6.1, 6.1, 5.1, 2.1)) GASP_plot(GASP_type2_f2,f2,data.f2, "Type 2 GASP",labels = x2,xlab= "z",ylab = " g", ylim = ylim,plot_training = TRUE) le <- link(f1_MLEs, f2_MLEs, as.matrix(xn)) ## Plot second type of the linked GASP data.f2f1 <- list(training = x1,fD = f2f1(x1), smooth = 2) par(mar = c(6.1, 6.1, 5.1, 2.1)) GASP_plot(le$em2,f2f1,data.f2f1,"Linked",labels = x1, ylab = expression("g" ~ scriptscriptstyle(O) ~ "f"),xlab = "x",ylim = ylim)
NGASPmetrics GASP performance assessment measures Description Evaluates frequentist performance of the GASP. Usage NGASPmetrics(GASP, true_output, ref_output) Arguments GASP GASP emulator. true_output Output from the simulator. ref_output Heuristic emulator output. Value List of performance measures. RMSPE_base Root mean square predictive error with respect to the heuristic emulator output. RMSPE Root mean square predictive error for the emulator output. ratio Ratio of RMSPE_base to RMSPE: ratio = RMSPE_base/RMSPE. CIs 95% central credible intervals. emp_cov 95% empirical coverage within the CIs. length_CIs Average length of 95% central credible intervals. Author(s) <NAME>, ksenia.ucoz.net References <NAME>, <NAME>, and <NAME>. Coupling computer models through linking their statistical emulators. SIAM/ASA Journal on Uncertainty Quantification, 6(3): 1151-1171, 2018 Examples ## Function f1 is a simulator f1<-function(x){sin(pi*x)} ## One-dimensional inputs are x1 x1 <- seq(-1,1,.37) ## The following contains the list of data inputs (training) and outputs (fD) together with ## the assumed fixed smoothness of a computer model output.
data.f1 <- list(training = x1,fD = f1(x1), smooth = 1.99) ## Evaluation of GASP parameters f1_MLEs = eval_GASP_RFP(data.f1,list(function(x){x^0},function(x){x^1}),1,FALSE) ## Evaluate the emulator xn = seq(-1,1,.01) GASP_type2_f1 <- eval_type2_GASP(as.matrix(xn),f1_MLEs) ## Plot the emulator par(mar = c(6.1, 6.1, 5.1, 2.1)) ylim = c(-1.5,1.5) GASP_plot(GASP_type2_f1,data = data.f1,emul_type = "",ylim = ylim, plot_training = TRUE) ## Measure performance of an emulator NGASPmetrics(GASP_type2_f1,f1(xn),mean(f1(xn)))
TGASPmetrics Performance measurement of a T-GASP Description Evaluates frequentist performance of a T-GASP. Usage TGASPmetrics(TGASP, true_output, ref_output) Arguments TGASP TGASP emulator (in the paper this is done within an objective Bayesian implementation - OB emulator.) true_output Output from the simulator. ref_output Heuristic emulator output. Details See examples which illustrate the use of the function. Value List of performance measures. RMSPE_base Root mean square predictive error with respect to the heuristic emulator output. RMSPE Root mean square predictive error for the emulator output. ratio Ratio of RMSPE_base to RMSPE: ratio = RMSPE_base/RMSPE. CIs 95% central credible intervals. emp_cov 95% empirical coverage within the CIs. length_CIs Average length of 95% central credible intervals. Author(s) <NAME>, ksenia.ucoz.net References <NAME>, <NAME>, and <NAME>. Coupling computer models through linking their statistical emulators. SIAM/ASA Journal on Uncertainty Quantification, 6(3): 1151-1171, 2018 Examples ## Function f1 is a simulator f1<-function(x){sin(pi*x)} ## One-dimensional inputs are x1 x1 <- seq(-1,1,.37) ## The following contains the list of data inputs (training) and outputs (fD) together with ## the assumed fixed smoothness of a computer model output. data.f1 <- list(training = x1,fD = f1(x1), smooth = 1.99) ## Evaluation of GASP parameters f1_MLEs = eval_GASP_RFP(data.f1,list(function(x){x^0},function(x){x^1}),1,FALSE) ## Evaluate the emulator xn = seq(-1,1,.01) TGASP_f1 <- eval_TGASP(as.matrix(xn),f1_MLEs) ## Plot the emulator par(mfrow = c(1,1)) par(mar = c(6.1, 6.1, 5.1, 2.1)) ylim = c(-1.5,1.5) TGASP_plot(TGASP_f1,f1,data.f1,ylim = ylim) ## Measure the performance of the emulator TGASPmetrics(TGASP_f1,f1(xn),mean(f1(xn)))
TGASP_plot T-GASP plot Description The function plots the TGASP in the case of one-dimensional input. Black-and-white version. Usage TGASP_plot(tem, fun, data, labels, ylim, points) Arguments tem TGASP emulator. fun Simulator function. data Training data and smoothness. The same as supplied to eval_GASP_RFP(...) for construction of a GASP. labels As in standard R plot. ylim As in standard R plot. points Whether to plot the training points. Details See examples. Value Plot Note The function requires further development to be automated for visualization along a single dimension out of multiple dimensions and along two dimensions out of multiple dimensions. This function needs to be automated to allow for fast visualization of a single emulator (with no comparison to the actual simulator function), etc. Author(s) <NAME>, k<EMAIL> Examples ## Function f1 is a simulator f1<-function(x){sin(pi*x)} ## One-dimensional inputs are x1 x1 <- seq(-1,1,.37) ## The following contains the list of data inputs (training) and outputs (fD) together with ## the assumed fixed smoothness of a computer model output.
data.f1 <- list(training = x1,fD = f1(x1), smooth = 1.99) ## Evaluation of GASP parameters f1_MLEs = eval_GASP_RFP(data.f1,list(function(x){x^0},function(x){x^1}),1,FALSE) ## Evaluate the emulator xn = seq(-1,1,.01) TGASP_f1 <- eval_TGASP(as.matrix(xn),f1_MLEs) ## Plot the emulator par(mfrow = c(1,1)) par(mar = c(6.1, 6.1, 5.1, 2.1)) ylim = c(-1.5,1.5) TGASP_plot(TGASP_f1,f1,data.f1,ylim = ylim)
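A short follow-up sketch tying the pieces on this page together (assuming the objects from the link() example earlier are still in the workspace; an illustration, not package output): the frequentist performance of the type 2 linked emulator can be assessed with NGASPmetrics(), using the mean of the composite simulator output as the heuristic reference.
## Assess the type 2 linked emulator against the composite simulator f2(f1)
le <- link(f1_MLEs, f2_MLEs, as.matrix(xn))
NGASPmetrics(le$em2, f2f1(xn), mean(f2f1(xn)))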
mosaico
npm
JavaScript
Mosaico - Responsive Email Template Editor === Mosaico is a JavaScript library (or maybe a single page application) supporting the editing of email templates. The great thing is that Mosaico itself does not define what you can edit or what styles you can change: this is defined by the template. This makes Mosaico very flexible. At this time we provide a single "production ready" template to illustrate some best practice examples: more templates will come soon! Have a look at [Template Language](https://github.com/voidlabs/mosaico/wiki/Template-language) and get in touch with us if you want to make your email HTML template "Mosaico ready". ### Live demo On <https://mosaico.io> you can see a live demo of Mosaico: the live deploy has a custom backend (you don't see it) and some customization (custom Moxiemanager integration for image editing, customized onboarding slideshow, contextual menu, and some other small bits), but 95% of what you see is provided by this open-source library. You will also see a second working template there (versafluid) that is not part of the open-source distribution. #### News Subscribe to our newsletter to get updates: <https://mosaico.voxmail.it/user/register> ### More Docs from the Wiki [Mosaico Basics](https://github.com/voidlabs/mosaico/wiki) [Developer Notes](https://github.com/voidlabs/mosaico/wiki/Developers) ### Build/Run with the development backend You need NodeJS v8.0 or higher + NPM 8.3 (because of "overrides" support in package.json you need npm 8.3 if you want to change/upgrade dependencies, but it should work with older npm, too, if you rely on package-lock.json) Download/install the dependencies (run it again if you get an error, as it is probably a race issue in npm) ``` npm install ``` if you don't have it, install grunt-cli globally ``` npm install -g grunt-cli ``` compile and run a local webserver (<http://127.0.0.1:9006>) with incremental build and livereload ``` grunt ``` *NOTE* we have reports that the default Ubuntu node package has issues with building Mosaico via Grunt. If you see a `Fatal error: watch ENOSPC` then have a look at <https://github.com/voidlabs/mosaico/issues/82> ### Docker We bundle a Dockerfile based on Alpine Linux and another based on CentOS 7 to test Mosaico with no need to install dependencies. ``` docker build -t mosaico/mosaico . docker run -p 9006:9006 mosaico/mosaico ``` then open a browser pointing to port 9006 of your Docker machine IP. ### Deploying Mosaico via Apache PHP or Django or something else? First you have to build it using grunt, then you MUST read [Serving Mosaico](https://github.com/voidlabs/mosaico/wiki/Serving-Mosaico). ### OpenSource projects including/using Mosaico [MailTrain](https://github.com/Mailtrain-org/mailtrain) is a full-featured newsletter web application written in Node and supports email editing via Mosaico since its 1.23.0 release. [GoodEnough's Mosaico](https://github.com/goodenough/mosaico-backend), born as a Mosaico fork, has now become a full web application product built around Mosaico editing, targeting agencies. [CiviCRM](https://civicrm.org) is an open source CRM built by a community of contributors and supporters, and coordinated by the Core Team. CiviCRM is web-based software used by a diverse range of organisations, particularly not-for-profit organizations (nonprofits and civic sector organizations). CiviCRM offers a complete feature set out of the box and can integrate with your website. ### Are you having issues with Mosaico?
See the [CONTRIBUTING file](https://github.com/voidlabs/mosaico/blob/master/CONTRIBUTING.md) ### Contact Us Please contact us if you have ideas or suggestions, or, even better, if you want to collaborate on this project (feedback at mosaico.io), or if you need COMMERCIAL support (sales at mosaico.io). Please DON'T write to this email to get free support: use GitHub issues for that, start the issue subject with the "[help] " prefix, and write something to let us know you already read the CONTRIBUTING file.
marble
cran
R
Package ‘marble’ May 10, 2023 Type Package Title Robust Marginal Bayesian Variable Selection for Gene-Environment Interactions Version 0.0.2 Date 2023-05-09 Description Recently, multiple marginal variable selection methods have been developed and shown to be effective in Gene-Environment interaction studies. We propose a novel marginal Bayesian variable selection method for Gene-Environment interaction studies. In particular, our marginal Bayesian method is robust to data contamination and outliers in the outcome variables. With the incorporation of spike-and-slab priors, we have implemented the Gibbs sampler based on Markov Chain Monte Carlo. The core algorithms of the package have been developed in 'C++'. Depends R (>= 3.5.0) License GPL-2 Encoding UTF-8 URL https://github.com/xilustat/marble LazyData true LinkingTo Rcpp, RcppArmadillo Imports Rcpp, stats RoxygenNote 7.2.3 NeedsCompilation yes Repository CRAN Author <NAME> [aut, cre], <NAME> [aut] Maintainer <NAME> <<EMAIL>> Date/Publication 2023-05-10 19:30:02 UTC R topics documented: marble-packag... 2 da... 3 GxESelectio... 4 marbl... 5 print.GxESelectio... 8 print.marbl... 8
marble-package Robust Marginal Bayesian Variable Selection for Gene-Environment Interactions Description In this package, we provide a set of robust marginal Bayesian variable selection methods for gene-environment interaction analysis. A Bayesian formulation of the quantile regression has been adopted to accommodate data contamination and heavy-tailed distributions in the response. The proposed method conducts a robust marginal variable selection by accounting for structural sparsity. In particular, the spike-and-slab priors are imposed to identify important main and interaction effects. In addition to the default method, users can also choose different structures (robust or non-robust) and methods without spike-and-slab priors. Details The user-friendly, integrated interface marble() allows users to flexibly choose the fitting methods they prefer. There are two arguments in marble() that control the fitting method: robust: whether to use robust methods; sparse: whether to use the spike-and-slab priors to create sparsity. The function marble() returns a marble object that contains the posterior estimates of each coefficient. Moreover, it also provides a rank list of the genetic factors and gene-environment interactions. Functions GxESelection() and print.marble() are implemented for marble objects. GxESelection() takes a marble object and returns the variable selection results. References <NAME>., <NAME>., <NAME>., and <NAME>. (2021). Identifying Gene–Environment Interactions With Robust Marginal Bayesian Variable Selection. Frontiers in Genetics, 12:667074 doi:10.3389/fgene.2021.667074 <NAME>., <NAME>., <NAME>., <NAME>., <NAME>. and <NAME>. (2020). Robust Bayesian variable selection for gene-environment interactions. doi:10.1111/biom.13670 <NAME>., <NAME>., <NAME>., <NAME>. and <NAME>. (2020). Gene–Environment Interaction: a Variable Selection Perspective. Epistasis. Methods in Molecular Biology. Humana Press (Accepted) https://arxiv.org/abs/2003.02930 <NAME>., <NAME>., and <NAME>. (2014). Integrative analysis of gene–environment interactions under a multi–response partially linear varying coefficient model. Statistics in Medicine, 33(28), 4988–4998 doi:10.1002/sim.6287 <NAME>., <NAME>., <NAME>., <NAME>., <NAME>. and <NAME>. (2014). A penalized robust method for identifying gene–environment interactions.
Genetic epidemiology, 38(3), 220-230 doi:10.1002/gepi.21795 <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>. and <NAME>. (2017). Identifying gene-environment interactions for prognosis using a robust approach. Econometrics and statistics, 4, 105-120 doi:10.1016/j.ecosta.2016.10.004 See Also marble
dat simulated data for demonstrating the features of marble. Description Simulated gene expression data for demonstrating the features of marble. Usage data("dat") Format dat consists of four components: X, Y, E, clin. Details The data model for generating Y Use subscript i to denote the ith subject. Let (Y_i, X_i, E_i, clin_i) (i = 1, . . . , n) be independent and identically distributed random vectors. Y_i is a continuous response variable representing the phenotype. X_i is the p-dimensional vector of genetic factors. The environmental factors and clinical factors are denoted as the q-dimensional vector E_i and the m-dimensional vector clin_i, respectively. The random error ε_i follows some heavy-tailed distribution. For X_ij (j = 1, . . . , p), the measurement of the jth genetic factor on the ith subject, consider the following model: Y_i = α_0 + Σ_{k=1}^q α_k E_ik + Σ_{t=1}^m γ_t clin_it + β_j X_ij + Σ_{k=1}^q η_jk X_ij E_ik + ε_i, where α_0 is the intercept, the α_k’s and γ_t’s are the regression coefficients corresponding to effects of environmental and clinical factors, respectively. The β_j’s and η_jk’s are the regression coefficients of the genetic variants and G×E interaction effects, correspondingly. The G×E interaction effects are defined with W_j = (X_j E_1, . . . , X_j E_q). With a slight abuse of notation, denote W̃_j = W_j. Denote α = (α_1, . . . , α_q)^T, γ = (γ_1, . . . , γ_m)^T, β = (β_1, . . . , β_p)^T, η = (η_1^T, . . . , η_p^T)^T, W̃ = (W̃_1, . . . , W̃_p). Then the model can be written as Y_i = E_i α + clin_i γ + X_ij β_j + W̃_ij η_j + ε_i. See Also marble Examples data(dat) dim(X)
GxESelection Variable selection for a marble object Description Variable selection for a marble object Usage GxESelection(obj, sparse) Arguments obj marble object. sparse logical flag. If TRUE, spike-and-slab priors will be used to shrink coefficients of irrelevant covariates to zero exactly. Details For class ‘Sparse’, the inclusion probability is used to indicate the importance of predictors. Here we use a binary indicator φ to denote the membership of the non-spike distribution. Take the main effect of the jth genetic factor, X_j, as an example. Suppose we have collected H posterior samples from MCMC after burn-ins. The jth G factor is included in the marginal G×E model at the hth MCMC iteration if the corresponding indicator is 1, i.e., φ_j^(h) = 1. Subsequently, the posterior probability of retaining the jth genetic main effect in the final marginal model is defined as the average of all the indicators for the jth G factor among the H posterior samples. That is, p_j = π̂(φ_j = 1|y) = (1/H) Σ_{h=1}^H φ_j^(h), j = 1, . . . , p. A larger posterior inclusion probability of the jth effect indicates stronger empirical evidence that the jth genetic main effect has a non-zero coefficient, i.e., a stronger association with the phenotypic trait. Here, we use 0.5 as a cut-off point. If p_j > 0.5, then the jth genetic main effect is included in the final model. Otherwise, the jth genetic main effect is excluded from the final model. For class ‘NonSparse’, variable selection is based on the 95% credible interval. Please check the references for more details about the variable selection.
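As a toy numerical illustration of this cut-off rule (the indicator matrix below is simulated for illustration only and is not marble output):
## H posterior samples of binary inclusion indicators for p = 4 effects
set.seed(1)
H <- 1000; p <- 4
phi <- matrix(rbinom(H*p, 1, prob = c(0.9, 0.1, 0.6, 0.4)), nrow = H, byrow = TRUE)
p_j <- colMeans(phi) ## posterior inclusion probabilities
which(p_j > 0.5) ## effects kept under the 0.5 cut-off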
Value
An object of class ‘GxESelection’ is returned, which is a list with components:
method method used for identifying important effects.
effects a list of indicators of selected effects.

References
<NAME>., <NAME>., <NAME>., and <NAME>. (2021). Identifying Gene–Environment Interactions With Robust Marginal Bayesian Variable Selection. Frontiers in Genetics, 12:667074 doi:10.3389/fgene.2021.667074

See Also
marble

Examples
data(dat)
max.steps=5000
## sparse
fit=marble(X, Y, E, clin, max.steps=max.steps)
selected=GxESelection(fit,sparse=TRUE)
selected
## non-sparse
fit=marble(X, Y, E, clin, max.steps=max.steps, sparse=FALSE)
selected=GxESelection(fit,sparse=FALSE)
selected

marble fit a robust Bayesian variable selection model for G×E interactions.

Description
fit a robust Bayesian variable selection model for G×E interactions.

Usage
marble(
  X,
  Y,
  E,
  clin,
  max.steps = 10000,
  robust = TRUE,
  sparse = TRUE,
  debugging = FALSE
)

Arguments
X the matrix of predictors (genetic factors). Each row should be an observation vector.
Y the continuous response variable.
E a matrix of environmental factors. E will be centered. The interaction terms between X (genetic factors) and E will be automatically created and included in the model.
clin a matrix of clinical variables. Clinical variables are not subject to penalization. Clinical variables will be centered and a column of 1s will be added to the clinical matrix as the intercept.
max.steps the number of MCMC iterations.
robust logical flag. If TRUE, robust methods will be used.
sparse logical flag. If TRUE, spike-and-slab priors will be used to shrink coefficients of irrelevant covariates to zero exactly.
debugging logical flag. If TRUE, progress will be output to the console and extra information will be returned.

Details
Consider the data model described in "dat":

$$Y_i = \alpha_0 + \sum_{k=1}^{q} \alpha_k E_{ik} + \sum_{t=1}^{m} \gamma_t \mathrm{clin}_{it} + \beta_j X_{ij} + \sum_{k=1}^{q} \eta_{jk} X_{ij} E_{ik} + \epsilon_i,$$

where $\alpha_0$ is the intercept, and the $\alpha_k$'s and $\gamma_t$'s are the regression coefficients corresponding to effects of environmental and clinical factors. The $\beta_j$'s and $\eta_{jk}$'s are the regression coefficients of the genetic variants and G×E interaction effects, correspondingly.
When sparse=TRUE (default), spike-and-slab priors are imposed to identify important main and interaction effects. If sparse=FALSE, Laplacian shrinkage will be used.
When robust=TRUE (default), the distribution of $\epsilon_i$ is defined as a Laplace distribution with density $f(\epsilon_i \mid \nu) = \frac{\nu}{2} \exp\{-\nu |\epsilon_i|\}$, $(i = 1, \dots, n)$, which leads to a Bayesian formulation of LAD regression. If robust=FALSE, $\epsilon_i$ follows a normal distribution.
Here, a rank list of the main and interaction effects is provided. For the method incorporating spike-and-slab priors, the inclusion probability is used to indicate the importance of predictors. We use a binary indicator $\phi$ to denote the membership of the non-spike distribution. Take the main effect of the jth genetic factor, $X_j$, as an example. Suppose we have collected H posterior samples from MCMC after burn-ins. The jth G factor is included in the marginal G×E model at the hth MCMC iteration if the corresponding indicator is 1, i.e., $\phi_j^{(h)} = 1$. Subsequently, the posterior probability of retaining the jth genetic main effect in the final marginal model is defined as the average of all the indicators for the jth G factor among the H posterior samples. That is, $p_j = \hat{\pi}(\phi_j = 1 \mid y) = \frac{1}{H} \sum_{h=1}^{H} \phi_j^{(h)}$, $j = 1, \dots, p$.
A larger posterior inclusion probability of the jth effect indicates stronger empirical evidence that the jth genetic main effect has a non-zero coefficient, i.e., a stronger association with the phenotypic trait. For the method without spike-and-slab priors, variable selection is based on different levels of credible intervals.
X, clin and E will all be standardized before the generation of interaction terms to avoid multicollinearity between main effects and interaction terms.
Please check the references for more details about the prior distributions.

Value
An object of class ‘marble’ is returned, which is a list with components:
posterior the posterior samples of coefficients from the MCMC.
coefficient the estimated value of coefficients.
ranklist the rank list of main and interaction effects.
burn.in the total number of burn-ins.
iterations the total number of iterations.
design the design matrix of all effects.

References
<NAME>., <NAME>., <NAME>., and <NAME>. (2021). Identifying Gene–Environment Interactions With Robust Marginal Bayesian Variable Selection. Frontiers in Genetics, 12:667074 doi:10.3389/fgene.2021.667074

See Also
GxESelection

Examples
data(dat)
## default method
max.steps=5000
fit=marble(X, Y, E, clin, max.steps=max.steps)
## coefficients of parameters
fit$coefficient
## Estimated values of main G effects
fit$coefficient$G
## Estimated values of interaction effects
fit$coefficient$GE
## Rank list of main G effects and interactions
fit$ranklist
## alternative: robust non-sparse selection
fit=marble(X, Y, E, clin, max.steps=max.steps, robust=TRUE, sparse=FALSE)
fit$coefficient
fit$ranklist
## alternative: non-robust non-sparse selection
fit=marble(X, Y, E, clin, max.steps=max.steps, robust=FALSE, sparse=FALSE)
fit$coefficient
fit$ranklist

print.GxESelection print a GxESelection object

Description
Print a summary of a GxESelection object

Usage
## S3 method for class 'GxESelection'
print(x, digits = max(3, getOption("digits") - 3), ...)

Arguments
x GxESelection object.
digits significant digits in printout.
... other print arguments.

Value
No return value, called for side effects.

See Also
GxESelection

print.marble print a marble object

Description
Print a summary of a marble object

Usage
## S3 method for class 'marble'
print(x, digits = max(3, getOption("digits") - 3), ...)

Arguments
x marble object.
digits significant digits in printout.
... other print arguments.

Value
No return value, called for side effects.

See Also
marble
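A hypothetical usage sketch for the two print methods, continuing the objects created in the marble() and GxESelection() examples above (digits chosen here only for a compact printout):

print(fit, digits = 3)       # marble object
print(selected, digits = 3)  # GxESelection object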
@getstation/slack
npm
JavaScript
### A [Slack Web API](https://api.slack.com/methods) client 🌱🙌💕

* Written in modern JavaScript; tested for Node and the browser
* Complete support for the [Slack Web API](https://api.slack.com/methods)
* Perfect symmetry: JS method signatures match Web API docs
* Choose your async adventure: all methods accept either a Node style errback or return a `Promise`
* Opt into an OO style class instance that applies `token` to all methods
* Well tested, CI, and Apache2 licensed
* Only one dependency: `tiny-json-http`
* Tiny: **7kb** browserified/minified

Install 🌟📦
---

```
npm i slack
```

Usage ✨🚀
===

`slack` mirrors the published API docs exactly, because it's generated from those docs! The default interface is stateless functions; it has remained unchanged since `1.0.0`, and that will continue to be the case.

```
var slack = require('slack')

// logs {args:{hello:'world'}}
slack.api.test({hello:'world'}, console.log)

// :new: opt into promises
slack.api.test({nice:1}).then(console.log).catch(console.log)
```

Due to popular demand, an OO style is supported. For an instance of `Slack`, all methods come prebound with the `token` parameter applied.

```
const token = process.env.SLACK_BOT_TOKEN
const Slack = require('slack')
const bot = new Slack({token})

// logs {args:{hyper:'card'}}
bot.api.test({hyper:'card'}).then(console.log)
```

Using `async`/`await` in Node 8.x:

```
let token = process.env.SLACK_BOT_TOKEN
let Slack = require('slack')
let bot = new Slack({token})

;(async function main() {
  // logs {args:{hyper:'card'}}
  var result = await bot.api.test({hyper:'card'})
  console.log(result)
})()
```

Choose whichever style works best for your project, deployment needs and team preference. ♥️🍺

### Error Handling

Some methods (like [`slack.dialog.open`](https://api.slack.com/methods/dialog.open)) provide additional context for errors through a `response_metadata` object. This will be exposed as a `messages` property on the errors that are thrown.

```
slack.dialog.open(options).catch(err => {
  console.log(err.messages)
})
```

### Specialized Electron Support

Electron ships its own HTTP module called `electron.net`, which can have better performance and offers more extensive HTTP proxy handling. You can opt into Electron support by passing `useElectronNet:true` to the `Slack` constructor.

```
import {app, BrowserWindow, net} from 'electron'
import Slack from 'slack'

const slack = new Slack({useElectronNet:true})
```

You can set up HTTP proxy authentication logic by passing `login` to the constructor.

```
function login(authInfo, callback) {
  callback('username', 'password')
}

const slack = new Slack({useElectronNet:true, login})
```

[Read more about `electron.net` from the source!](https://github.com/electron/electron/blob/master/docs/api/net.md)

### Test Setup 🔒🔑👈

Clone this repo and create a file called `.env` in the root with the following:

```
SLACK_BOT_TOKEN=xxxx
SLACK_CLIENT_ID=xxxx
SLACK_CLIENT_SECRET=xxxx
```

You can get a `SLACK_BOT_TOKEN` for testing [here](https://api.slack.com/web). You need to register an app for a `SLACK_CLIENT_ID` and `SLACK_CLIENT_SECRET`. The tests require the app to have the `channels:history` scope. You can [read about bot tokens here](https://api.slack.com/docs/token-types#bot).

Testing 💚💚💚
---

👉 In Node:

```
npm test
```

👉 Or the browser:

```
npm run btest
```

Slack Web API 🎉🐝🚩
===

The entire Slack Web API is supported. All method signatures accept a `params` object and an optional Node style callback (an errback); if the callback is absent, the method returns a `Promise`.
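For instance, here is a sketch of both calling styles using `chat.postMessage` from the method list below (the channel and text values are placeholders):

```
const slack = require('slack')

const params = {
  token: process.env.SLACK_BOT_TOKEN,
  channel: '#general', // placeholder channel
  text: 'hello'
}

// Node style errback
slack.chat.postMessage(params, (err, data) => {
  if (err) return console.error(err)
  console.log(data.ts)
})

// or omit the callback to get a Promise
slack.chat.postMessage(params)
  .then(data => console.log(data.ts))
  .catch(console.error)
```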
Required params are documented inline below. * [`slack.api.test({})`](https://api.slack.com/methods/api.test) * [`slack.apps.permissions.info({token})`](https://api.slack.com/methods/apps.permissions.info) * [`slack.apps.permissions.request({token, scopes, trigger_id})`](https://api.slack.com/methods/apps.permissions.request) * [`slack.apps.permissions.resources.list({token})`](https://api.slack.com/methods/apps.permissions.resources.list) * [`slack.apps.permissions.scopes.list({token})`](https://api.slack.com/methods/apps.permissions.scopes.list) * [`slack.apps.permissions.users.list({token})`](https://api.slack.com/methods/apps.permissions.users.list) * [`slack.apps.permissions.users.request({token, scopes, trigger_id, user})`](https://api.slack.com/methods/apps.permissions.users.request) * [`slack.apps.uninstall({token, client_id, client_secret})`](https://api.slack.com/methods/apps.uninstall) * [`slack.auth.revoke({token})`](https://api.slack.com/methods/auth.revoke) * [`slack.auth.test({token})`](https://api.slack.com/methods/auth.test) * [`slack.bots.info({token})`](https://api.slack.com/methods/bots.info) * [`slack.channels.archive({token, channel})`](https://api.slack.com/methods/channels.archive) * [`slack.channels.create({token, name})`](https://api.slack.com/methods/channels.create) * [`slack.channels.history({token, channel})`](https://api.slack.com/methods/channels.history) * [`slack.channels.info({token, channel})`](https://api.slack.com/methods/channels.info) * [`slack.channels.invite({token, channel, user})`](https://api.slack.com/methods/channels.invite) * [`slack.channels.join({token, name})`](https://api.slack.com/methods/channels.join) * [`slack.channels.kick({token, channel, user})`](https://api.slack.com/methods/channels.kick) * [`slack.channels.leave({token, channel})`](https://api.slack.com/methods/channels.leave) * [`slack.channels.list({token})`](https://api.slack.com/methods/channels.list) * [`slack.channels.mark({token, channel, ts})`](https://api.slack.com/methods/channels.mark) * [`slack.channels.rename({token, channel, name})`](https://api.slack.com/methods/channels.rename) * [`slack.channels.replies({token, channel, thread_ts})`](https://api.slack.com/methods/channels.replies) * [`slack.channels.setPurpose({token, channel, purpose})`](https://api.slack.com/methods/channels.setPurpose) * [`slack.channels.setTopic({token, channel, topic})`](https://api.slack.com/methods/channels.setTopic) * [`slack.channels.unarchive({token, channel})`](https://api.slack.com/methods/channels.unarchive) * [`slack.chat.delete({token, channel, ts})`](https://api.slack.com/methods/chat.delete) * [`slack.chat.getPermalink({token, channel, message_ts})`](https://api.slack.com/methods/chat.getPermalink) * [`slack.chat.meMessage({token, channel, text})`](https://api.slack.com/methods/chat.meMessage) * [`slack.chat.postEphemeral({token, channel, text, user})`](https://api.slack.com/methods/chat.postEphemeral) * [`slack.chat.postMessage({token, channel, text})`](https://api.slack.com/methods/chat.postMessage) * [`slack.chat.unfurl({token, channel, ts, unfurls})`](https://api.slack.com/methods/chat.unfurl) * [`slack.chat.update({token, channel, text, ts})`](https://api.slack.com/methods/chat.update) * [`slack.conversations.archive({token, channel})`](https://api.slack.com/methods/conversations.archive) * [`slack.conversations.close({token, channel})`](https://api.slack.com/methods/conversations.close) * [`slack.conversations.create({token, 
name})`](https://api.slack.com/methods/conversations.create) * [`slack.conversations.history({token, channel})`](https://api.slack.com/methods/conversations.history) * [`slack.conversations.info({token, channel})`](https://api.slack.com/methods/conversations.info) * [`slack.conversations.invite({token, channel, users})`](https://api.slack.com/methods/conversations.invite) * [`slack.conversations.join({token, channel})`](https://api.slack.com/methods/conversations.join) * [`slack.conversations.kick({token, channel, user})`](https://api.slack.com/methods/conversations.kick) * [`slack.conversations.leave({token, channel})`](https://api.slack.com/methods/conversations.leave) * [`slack.conversations.list({token})`](https://api.slack.com/methods/conversations.list) * [`slack.conversations.members({token, channel})`](https://api.slack.com/methods/conversations.members) * [`slack.conversations.open({token})`](https://api.slack.com/methods/conversations.open) * [`slack.conversations.rename({token, channel, name})`](https://api.slack.com/methods/conversations.rename) * [`slack.conversations.replies({token, channel, ts})`](https://api.slack.com/methods/conversations.replies) * [`slack.conversations.setPurpose({token, channel, purpose})`](https://api.slack.com/methods/conversations.setPurpose) * [`slack.conversations.setTopic({token, channel, topic})`](https://api.slack.com/methods/conversations.setTopic) * [`slack.conversations.unarchive({token, channel})`](https://api.slack.com/methods/conversations.unarchive) * [`slack.dialog.open({token, dialog, trigger_id})`](https://api.slack.com/methods/dialog.open) * [`slack.dnd.endDnd({token})`](https://api.slack.com/methods/dnd.endDnd) * [`slack.dnd.endSnooze({token})`](https://api.slack.com/methods/dnd.endSnooze) * [`slack.dnd.info({token})`](https://api.slack.com/methods/dnd.info) * [`slack.dnd.setSnooze({token, num_minutes})`](https://api.slack.com/methods/dnd.setSnooze) * [`slack.dnd.teamInfo({token})`](https://api.slack.com/methods/dnd.teamInfo) * [`slack.emoji.list({token})`](https://api.slack.com/methods/emoji.list) * [`slack.files.comments.add({token, comment, file})`](https://api.slack.com/methods/files.comments.add) * [`slack.files.comments.delete({token, file, id})`](https://api.slack.com/methods/files.comments.delete) * [`slack.files.comments.edit({token, comment, file, id})`](https://api.slack.com/methods/files.comments.edit) * [`slack.files.delete({token, file})`](https://api.slack.com/methods/files.delete) * [`slack.files.info({token, file})`](https://api.slack.com/methods/files.info) * [`slack.files.list({token})`](https://api.slack.com/methods/files.list) * [`slack.files.revokePublicURL({token, file})`](https://api.slack.com/methods/files.revokePublicURL) * [`slack.files.sharedPublicURL({token, file})`](https://api.slack.com/methods/files.sharedPublicURL) * [`slack.files.upload({token})`](https://api.slack.com/methods/files.upload) * [`slack.groups.archive({token, channel})`](https://api.slack.com/methods/groups.archive) * [`slack.groups.create({token, name})`](https://api.slack.com/methods/groups.create) * [`slack.groups.createChild({token, channel})`](https://api.slack.com/methods/groups.createChild) * [`slack.groups.history({token, channel})`](https://api.slack.com/methods/groups.history) * [`slack.groups.info({token, channel})`](https://api.slack.com/methods/groups.info) * [`slack.groups.invite({token, channel, user})`](https://api.slack.com/methods/groups.invite) * [`slack.groups.kick({token, channel, 
user})`](https://api.slack.com/methods/groups.kick) * [`slack.groups.leave({token, channel})`](https://api.slack.com/methods/groups.leave) * [`slack.groups.list({token})`](https://api.slack.com/methods/groups.list) * [`slack.groups.mark({token, channel, ts})`](https://api.slack.com/methods/groups.mark) * [`slack.groups.open({token, channel})`](https://api.slack.com/methods/groups.open) * [`slack.groups.rename({token, channel, name})`](https://api.slack.com/methods/groups.rename) * [`slack.groups.replies({token, channel, thread_ts})`](https://api.slack.com/methods/groups.replies) * [`slack.groups.setPurpose({token, channel, purpose})`](https://api.slack.com/methods/groups.setPurpose) * [`slack.groups.setTopic({token, channel, topic})`](https://api.slack.com/methods/groups.setTopic) * [`slack.groups.unarchive({token, channel})`](https://api.slack.com/methods/groups.unarchive) * [`slack.im.close({token, channel})`](https://api.slack.com/methods/im.close) * [`slack.im.history({token, channel})`](https://api.slack.com/methods/im.history) * [`slack.im.list({token})`](https://api.slack.com/methods/im.list) * [`slack.im.mark({token, channel, ts})`](https://api.slack.com/methods/im.mark) * [`slack.im.open({token, user})`](https://api.slack.com/methods/im.open) * [`slack.im.replies({token, channel, thread_ts})`](https://api.slack.com/methods/im.replies) * [`slack.migration.exchange({token, users})`](https://api.slack.com/methods/migration.exchange) * [`slack.mpim.close({token, channel})`](https://api.slack.com/methods/mpim.close) * [`slack.mpim.history({token, channel})`](https://api.slack.com/methods/mpim.history) * [`slack.mpim.list({token})`](https://api.slack.com/methods/mpim.list) * [`slack.mpim.mark({token, channel, ts})`](https://api.slack.com/methods/mpim.mark) * [`slack.mpim.open({token, users})`](https://api.slack.com/methods/mpim.open) * [`slack.mpim.replies({token, channel, thread_ts})`](https://api.slack.com/methods/mpim.replies) * [`slack.oauth.access({client_id, client_secret, code})`](https://api.slack.com/methods/oauth.access) * [`slack.oauth.token({client_id, client_secret, code})`](https://api.slack.com/methods/oauth.token) * [`slack.pins.add({token, channel})`](https://api.slack.com/methods/pins.add) * [`slack.pins.list({token, channel})`](https://api.slack.com/methods/pins.list) * [`slack.pins.remove({token, channel})`](https://api.slack.com/methods/pins.remove) * [`slack.reactions.add({token, name})`](https://api.slack.com/methods/reactions.add) * [`slack.reactions.get({token})`](https://api.slack.com/methods/reactions.get) * [`slack.reactions.list({token})`](https://api.slack.com/methods/reactions.list) * [`slack.reactions.remove({token, name})`](https://api.slack.com/methods/reactions.remove) * [`slack.reminders.add({token, text, time})`](https://api.slack.com/methods/reminders.add) * [`slack.reminders.complete({token, reminder})`](https://api.slack.com/methods/reminders.complete) * [`slack.reminders.delete({token, reminder})`](https://api.slack.com/methods/reminders.delete) * [`slack.reminders.info({token, reminder})`](https://api.slack.com/methods/reminders.info) * [`slack.reminders.list({token})`](https://api.slack.com/methods/reminders.list) * [`slack.rtm.connect({token})`](https://api.slack.com/methods/rtm.connect) * [`slack.rtm.start({token})`](https://api.slack.com/methods/rtm.start) * [`slack.search.all({token, query})`](https://api.slack.com/methods/search.all) * [`slack.search.files({token, query})`](https://api.slack.com/methods/search.files) * 
[`slack.search.messages({token, query})`](https://api.slack.com/methods/search.messages) * [`slack.stars.add({token})`](https://api.slack.com/methods/stars.add) * [`slack.stars.list({token})`](https://api.slack.com/methods/stars.list) * [`slack.stars.remove({token})`](https://api.slack.com/methods/stars.remove) * [`slack.team.accessLogs({token})`](https://api.slack.com/methods/team.accessLogs) * [`slack.team.billableInfo({token})`](https://api.slack.com/methods/team.billableInfo) * [`slack.team.info({token})`](https://api.slack.com/methods/team.info) * [`slack.team.integrationLogs({token})`](https://api.slack.com/methods/team.integrationLogs) * [`slack.team.profile.get({token})`](https://api.slack.com/methods/team.profile.get) * [`slack.usergroups.create({token, name})`](https://api.slack.com/methods/usergroups.create) * [`slack.usergroups.disable({token, usergroup})`](https://api.slack.com/methods/usergroups.disable) * [`slack.usergroups.enable({token, usergroup})`](https://api.slack.com/methods/usergroups.enable) * [`slack.usergroups.list({token})`](https://api.slack.com/methods/usergroups.list) * [`slack.usergroups.update({token, usergroup})`](https://api.slack.com/methods/usergroups.update) * [`slack.usergroups.users.list({token, usergroup})`](https://api.slack.com/methods/usergroups.users.list) * [`slack.usergroups.users.update({token, usergroup, users})`](https://api.slack.com/methods/usergroups.users.update) * [`slack.users.conversations({token})`](https://api.slack.com/methods/users.conversations) * [`slack.users.deletePhoto({token})`](https://api.slack.com/methods/users.deletePhoto) * [`slack.users.getPresence({token, user})`](https://api.slack.com/methods/users.getPresence) * [`slack.users.identity({token})`](https://api.slack.com/methods/users.identity) * [`slack.users.info({token, user})`](https://api.slack.com/methods/users.info) * [`slack.users.list({token})`](https://api.slack.com/methods/users.list) * [`slack.users.lookupByEmail({token, email})`](https://api.slack.com/methods/users.lookupByEmail) * [`slack.users.profile.get({token})`](https://api.slack.com/methods/users.profile.get) * [`slack.users.profile.set({token})`](https://api.slack.com/methods/users.profile.set) * [`slack.users.setActive({token})`](https://api.slack.com/methods/users.setActive) * [`slack.users.setPhoto({token, image})`](https://api.slack.com/methods/users.setPhoto) * [`slack.users.setPresence({token, presence})`](https://api.slack.com/methods/users.setPresence)

Contributing
===

The code for the client is generated by scraping the [Slack Web API documentation](https://api.slack.com/methods). Regenerate from the latest Slack documentation by running 🏃:

```
npm run generate
```

Portions of this README are generated as well; to make edits, update `readme.tmpl` and run the same command ☁️☔☀️🌻.

### Keywords

* slack
* api
* client
grinpy
readthedoc
Unknown
GrinPy Documentation Release latest <NAME>, <NAME> May 30, 2019

Contents
4.1 Tutorial 9
4.2 Reference 10
4.3 License 38

GrinPy is a NetworkX extension for calculating graph invariants. This extension imports all of NetworkX into the same interface as GrinPy for ease of use and provides the following extensions:
• extended functional interface for graph properties
• calculation of NP-hard invariants such as: independence number, domination number and zero forcing number
• calculation of several invariants that are known to be related to the NP-hard invariants, such as the residue, the annihilation number and the sub-domination number
Our goal is to provide the most comprehensive list of invariants. We will continue adding to this list as time goes on, and we invite others to join us by contributing their own implementations of algorithms for computing new or existing GrinPy invariants.

CHAPTER 1 Audience
We envision GrinPy's primary audience to be professional mathematicians and students of mathematics. Computer scientists, electrical engineers, physicists, biologists, chemists and social scientists may also find GrinPy's extensions to the standard NetworkX package useful.

CHAPTER 2 History
GrinPy was originally created to aid the developers, <NAME> and <NAME>, in creating an ordered tree of graph databases for use in an experimental automated conjecturing program. It quickly became clear that a Python package for calculating graph invariants would be useful. GrinPy was created in November 2017 and is still in its infancy. We look forward to what the future brings!

CHAPTER 3 Free Software
GrinPy is free software; you can redistribute it and/or modify it under the terms of the 3-clause BSD license, the same license that NetworkX is released under. We greatly appreciate contributions. Please join us on GitHub.

CHAPTER 4 Documentation

4.1 Tutorial
This guide can help you start working with GrinPy. We assume basic knowledge of NetworkX. For more information on how to use NetworkX, see the NetworkX Documentation.

4.1.1 Calculating the Independence Number
For this example we will create a cycle of order 5.
>>> import grinpy as gp
>>> G = gp.cycle_graph(5)
In order to compute the independence number of the cycle, we simply call the independence_number method on the graph:
>>> gp.independence_number(G)
2
It's that simple!
Note: In this release (version latest), all methods are defined only for simple graphs. In future releases, we will expand to digraphs and multigraphs.

4.1.2 Get a Maximum Independent Set
If we are interested in finding a maximum independent set in the graph:
>>> gp.max_independent_set(G)
[0, 2]

4.1.3 Determine if a Given Set is Independent
We may check whether or not a given set is independent:
>>> gp.is_independent_set(G, [0, 1])
False
>>> gp.is_independent_set(G, [1, 3])
True

4.1.4 General Notes
The vast majority of NP-hard invariants will include three methods corresponding to the above examples. That is, for each invariant, there will be three methods:
• Calculate the invariant
• Get a set of nodes realizing the invariant
• Determine whether or not a given set of nodes meets some necessary condition for the invariant
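As a concrete illustration of this three-method pattern, here is a sketch using the domination functions documented in the reference section below. The values are hand-checked (the domination number of a 5-cycle is 2), though the particular set returned by min_dominating_set may vary:
>>> import grinpy as gp
>>> G = gp.cycle_graph(5)
>>> gp.domination_number(G)
2
>>> S = gp.min_dominating_set(G)
>>> gp.is_k_dominating_set(G, S, 1)
True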
4.2 Reference
Release latest
Date May 30, 2019

4.2.1 Classes
Release latest
Date May 30, 2019

HavelHakimi
Overview
class grinpy.HavelHakimi(sequence)
Class for performing and keeping track of the Havel Hakimi process on a sequence of positive integers.
Parameters sequence (input sequence) – The sequence of integers to initialize the Havel Hakimi process.

Methods
HavelHakimi.__init__(sequence) Initialize self.
HavelHakimi.depth() Return the depth of the Havel Hakimi process.
HavelHakimi.get_elimination_sequence() Return the elimination sequence of the Havel Hakimi process.
HavelHakimi.get_initial_sequence() Return the initial sequence passed to the Havel Hakimi class for initialization.
HavelHakimi.is_graphic() Return whether or not the initial sequence is graphic.
HavelHakimi.get_process() Return the list of sequences produced during the Havel Hakimi process.
HavelHakimi.residue() Return the residue of the sequence.

grinpy.HavelHakimi.__init__
HavelHakimi.__init__(sequence)
Initialize self. See help(type(self)) for accurate signature.

grinpy.HavelHakimi.depth
HavelHakimi.depth()
Return the depth of the Havel Hakimi process.
Returns The depth of the Havel Hakimi process.
Return type int

grinpy.HavelHakimi.get_elimination_sequence
HavelHakimi.get_elimination_sequence()
Return the elimination sequence of the Havel Hakimi process.
Returns The elimination sequence of the Havel Hakimi process.
Return type list

grinpy.HavelHakimi.get_initial_sequence
HavelHakimi.get_initial_sequence()
Return the initial sequence passed to the Havel Hakimi class for initialization.
Returns The initial sequence passed to the Havel Hakimi class.
Return type list

grinpy.HavelHakimi.is_graphic
HavelHakimi.is_graphic()
Return whether or not the initial sequence is graphic.
Returns True if the initial sequence is graphic. False otherwise.
Return type bool

grinpy.HavelHakimi.get_process
HavelHakimi.get_process()
Return the list of sequences produced during the Havel Hakimi process. The first element in the list is the initial sequence.
Returns The list of sequences produced by the Havel Hakimi process.
Return type list

grinpy.HavelHakimi.residue
HavelHakimi.residue()
Return the residue of the sequence.
Returns The residue of the initial sequence. If the sequence is not graphic, this will be None.
Return type int
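A hand-checked sketch of the class in use, assuming the methods behave as documented above (for the degree sequence of a 4-cycle, the process terminates in two zeros, so the residue is 2):
>>> import grinpy as gp
>>> hh = gp.HavelHakimi([2, 2, 2, 2])
>>> hh.is_graphic()
True
>>> hh.residue()
2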
4.2.2 Functions
Release latest
Date May 30, 2019

Degree
Assorted degree related graph utilities.
degree_sequence(G) Return the degree sequence of G.
min_degree(G) Return the minimum degree of G.
max_degree(G) Return the maximum degree of G.
average_degree(G) Return the average degree of G.
number_of_nodes_of_degree_k(G, k) Return the number of nodes of the graph with degree equal to k.
number_of_degree_one_nodes(G) Return the number of nodes of the graph with degree equal to 1.
number_of_min_degree_nodes(G) Return the number of nodes of the graph with degree equal to the minimum degree of the graph.
number_of_max_degree_nodes(G) Return the number of nodes of the graph with degree equal to the maximum degree of the graph.
neighborhood_degree_list(G, nbunch) Return a list of the unique degrees of all neighbors of nodes in nbunch.
closed_neighborhood_degree_list(G, nbunch) Return a list of the unique degrees of all nodes in the closed neighborhood of the nodes in nbunch.

grinpy.functions.degree.degree_sequence
grinpy.functions.degree.degree_sequence(G)
Return the degree sequence of G.
The degree sequence of a graph is the sequence of degrees of the nodes in the graph.
Parameters G (NetworkX graph) – An undirected graph.
Returns The degree sequence of the graph.
Return type list
Examples
>>> G = gp.path_graph(3) # Path on 3 nodes
>>> gp.degree_sequence(G)
[1, 2, 1]

grinpy.functions.degree.min_degree
grinpy.functions.degree.min_degree(G)
Return the minimum degree of G.
The minimum degree of a graph is the smallest degree of any node in the graph.
Parameters G (NetworkX graph) – An undirected graph.
Returns The minimum degree of the graph.
Return type int
Examples
>>> G = gp.path_graph(3) # Path on 3 nodes
>>> gp.min_degree(G)
1

grinpy.functions.degree.max_degree
grinpy.functions.degree.max_degree(G)
Return the maximum degree of G.
The maximum degree of a graph is the largest degree of any node in the graph.
Parameters G (NetworkX graph) – An undirected graph.
Returns The maximum degree of the graph.
Return type int
Examples
>>> G = gp.path_graph(3) # Path on 3 nodes
>>> gp.max_degree(G)
2

grinpy.functions.degree.average_degree
grinpy.functions.degree.average_degree(G)
Return the average degree of G.
The average degree of a graph is the average of the degrees of all nodes in the graph.
Parameters G (NetworkX graph) – An undirected graph.
Returns The average degree of the graph.
Return type float
Examples
>>> G = gp.star_graph(3) # Star on 4 nodes
>>> gp.average_degree(G)
1.5

grinpy.functions.degree.number_of_nodes_of_degree_k
grinpy.functions.degree.number_of_nodes_of_degree_k(G, k)
Return the number of nodes of the graph with degree equal to k.
Parameters
• G (NetworkX graph) – An undirected graph.
• k (int) – A positive integer.
Returns The number of nodes in the graph with degree equal to k.
Return type int
See also: number_of_leaves(), number_of_min_degree_nodes(), number_of_max_degree_nodes()
Examples
>>> G = gp.path_graph(3) # Path on 3 nodes
>>> gp.number_of_nodes_of_degree_k(G, 1)
2

grinpy.functions.degree.number_of_degree_one_nodes
grinpy.functions.degree.number_of_degree_one_nodes(G)
Return the number of nodes of the graph with degree equal to 1. A vertex with degree equal to 1 is also called a leaf.
Parameters G (NetworkX graph) – An undirected graph.
Returns The number of nodes in the graph with degree equal to 1.
Return type int
See also: number_of_nodes_of_degree_k(), number_of_min_degree_nodes(), number_of_max_degree_nodes()
Examples
>>> G = gp.path_graph(3) # Path on 3 nodes
>>> gp.number_of_degree_one_nodes(G)
2

grinpy.functions.degree.number_of_min_degree_nodes
grinpy.functions.degree.number_of_min_degree_nodes(G)
Return the number of nodes of the graph with degree equal to the minimum degree of the graph.
Parameters G (NetworkX graph) – An undirected graph.
Returns The number of nodes in the graph with degree equal to the minimum degree.
Return type int
See also: number_of_nodes_of_degree_k(), number_of_leaves(), number_of_max_degree_nodes(), min_degree()
Examples
>>> G = gp.path_graph(3) # Path on 3 nodes
>>> gp.number_of_min_degree_nodes(G)
2

grinpy.functions.degree.number_of_max_degree_nodes
grinpy.functions.degree.number_of_max_degree_nodes(G)
Return the number of nodes of the graph with degree equal to the maximum degree of the graph.
Parameters G (NetworkX graph) – An undirected graph.
Returns The number of nodes in the graph with degree equal to the maximum degree.
Return type int
See also: number_of_nodes_of_degree_k(), number_of_leaves(), number_of_min_degree_nodes(), max_degree()
Examples
>>> G = gp.path_graph(3) # Path on 3 nodes
>>> gp.number_of_max_degree_nodes(G)
1

grinpy.functions.degree.neighborhood_degree_list
grinpy.functions.degree.neighborhood_degree_list(G, nbunch)
Return a list of the unique degrees of all neighbors of nodes in nbunch.
Parameters
• G (NetworkX graph) – An undirected graph.
• nbunch (a single node or iterable container of nodes) –
Returns A list of the degrees of all nodes in the neighborhood of the nodes in nbunch.
Return type list
See also: closed_neighborhood_degree_list(), neighborhood()
Examples
>>> import grinpy as gp
>>> G = gp.path_graph(3) # Path on 3 nodes
>>> gp.neighborhood_degree_list(G, 1)
[1, 2]

grinpy.functions.degree.closed_neighborhood_degree_list
grinpy.functions.degree.closed_neighborhood_degree_list(G, nbunch)
Return a list of the unique degrees of all nodes in the closed neighborhood of the nodes in nbunch.
Parameters
• G (NetworkX graph) – An undirected graph.
• nbunch (a single node or iterable container of nodes) –
Returns A list of the degrees of all nodes in the closed neighborhood of the nodes in nbunch.
Return type list
See also: closed_neighborhood(), neighborhood_degree_list()
Examples
>>> import grinpy as gp
>>> G = gp.path_graph(3) # Path on 3 nodes
>>> gp.closed_neighborhood_degree_list(G, 1)
[1, 2, 2]

Neighborhoods
Functions for computing neighborhoods of vertices and sets of vertices.
are_neighbors(G, v, nbunch) Returns true if v is adjacent to any of the nodes in nbunch.
closed_neighborhood(G, nbunch) Return a list of all neighbors of the nodes in nbunch, including the nodes in nbunch.
common_neighbors(G, nbunch) Returns a list of all nodes in G that are adjacent to every node in nbunch.
neighborhood(G, nbunch) Return a list of all neighbors of the nodes in nbunch.

grinpy.functions.neighborhoods.are_neighbors
grinpy.functions.neighborhoods.are_neighbors(G, v, nbunch)
Returns true if v is adjacent to any of the nodes in nbunch. Otherwise, returns false.
Parameters
• G (NetworkX graph) – An undirected graph.
• v (node) – A node in the graph.
• nbunch – A single node or iterable container
Returns If nbunch is a single node, True if v is a neighbor of that node and False otherwise. If nbunch is an iterable, True if v is a neighbor of some node in nbunch and False otherwise.
Return type bool
Examples
>>> G = gp.star_graph(3) # Star on 4 nodes
>>> gp.are_neighbors(G, 0, 1)
True
>>> gp.are_neighbors(G, 1, 2)
False
>>> gp.are_neighbors(G, 1, [0, 2])
True

grinpy.functions.neighborhoods.closed_neighborhood
grinpy.functions.neighborhoods.closed_neighborhood(G, nbunch)
Return a list of all neighbors of the nodes in nbunch, including the nodes in nbunch.
Parameters
• G (NetworkX graph) – An undirected graph.
• nbunch – A single node or iterable container
Returns A list containing all nodes that are a neighbor of some node in nbunch together with all nodes in nbunch.
Return type list
See also: neighborhood()
Examples
>>> G = gp.path_graph(3) # Path on 3 nodes
>>> gp.closed_neighborhood(G, 1)
[0, 1, 2]

grinpy.functions.neighborhoods.common_neighbors
grinpy.functions.neighborhoods.common_neighbors(G, nbunch)
Returns a list of all nodes in G that are adjacent to every node in nbunch.
Parameters
• G (NetworkX graph) – An undirected graph.
• nbunch – A single node or iterable container
Returns All nodes adjacent to every node in nbunch. If nbunch contains only a single node, that node's neighborhood is returned.
Return type list

grinpy.functions.neighborhoods.neighborhood
grinpy.functions.neighborhoods.neighborhood(G, nbunch)
Return a list of all neighbors of the nodes in nbunch.
Parameters
• G (NetworkX graph) – An undirected graph.
• nbunch (a single node or iterable container) –
Returns A list containing all nodes that are a neighbor of some node in nbunch.
Return type list
See also: closed_neighborhood()
Examples
>>> G = gp.path_graph(3) # Path on 3 nodes
>>> gp.neighborhood(G, 1)
[0, 2]

4.2.3 Invariants
Release latest
Date May 30, 2019

Chromatic Number
Functions for computing the chromatic number of a graph.
chromatic_number(G) Returns the chromatic number of G.

grinpy.invariants.chromatic.chromatic_number
grinpy.invariants.chromatic.chromatic_number(G)
Returns the chromatic number of G.
The chromatic number of a graph G is the size of a minimum coloring of the nodes in G such that no two adjacent nodes have the same color. The method for computing the chromatic number is an implementation of the algorithm discovered by Ram and Rama.
Parameters G (NetworkX graph) – An undirected graph.
Returns The chromatic number of G.
Return type int
References
<NAME>, <NAME>, An alternate method to find the chromatic number of a finite, connected graph, arXiv preprint arXiv:1309.3642, (2013)

Clique Number
Functions for computing clique related invariants for a graph.
clique_number(G[, cliques]) Return the clique number of the graph.

grinpy.invariants.clique.clique_number
grinpy.invariants.clique.clique_number(G, cliques=None)
Return the clique number of the graph.
A clique in a graph G is a complete subgraph. The clique number is the size of a largest clique. This function is a wrapper for the NetworkX graph_clique_number() method in networkx.algorithms.clique.
Parameters
• G (NetworkX graph) – An undirected graph.
• cliques (list) – A list of cliques, each of which is itself a list of nodes. If not specified, the list of all cliques will be computed, as by networkx.algorithms.clique.find_cliques().
Returns The size of a largest clique in G
Return type int
Notes
You should provide cliques if you have already computed the list of maximal cliques, in order to avoid an exponential time search for maximal cliques.
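A short hand-checked sketch of these two invariants on an odd cycle; the values follow from standard graph theory (an odd cycle needs three colors and is triangle-free):
>>> import grinpy as gp
>>> G = gp.cycle_graph(5)
>>> gp.chromatic_number(G)
3
>>> gp.clique_number(G)
2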
Disparity
Functions for computing disparity related invariants.
vertex_disparity(G, v) Return number of distinct degrees of neighbors of v.
closed_vertex_disparity(G, v) Return number of distinct degrees of nodes in the closed neighborhood of v.
disparity_sequence(G) Return the sequence of disparities of each node in the graph.
closed_disparity_sequence(G) Return the sequence of closed disparities of each node in the graph.
CW_disparity(G) Return the Caro-Wei disparity of the graph.
closed_CW_disparity(G) Return the closed Caro-Wei disparity of the graph.
inverse_disparity(G) Return the inverse disparity of the graph.
closed_inverse_disparity(G) Return the closed inverse disparity of the graph.
average_vertex_disparity(G) Return the average vertex disparity of the graph.
average_closed_vertex_disparity(G) Return the average closed vertex disparity of the graph.
k_disparity(G, k) Return the k-disparity of the graph.
closed_k_disparity(G, k) Return the closed k-disparity of the graph.
irregularity(G) Return the irregularity measure of the graph.

grinpy.invariants.disparity.vertex_disparity
grinpy.invariants.disparity.vertex_disparity(G, v)
Return number of distinct degrees of neighbors of v.
Parameters
• G (NetworkX graph) – An undirected graph.
• v (node) – A node in G.
Returns The number of distinct degrees of neighbors of v.
Return type int
See also: closed_vertex_disparity()

grinpy.invariants.disparity.closed_vertex_disparity
grinpy.invariants.disparity.closed_vertex_disparity(G, v)
Return number of distinct degrees of nodes in the closed neighborhood of v.
Parameters
• G (NetworkX graph) – An undirected graph.
• v (node) – A node in G.
Returns The number of distinct degrees of nodes in the closed neighborhood of v.
Return type int
See also: vertex_disparity()

grinpy.invariants.disparity.disparity_sequence
grinpy.invariants.disparity.disparity_sequence(G)
Return the sequence of disparities of each node in the graph.
Parameters G (NetworkX graph) – An undirected graph.
Returns The sequence of disparities of each node in the graph.
Return type list
See also: closed_disparity_sequence(), vertex_disparity()

grinpy.invariants.disparity.closed_disparity_sequence
grinpy.invariants.disparity.closed_disparity_sequence(G)
Return the sequence of closed disparities of each node in the graph.
Parameters G (NetworkX graph) – An undirected graph.
Returns The sequence of closed disparities of each node in the graph.
Return type list
See also: closed_vertex_disparity(), disparity_sequence()

grinpy.invariants.disparity.CW_disparity
grinpy.invariants.disparity.CW_disparity(G)
Return the Caro-Wei disparity of the graph.
The Caro-Wei disparity of a graph is defined as:

$$\sum_{v \in V(G)} \frac{1}{1 + \mathrm{disp}(v)}$$

where V(G) is the set of nodes of G and disp(v) is the disparity of the vertex v. This invariant is inspired by the Caro-Wei bound for the independence number of a graph, hence the name.
Parameters G (NetworkX graph) – An undirected graph.
Returns The Caro-Wei disparity of the graph.
Return type float
See also: closed_CW_disparity(), closed_inverse_disparity(), inverse_disparity()

grinpy.invariants.disparity.closed_CW_disparity
grinpy.invariants.disparity.closed_CW_disparity(G)
Return the closed Caro-Wei disparity of the graph.
The closed Caro-Wei disparity of a graph is defined as:

$$\sum_{v \in V(G)} \frac{1}{1 + \mathrm{cdisp}(v)}$$

where V(G) is the set of nodes of G and cdisp(v) is the closed disparity of the vertex v. This invariant is inspired by the Caro-Wei bound for the independence number of a graph, hence the name.
Parameters G (NetworkX graph) – An undirected graph.
Returns The closed Caro-Wei disparity of the graph.
Return type float
See also: CW_disparity(), closed_inverse_disparity(), inverse_disparity()

grinpy.invariants.disparity.inverse_disparity
grinpy.invariants.disparity.inverse_disparity(G)
Return the inverse disparity of the graph.
The inverse disparity of a graph is defined as:

$$\sum_{v \in V(G)} \frac{1}{\mathrm{disp}(v)}$$

where V(G) is the set of nodes of G and disp(v) is the disparity of the vertex v.
Parameters G (NetworkX graph) – An undirected graph.
Returns The inverse disparity of the graph.
Return type float
See also: CW_disparity(), closed_CW_disparity(), closed_inverse_disparity()
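To make the disparity notions above concrete, here is a hand-checked sketch on a path with four nodes (degrees 1, 2, 2, 1): each endpoint sees only one distinct neighbor degree, while each interior node sees two. The ordering returned by disparity_sequence is assumed here to follow node order:
>>> import grinpy as gp
>>> G = gp.path_graph(4)
>>> gp.vertex_disparity(G, 0)
1
>>> gp.vertex_disparity(G, 1)
2
>>> gp.disparity_sequence(G)
[1, 2, 2, 1]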
grinpy.invariants.disparity.closed_inverse_disparity
grinpy.invariants.disparity.closed_inverse_disparity(G)
Return the closed inverse disparity of the graph.
The closed inverse disparity of a graph is defined as:

$$\sum_{v \in V(G)} \frac{1}{\mathrm{cdisp}(v)}$$

where V(G) is the set of nodes of G and cdisp(v) is the closed disparity of the vertex v.
Parameters G (NetworkX graph) – An undirected graph.
Returns The closed inverse disparity of the graph.
Return type float
See also: CW_disparity(), closed_CW_disparity(), inverse_disparity()

grinpy.invariants.disparity.average_vertex_disparity
grinpy.invariants.disparity.average_vertex_disparity(G)
Return the average vertex disparity of the graph.
Parameters G (NetworkX graph) – An undirected graph.
Returns The average vertex disparity of the graph.
Return type int
See also: average_closed_vertex_disparity(), vertex_disparity()

grinpy.invariants.disparity.average_closed_vertex_disparity
grinpy.invariants.disparity.average_closed_vertex_disparity(G)
Return the average closed vertex disparity of the graph.
Parameters G (NetworkX graph) – An undirected graph.
Returns The average closed vertex disparity of the graph.
Return type int
See also: average_vertex_disparity(), closed_vertex_disparity()

grinpy.invariants.disparity.k_disparity
grinpy.invariants.disparity.k_disparity(G, k)
Return the k-disparity of the graph.
The k-disparity of a graph is defined as:

$$\sum_{i=0}^{k-1} (k - i)\, d_i$$

where k is a positive integer and d_i is the i-th element in the disparity sequence, ordered in weakly decreasing order.
Parameters
• G (NetworkX graph) – An undirected graph.
• k (int) – A positive integer.
Returns The k-disparity of the graph.
Return type float
See also: closed_k_disparity()

grinpy.invariants.disparity.closed_k_disparity
grinpy.invariants.disparity.closed_k_disparity(G, k)
Return the closed k-disparity of the graph.
The closed k-disparity of a graph is defined as:

$$\sum_{i=0}^{k-1} (k - i)\, d_i$$

where k is a positive integer and d_i is the i-th element in the closed disparity sequence, ordered in weakly decreasing order.
Parameters
• G (NetworkX graph) – An undirected graph.
• k (int) – A positive integer.
Returns The closed k-disparity of the graph.
Return type float
See also: k_disparity()

grinpy.invariants.disparity.irregularity
grinpy.invariants.disparity.irregularity(G)
Return the irregularity measure of the graph.
The irregularity of an n-vertex graph is defined as:

$$\sum_{i=0}^{n-1} (n - i)\, d_i$$

where d_i is the i-th element in the closed disparity sequence, ordered in weakly decreasing order.
Parameters G (NetworkX graph) – An undirected graph.
Returns The irregularity of the graph.
Return type float
See also: k_disparity()

Domination
Functions for computing dominating sets in a graph.
is_k_dominating_set(G, nbunch, k) Return whether or not the nodes in nbunch comprise a k-dominating set.
is_total_dominating_set(G, nbunch) Return whether or not the nodes in nbunch comprise a total dominating set.
min_k_dominating_set(G, k) Return a smallest k-dominating set in the graph.
min_dominating_set(G) Return a smallest dominating set in the graph.
min_total_dominating_set(G) Return a smallest total dominating set in the graph.
domination_number(G) Return the domination number of the graph.
k_domination_number(G, k) Return the k-domination number of the graph.
total_domination_number(G) Return the total domination number of the graph.

grinpy.invariants.domination.is_k_dominating_set
grinpy.invariants.domination.is_k_dominating_set(G, nbunch, k)
Return whether or not the nodes in nbunch comprise a k-dominating set.
A k-dominating set is a set of nodes with the property that every node in the graph is either in the set or adjacent to at least k nodes in the set. This is a generalization of the well known concept of a dominating set (take k = 1).
Parameters
• G (NetworkX graph) – An undirected graph.
• nbunch – A single node or iterable container of nodes.
• k (int) – A positive integer.
Returns True if the nodes in nbunch comprise a k-dominating set, and False otherwise.
Return type boolean

grinpy.invariants.domination.is_total_dominating_set
grinpy.invariants.domination.is_total_dominating_set(G, nbunch)
Return whether or not the nodes in nbunch comprise a total dominating set.
A *total dominating set* is a set of nodes with the property that every node in the graph is adjacent to some node in the set.
Parameters
• G (NetworkX graph) – An undirected graph.
• nbunch – A single node or iterable container of nodes.
Returns True if the nodes in nbunch comprise a total dominating set, and False otherwise.
Return type boolean

grinpy.invariants.domination.min_k_dominating_set
grinpy.invariants.domination.min_k_dominating_set(G, k)
Return a smallest k-dominating set in the graph.
The method to compute the set is brute force except that the subsets searched begin with those whose cardinality is equal to the sub-k-domination number of the graph, which was defined by Amos et al. and shown to be a tractable lower bound for the k-domination number.
Parameters
• G (NetworkX graph) – An undirected graph.
• k (int) – A positive integer.
Returns A list of nodes in a smallest k-dominating set in the graph.
Return type list
References
<NAME>, <NAME>, and <NAME>, The sub-k-domination number of a graph with applications to k-domination, arXiv preprint arXiv:1611.02379, (2016)

grinpy.invariants.domination.min_dominating_set
grinpy.invariants.domination.min_dominating_set(G)
Return a smallest dominating set in the graph.
The method to compute the set is brute force except that the subsets searched begin with those whose cardinality is equal to the sub-domination number of the graph, which was defined by Amos et al. and shown to be a tractable lower bound for the domination number.
Parameters G (NetworkX graph) – An undirected graph.
Returns A list of nodes in a smallest dominating set in the graph.
Return type list
See also: min_k_dominating_set()
References
<NAME>, <NAME>, <NAME> and <NAME>, The sub-k-domination number of a graph with applications to k-domination, arXiv preprint arXiv:1611.02379, (2016)

grinpy.invariants.domination.min_total_dominating_set
grinpy.invariants.domination.min_total_dominating_set(G)
Return a smallest total dominating set in the graph.
The method to compute the set is brute force except that the subsets searched begin with those whose cardinality is equal to the sub-total-domination number of the graph, which was defined by Davila and shown to be a tractable lower bound for the total domination number.
Parameters G (NetworkX graph) – An undirected graph.
Returns A list of nodes in a smallest total dominating set in the graph.
Return type list
References
<NAME>, A note on sub-total domination in graphs. arXiv preprint arXiv:1701.07811, (2017)

grinpy.invariants.domination.domination_number
grinpy.invariants.domination.domination_number(G)
Return the domination number of the graph.
The domination number of a graph is the cardinality of a smallest dominating set of nodes in the graph. The method to compute this number is modified brute force.
Parameters G (NetworkX graph) – An undirected graph.
Returns The domination number of the graph.
Return type int
See also: min_dominating_set(), k_domination_number()

grinpy.invariants.domination.k_domination_number
grinpy.invariants.domination.k_domination_number(G, k)
Return the k-domination number of the graph.
The k-domination number of a graph is the cardinality of a smallest k-dominating set of nodes in the graph. The method to compute this number is modified brute force.
Parameters
• G (NetworkX graph) – An undirected graph.
• k (int) – A positive integer.
Returns The k-domination number of the graph.
Return type int
See also: min_k_dominating_set(), domination_number()

grinpy.invariants.domination.total_domination_number
grinpy.invariants.domination.total_domination_number(G)
Return the total domination number of the graph.
The total domination number of a graph is the cardinality of a smallest total dominating set of nodes in the graph. The method to compute this number is modified brute force.
Parameters G (NetworkX graph) – An undirected graph.
Returns The total domination number of the graph.
Return type int

DSI
Functions for computing DSI style invariants.
sub_k_domination_number(G, k) Return the sub-k-domination number of the graph.
slater(G) Return the Slater invariant for the graph.
sub_total_domination_number(G) Return the sub-total domination number of the graph.
annihilation_number(G) Return the annihilation number of the graph.

grinpy.invariants.dsi.sub_k_domination_number
grinpy.invariants.dsi.sub_k_domination_number(G, k)
Return the sub-k-domination number of the graph.
The sub-k-domination number of a graph G with n nodes is defined as the smallest positive integer t such that the following relation holds:

$$t + \frac{1}{k} \sum_{i=1}^{t} d_i \ge n$$

where $d_1 \ge d_2 \ge \cdots \ge d_n$ is the degree sequence of the graph ordered in non-increasing order.
Parameters
• G (NetworkX graph) – An undirected graph.
• k (int) – A positive integer.
Returns The sub-k-domination number of a graph.
Return type int
See also: slater()
Examples
>>> import grinpy as gp
>>> G = gp.cycle_graph(4)
>>> gp.sub_k_domination_number(G, 1)
2
References
<NAME>, <NAME>, <NAME> and <NAME>, The sub-k-domination number of a graph with applications to k-domination, arXiv preprint arXiv:1611.02379, (2016)

grinpy.invariants.dsi.slater
grinpy.invariants.dsi.slater(G)
Return the Slater invariant for the graph.
The Slater invariant of a graph G is a lower bound for the domination number of a graph defined by:

$$sl(G) = \min\left\{t : t + \sum_{i=1}^{t} d_i \ge n\right\}$$

where $d_1 \ge d_2 \ge \cdots \ge d_n$ is the degree sequence of the graph ordered in non-increasing order and n is the order of G. Amos et al. rediscovered this invariant and generalized it into what is now known as the sub-k-domination number.
Parameters G (NetworkX graph) – An undirected graph.
Returns The Slater invariant for the graph.
Return type int
See also: sub_k_domination_number()
References
<NAME>, <NAME>, <NAME> and <NAME>, The sub-k-domination number of a graph with applications to k-domination, arXiv preprint arXiv:1611.02379, (2016)
<NAME>, Locating dominating sets and locating-dominating sets, Graph Theory, Combinatorics and Applications: Proceedings of the 7th Quadrennial International Conference on the Theory and Applications of Graphs, 2: 1073-1079 (1995)

grinpy.invariants.dsi.sub_total_domination_number
grinpy.invariants.dsi.sub_total_domination_number(G)
Return the sub-total domination number of the graph.
The sub-total domination number is defined as:

$$sub_t(G) = \min\left\{t : \sum_{i=1}^{t} d_i \ge n\right\}$$

where $d_1 \ge d_2 \ge \cdots \ge d_n$ is the degree sequence of the graph ordered in non-increasing order and n is the order of the graph.
This invariant was defined and investigated by <NAME>.
Parameters G (NetworkX graph) – An undirected graph.
Returns The sub-total domination number of the graph.
Return type int
References
<NAME>, A note on sub-total domination in graphs. arXiv preprint arXiv:1701.07811, (2017)

grinpy.invariants.dsi.annihilation_number
grinpy.invariants.dsi.annihilation_number(G)
Return the annihilation number of the graph.
The annihilation number of a graph G is defined as:

$$a(G) = \max\left\{t : \sum_{i=1}^{t} d_i \le m\right\}$$

where $d_1 \le d_2 \le \cdots \le d_n$ is the degree sequence of the graph ordered in non-decreasing order and m is the number of edges in G.
Parameters G (NetworkX graph) – An undirected graph.
Returns The annihilation number of the graph.
Return type int
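Assuming the formulas as reconstructed above, these bounds can be hand-checked on a path with four nodes (degree sequence 2, 2, 1, 1 in non-increasing order, 3 edges):
>>> import grinpy as gp
>>> G = gp.path_graph(4)
>>> gp.slater(G)
2
>>> gp.sub_total_domination_number(G)
2
>>> gp.annihilation_number(G)
2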
Independence

Functions for computing independence related invariants for a graph.

is_independent_set(G, nbunch) Return whether or not the nodes in nbunch comprise an independent set.
is_k_independent_set(G, nbunch, k) Return whether or not the nodes in nbunch comprise a k-independent set.
max_k_independent_set(G, k) Return a largest k-independent set of nodes in G.
max_independent_set(G) Return a largest independent set of nodes in G.
independence_number(G) Return the independence number of G.
k_independence_number(G, k) Return the k-independence number of G.

grinpy.invariants.independence.is_independent_set

grinpy.invariants.independence.is_independent_set(G, nbunch)
Return whether or not the nodes in nbunch comprise an independent set.
A set S of nodes in G is called an independent set if no two nodes in S are neighbors of one another.
Parameters
• G (NetworkX graph) – An undirected graph.
• nbunch – A single node or iterable container of nodes.
Returns True if the nodes in nbunch comprise an independent set, False otherwise.
Return type bool

See also: is_k_independent_set()

grinpy.invariants.independence.is_k_independent_set

grinpy.invariants.independence.is_k_independent_set(G, nbunch, k)
Return whether or not the nodes in nbunch comprise a k-independent set.
A set S of nodes in G is called a k-independent set if every node in S has at most k-1 neighbors in S. Notice that a 1-independent set is equivalent to an independent set.
Parameters
• G (NetworkX graph) – An undirected graph.
• nbunch – A single node or iterable container of nodes.
• k (int) – A positive integer.
Returns True if the nodes in nbunch comprise a k-independent set, False otherwise.
Return type bool

See also: is_independent_set()

grinpy.invariants.independence.max_k_independent_set

grinpy.invariants.independence.max_k_independent_set(G, k)
Return a largest k-independent set of nodes in G.
The method used is brute force, except when k=1. In this case, the search starts with subsets of G with cardinality equal to the annihilation number of G, which was shown by Pepper to be an upper bound for the independence number of a graph, and then continues checking smaller subsets until a maximum independent set is found.
Parameters
• G (NetworkX graph) – An undirected graph.
• k (int) – A positive integer.
Returns A list of nodes comprising a largest k-independent set in G.
Return type list

See also: max_independent_set()

grinpy.invariants.independence.max_independent_set

grinpy.invariants.independence.max_independent_set(G)
Return a largest independent set of nodes in G.
The method used is a modified brute force search. The search starts with subsets of G with cardinality equal to the annihilation number of G, which was shown by Pepper to be an upper bound for the independence number of a graph, and then continues checking smaller subsets until a maximum independent set is found.
Parameters G (NetworkX graph) – An undirected graph.
Returns A list of nodes comprising a largest independent set in G.
Return type list

See also: max_k_independent_set()

grinpy.invariants.independence.independence_number

grinpy.invariants.independence.independence_number(G)
Return the independence number of G.
The independence number of a graph is the cardinality of a largest independent set of nodes in the graph.
Parameters G (NetworkX graph) – An undirected graph.
Returns The independence number of G.
Return type int

See also: k_independence_number()

grinpy.invariants.independence.k_independence_number

grinpy.invariants.independence.k_independence_number(G, k)
Return the k-independence number of G.
The k-independence number of a graph is the cardinality of a largest k-independent set of nodes in the graph.
Parameters
• G (NetworkX graph) – An undirected graph.
• k (int) – A positive integer.
Returns The k-independence number of G.
Return type int

See also: independence_number()

Matching

Functions for computing matching related invariants for a graph.

max_matching(G) Return a maximum matching in G.
matching_number(G) Return the matching number of G.
min_maximal_matching(G) Return a smallest maximal matching in G.
min_maximal_matching_number(G) Return the minimum maximal matching number of G.

grinpy.invariants.matching.max_matching

grinpy.invariants.matching.max_matching(G)
Return a maximum matching in G.
A maximum matching is a largest set of edges such that no two edges in the set have a common endpoint.
Parameters G (NetworkX graph) – An undirected graph.
Returns A list of edges in a maximum matching.
Return type list

grinpy.invariants.matching.matching_number

grinpy.invariants.matching.matching_number(G)
Return the matching number of G.
The matching number of a graph G is the cardinality of a maximum matching in G.
Parameters G (NetworkX graph) – An undirected graph.
Returns The matching number of G.
Return type int

grinpy.invariants.matching.min_maximal_matching

grinpy.invariants.matching.min_maximal_matching(G)
Return a smallest maximal matching in G.
A maximal matching is a maximal set of edges such that no two edges in the set have a common endpoint.
Parameters G (NetworkX graph) – An undirected graph.
Returns A list of edges in a smallest maximal matching.
Return type list

grinpy.invariants.matching.min_maximal_matching_number

grinpy.invariants.matching.min_maximal_matching_number(G)
Return the minimum maximal matching number of G.
The minimum maximal matching number of a graph G is the cardinality of a smallest maximal matching in G.
Parameters G (NetworkX graph) – An undirected graph.
Returns The minimum maximal matching number of G.
Return type int
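Since GrinPy is an extension of NetworkX, a maximum matching can also be obtained directly from NetworkX's exact algorithm. The sketch below uses nx.max_weight_matching with maxcardinality=True, a standard NetworkX call shown as an equivalent computation rather than GrinPy's own code:

import networkx as nx

G = nx.path_graph(5)  # edges: 0-1, 1-2, 2-3, 3-4

# On an unweighted graph, maxcardinality=True yields a maximum-cardinality
# matching; recent NetworkX releases return it as a set of matched pairs.
M = nx.max_weight_matching(G, maxcardinality=True)
print(M)        # e.g. {(1, 0), (3, 2)}; order within pairs may vary
print(len(M))   # matching number of the path on 5 nodes is 2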
Power Domination

Functions for computing power domination related invariants of a graph.

is_power_dominating_set(G, nbunch) Return whether or not the nodes in nbunch comprise a power dominating set.
min_power_dominating_set(G) Return a smallest power dominating set of nodes in G.
power_domination_number(G) Return the power domination number of G.

grinpy.invariants.power_domination.is_power_dominating_set

grinpy.invariants.power_domination.is_power_dominating_set(G, nbunch)
Return whether or not the nodes in nbunch comprise a power dominating set.
Parameters
• G (NetworkX graph) – An undirected graph.
• nbunch – A single node or iterable container of nodes.
Returns True if the nodes in nbunch comprise a power dominating set, False otherwise.
Return type boolean

grinpy.invariants.power_domination.min_power_dominating_set

grinpy.invariants.power_domination.min_power_dominating_set(G)
Return a smallest power dominating set of nodes in G.
The method used to compute the set is brute force.
Parameters G (NetworkX graph) – An undirected graph.
Returns A list of nodes in a smallest power dominating set in G.
Return type list

grinpy.invariants.power_domination.power_domination_number

grinpy.invariants.power_domination.power_domination_number(G)
Return the power domination number of G.
Parameters G (NetworkX graph) – An undirected graph.
Returns The power domination number of G.
Return type int

Residue

Functions for computing the residue and related invariants.

residue(G) Return the residue of G.
k_residue(G, k) Return the k-residue of G.

grinpy.invariants.residue.residue

grinpy.invariants.residue.residue(G)
Return the residue of G.
The residue of a graph G is the number of zeros obtained in the final sequence of the Havel Hakimi process.
Parameters G (NetworkX graph) – An undirected graph.
Returns The residue of G.
Return type int

See also: k_residue(), havel_hakimi_process()

grinpy.invariants.residue.k_residue

grinpy.invariants.residue.k_residue(G, k)
Return the k-residue of G.
The k-residue of a graph G is defined as follows:

    R_k(G) = \frac{1}{k} \sum_{i=0}^{k-1} (k - i) f(i)

where f(i) is the frequency of i in the elimination sequence of the graph. The elimination sequence is the sequence of deletions made during the Havel Hakimi process together with the zeros obtained in the final step.
Parameters
• G (NetworkX graph) – An undirected graph.
• k (int) – A positive integer.
Returns The k-residue of G.
Return type float

See also: residue(), havel_hakimi_process(), elimination_sequence()
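The Havel-Hakimi process repeatedly removes the largest remaining degree d and decrements the next d entries, and the residue simply counts the zeros left at the end. A self-contained sketch of that process under the textbook definition, independent of GrinPy's havel_hakimi_process():

def residue_from_degrees(degrees):
    """Number of zeros left by the Havel-Hakimi elimination process.

    Assumes the input is a graphical degree sequence.
    """
    seq = sorted(degrees, reverse=True)
    while seq and seq[0] > 0:
        d = seq.pop(0)          # delete a vertex of largest degree d ...
        for i in range(d):      # ... and lower the next d degrees by one
            seq[i] -= 1
        seq.sort(reverse=True)
    return len(seq)             # every remaining entry is zero

# The complete graph K4 has degree sequence [3, 3, 3, 3]; its residue is 1.
print(residue_from_degrees([3, 3, 3, 3]))  # 1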
Zero Forcing

Functions for computing zero forcing related invariants of a graph.

is_k_forcing_vertex(G, v, nbunch, k) Return whether or not v can k-force relative to the set of nodes in nbunch.
is_k_forcing_active_set(G, nbunch, k) Return whether or not at least one node in nbunch can k-force.
is_k_forcing_set(G, nbunch, k) Return whether or not the nodes in nbunch comprise a k-forcing set in G.
min_k_forcing_set(G, k) Return a smallest k-forcing set in G.
k_forcing_number(G, k) Return the k-forcing number of G.
is_zero_forcing_vertex(G, v, nbunch) Return whether or not v can force relative to the set of nodes in nbunch.
is_zero_forcing_active_set(G, nbunch) Return whether or not at least one node in nbunch can force.
is_zero_forcing_set(G, nbunch) Return whether or not the nodes in nbunch comprise a zero forcing set in G.
min_zero_forcing_set(G) Return a smallest zero forcing set in G.
zero_forcing_number(G) Return the zero forcing number of G.

grinpy.invariants.zero_forcing.is_k_forcing_vertex

grinpy.invariants.zero_forcing.is_k_forcing_vertex(G, v, nbunch, k)
Return whether or not v can k-force relative to the set of nodes in nbunch.
Parameters
• G (NetworkX graph) – An undirected graph.
• v (node) – A single node in G.
• nbunch – A single node or iterable container of nodes.
• k (int) – A positive integer.
Returns True if v can k-force relative to the nodes in nbunch. False otherwise.
Return type boolean

grinpy.invariants.zero_forcing.is_k_forcing_active_set

grinpy.invariants.zero_forcing.is_k_forcing_active_set(G, nbunch, k)
Return whether or not at least one node in nbunch can k-force.
Parameters
• G (NetworkX graph) – An undirected graph.
• nbunch – A single node or iterable container of nodes.
• k (int) – A positive integer.
Returns True if at least one of the nodes in nbunch can k-force. False otherwise.
Return type boolean

grinpy.invariants.zero_forcing.is_k_forcing_set

grinpy.invariants.zero_forcing.is_k_forcing_set(G, nbunch, k)
Return whether or not the nodes in nbunch comprise a k-forcing set in G.
Parameters
• G (NetworkX graph) – An undirected graph.
• nbunch – A single node or iterable container of nodes.
• k (int) – A positive integer.
Returns True if the nodes in nbunch comprise a k-forcing set in G. False otherwise.
Return type boolean

grinpy.invariants.zero_forcing.min_k_forcing_set

grinpy.invariants.zero_forcing.min_k_forcing_set(G, k)
Return a smallest k-forcing set in G.
The method used to compute the set is brute force.
Parameters
• G (NetworkX graph) – An undirected graph.
• k (int) – A positive integer.
Returns A list of nodes in a smallest k-forcing set in G.
Return type list

grinpy.invariants.zero_forcing.k_forcing_number

grinpy.invariants.zero_forcing.k_forcing_number(G, k)
Return the k-forcing number of G.
The k-forcing number of a graph is the cardinality of a smallest k-forcing set in the graph.
Parameters
• G (NetworkX graph) – An undirected graph.
• k (int) – A positive integer.
Returns The k-forcing number of G.
Return type int
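The k-forcing color-change rule is: a colored vertex with at most k uncolored neighbors forces all of them to become colored, and a set is k-forcing if iterating the rule eventually colors the whole graph (zero forcing is the k = 1 case). A hedged sketch of that iteration, written against plain NetworkX rather than GrinPy's implementation:

import networkx as nx

def k_forces(G, S, k):
    """Iterate the k-forcing rule from the colored set S until it stalls."""
    colored = set(S)
    changed = True
    while changed:
        changed = False
        for v in list(colored):
            uncolored = [u for u in G[v] if u not in colored]
            # Rule: v forces when it has between 1 and k uncolored neighbors.
            if 0 < len(uncolored) <= k:
                colored.update(uncolored)
                changed = True
    return len(colored) == G.number_of_nodes()

G = nx.path_graph(4)          # 0-1-2-3
print(k_forces(G, {0}, 1))    # True: an endpoint zero-forces a path
print(k_forces(G, {1}, 1))    # False: vertex 1 has two uncolored neighbors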
grinpy.invariants.zero_forcing.is_zero_forcing_vertex

grinpy.invariants.zero_forcing.is_zero_forcing_vertex(G, v, nbunch)
Return whether or not v can force relative to the set of nodes in nbunch.
Parameters
• G (NetworkX graph) – An undirected graph.
• v (node) – A single node in G.
• nbunch – A single node or iterable container of nodes.
Returns True if v can force relative to the nodes in nbunch. False otherwise.
Return type boolean

grinpy.invariants.zero_forcing.is_zero_forcing_active_set

grinpy.invariants.zero_forcing.is_zero_forcing_active_set(G, nbunch)
Return whether or not at least one node in nbunch can force.
Parameters
• G (NetworkX graph) – An undirected graph.
• nbunch – A single node or iterable container of nodes.
Returns True if at least one of the nodes in nbunch can force. False otherwise.
Return type boolean

grinpy.invariants.zero_forcing.is_zero_forcing_set

grinpy.invariants.zero_forcing.is_zero_forcing_set(G, nbunch)
Return whether or not the nodes in nbunch comprise a zero forcing set in G.
Parameters
• G (NetworkX graph) – An undirected graph.
• nbunch – A single node or iterable container of nodes.
Returns True if the nodes in nbunch comprise a zero forcing set in G. False otherwise.
Return type boolean

grinpy.invariants.zero_forcing.min_zero_forcing_set

grinpy.invariants.zero_forcing.min_zero_forcing_set(G)
Return a smallest zero forcing set in G.
The method used to compute the set is brute force.
Parameters G (NetworkX graph) – An undirected graph.
Returns A list of nodes in a smallest zero forcing set in G.
Return type list

grinpy.invariants.zero_forcing.zero_forcing_number

grinpy.invariants.zero_forcing.zero_forcing_number(G)
Return the zero forcing number of G.
The zero forcing number of a graph is the cardinality of a smallest zero forcing set in the graph.
Parameters G (NetworkX graph) – An undirected graph.
Returns The zero forcing number of G.
Return type int

4.3 License

GrinPy is distributed with the 3-clause BSD license. As an extension of the NetworkX package, we list the pertinent copyright information as requested by the NetworkX authors.

GrinPy
------
Copyright (C) 2017, GrinPy Developers
<NAME> <<EMAIL>>
<NAME> <<EMAIL>>

NetworkX
--------
Copyright (C) 2004-2017, NetworkX Developers
<NAME> <<EMAIL>>
<NAME> <<EMAIL>>
<NAME> <<EMAIL>>

All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

* Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
* Neither the name of the NetworkX Developers nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Python Module Index

grinpy.functions.degree
grinpy.functions.neighborhoods
grinpy.invariants.chromatic
grinpy.invariants.clique
grinpy.invariants.disparity
grinpy.invariants.domination
grinpy.invariants.dsi
grinpy.invariants.independence
grinpy.invariants.matching
grinpy.invariants.power_domination
grinpy.invariants.residue
grinpy.invariants.zero_forcing
Package ‘hpfilter’

May 28, 2023

Type Package
Title The One- And Two-Sided Hodrick-Prescott Filter
Version 1.0.2
Author <NAME>
Maintainer <NAME> <<EMAIL>>
URL https://www.alexandrumonahov.eu.org/projects
Description Provides two functions that implement the one-sided and two-sided versions of the Hodrick-Prescott filter. The one-sided version is a Kalman filter-based implementation, whereas the two-sided version uses sparse matrices for improved efficiency. References: <NAME>., and <NAME>. (1997) <doi:10.2307/2953682> <NAME>. (2008) <doi:10.1111/j.1368-423X.2008.00230.x> <NAME>. (2010) <https://ideas.repec.org/c/dge/qmrbcd/181.html> For more references, see the vignette.
License CC BY-SA 4.0
Imports Matrix
Encoding UTF-8
LazyData true
RoxygenNote 7.2.3
Suggests knitr, rmarkdown
VignetteBuilder knitr
NeedsCompilation no
Depends R (>= 3.5.0)
Repository CRAN
Date/Publication 2023-05-28 16:40:02 UTC

R topics documented:
GDPEU, hp1, hp2

GDPEU Real Gross Domestic Product for European Union (28 countries)

Description
Units: Millions of Chained 2010 Euros, Seasonally Adjusted

Usage
data(GDPEU)

Format
A dataframe containing:
date the date of each observation
y the seasonally-adjusted real Gross Domestic Product for 28 European Union countries

Details
Frequency: Quarterly
Eurostat unit ID: CLV10_MNAC
Eurostat item ID: B1GQ
Eurostat country ID: EU28
Seasonally and calendar adjusted data.
For euro area member states, the national currency series are converted into euros using the irrevocably fixed exchange rate. This preserves the same growth rates as the previous national currency series. Both series coincide for years after accession to the euro area but differ for earlier years due to market exchange rate movements.
European Union (28 countries): Belgium, Denmark, Germany, Ireland, Greece, Spain, France, Italy, Luxembourg, the Netherlands, Portugal, the United Kingdom, Austria, Finland, Sweden, Cyprus, the Czech Republic, Estonia, Hungary, Latvia, Lithuania, Malta, Poland, Slovenia, Slovakia, Bulgaria, Romania, and Croatia.
Copyright, European Union, http://ec.europa.eu, 1995-2016. Complete terms of use are available at http://ec.europa.eu/geninfo/legal_notices_en.htm#copyright

Source
Data retrieved from FRED, Federal Reserve Bank of St. Louis.

References
Eurostat, Real Gross Domestic Product for European Union (28 countries) [CLVMNACSCAB1GQEU28] (Eurostat)

Examples
# Load the dataset
data(GDPEU)
# Plot the y series
plot(GDPEU$date, GDPEU$y, type="l")
# Remove the date column if not needed and store in a df object
df <- GDPEU[,-1]
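GDPEU is quarterly, for which the conventional smoothing parameter is lambda = 1600; Ravn and Uhlig (2002), cited in the hp1 entry below, derive the frequency adjustments commonly rounded to 6.25 for annual and 129,600 for monthly data. A short sketch of applying both filters to this dataset (it assumes, as in the example above, that the GDP column of GDPEU is named y):

library(hpfilter)

data(GDPEU)
df <- GDPEU[, -1, drop = FALSE]  # drop the date column, keep a one-column data frame

# Conventional smoothing parameters by frequency (Ravn and Uhlig, 2002):
# roughly 6.25 annual, 1600 quarterly, 129600 monthly. GDPEU is quarterly.
trend2 <- hp2(df, lambda = 1600)               # two-sided filter
trend1 <- hp1(df, lambda = 1600, discard = 8)  # one-sided, first 8 quarters dropped

cycle2 <- df - trend2
plot(GDPEU$date, df[[1]], type = "l")
lines(GDPEU$date, trend2[[1]], col = "red")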
hp1 One-Sided HP Filter

Description
hp1 applies a one-sided Hodrick-Prescott filter derived using the Kalman filter to separate a time series into trend and cyclical components. The smoothing parameter should reflect the cyclical duration or frequency of the data.

Usage
hp1(y, lambda = 1600, x_user = NA, P_user = NA, discard = 0)

Arguments
y a dataframe of size Txn, where "T" is the number of observations for each variable (number of rows) and "n" the number of variables in the dataframe (number of columns).
lambda the smoothing parameter; a numeric scalar which takes the default value of 1600, if unspecified by the user.
x_user user-defined initial values of the state estimate for each variable in y. Takes the form of a 2xn matrix. Since the underlying state vector is 2x1, two values are needed for each variable in y. By default: if no values are provided, backwards extrapolations based on the first two observations are used.
P_user a structural array with n elements, each of which is a 2x2 matrix of initial MSE estimates for each variable in y. By default: if no values are provided, a matrix with relatively large variances is used.
discard the number of discard periods, expressed as a numeric scalar. The user-specified number of values will be discarded from the start of the sample, resulting in output matrices of size (T-discard)xn. By default: if no value is provided, it is set to 0.

Details
The length of the time series should be greater than four and the value of the smoothing parameter greater than zero for the code to function. Of course, having a sufficiently long time series is paramount to achieving meaningful results.

Value
a (T-discard)xn dataframe, containing the trend data

Author(s)
<NAME>, <https://www.alexandrumonahov.eu.org/>

References
<NAME>. (2019). Miscellaneous Time Series Filters ’mFilter’. CRAN R Package Library.
<NAME>., and <NAME>. (2018). Why You Should Use the Hodrick-Prescott Filter - at Least to Generate Credit Gaps. BIS Working Paper No. 744.
Eurostat (2023), Real Gross Domestic Product for European Union (28 countries) [CLVMNACSCAB1GQEU28], National Accounts - GDP.
<NAME>. (2017). Why You Should Never Use the Hodrick-Prescott Filter. Working Paper Series. National Bureau of Economic Research, May 2017.
<NAME>., and <NAME>. (1997). Postwar U.S. Business Cycles: An Empirical Investigation. Journal of Money, Credit, and Banking 29: 1-16.
<NAME>. (2004). "Hodrick-Prescott Filter". Notes, Auburn University.
<NAME>. (2008). Exact formulas for the Hodrick-Prescott Filter. Econometrics Journal. 11. 209-217.
<NAME>. (2010). Matlab code for one-sided HP-filters. QM&RBC Codes 181, Quantitative Macroeconomics & Real Business Cycles.
<NAME>., and <NAME>. (2002). On adjusting the Hodrick-Prescott filter for the frequency of observations, The Review of Economics and Statistics 2002; 84 (2): 371-376.
<NAME>. (2021). neverhpfilter: An Alternative to the Hodrick-Prescott Filter. CRAN R Package Library.

See Also
[hp2()]

Examples
# Generate the data and plot it
set.seed(10)
y <- as.data.frame(rev(diffinv(rnorm(100)))[1:100])+30
colnames(y) <- "gdp"
plot(y$gdp, type="l")
# Apply the HP filter to the data
ytrend = hp1(y)
ycycle = y - ytrend
# Plot the three resulting series
plot(y$gdp, type="l", col="black", lty=1, ylim=c(-10,30))
lines(ytrend$gdp, col="#066462")
polygon(c(1, seq(ycycle$gdp), length(ycycle$gdp)), c(0, ycycle$gdp, 0), col = "#E0F2F1")
legend("bottom", horiz=TRUE, cex=0.75, c("y", "ytrend", "ycycle"), lty = 1, col = c("black", "#066462", "#75bfbd"))
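For reference, the two-sided filter of Hodrick and Prescott (1997) chooses the trend by penalized least squares. The display below states the standard objective, reconstructed from the cited literature rather than transcribed from the package source:

\min_{\tau_1,\dots,\tau_T} \; \sum_{t=1}^{T} (y_t - \tau_t)^2 \; + \; \lambda \sum_{t=2}^{T-1} \bigl[ (\tau_{t+1} - \tau_t) - (\tau_t - \tau_{t-1}) \bigr]^2

hp2 solves this problem for the full sample at once, which is why it can exploit the sparse band structure of the penalty matrix, while hp1 produces the trend at each date t using only observations up to t via the Kalman recursion described above.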
hp2 Two-Sided HP Filter

Description
hp2 applies a standard two-sided Hodrick-Prescott filter using sparse matrices to help reduce the compute time for large datasets. The smoothing parameter should reflect the cyclical duration or frequency of the data.

Usage
hp2(y, lambda = 1600)

Arguments
y a dataframe of size Txn, where "T" is the number of observations for each variable (number of rows) and "n" the number of variables in the dataframe (number of columns).
lambda the smoothing parameter; a numeric scalar which takes the default value of 1600, if unspecified by the user.

Details
The length of the time series should be greater than four and the value of the smoothing parameter greater than zero for the code to function. Of course, having a sufficiently long time series is paramount to achieving meaningful results.

Value
a Txn dataframe, containing the trend data

Author(s)
<NAME>, <https://www.alexandrumonahov.eu.org/>

References
<NAME>. (2019). Miscellaneous Time Series Filters ’mFilter’. CRAN R Package Library.
<NAME>., and <NAME>. (2018). Why You Should Use the Hodrick-Prescott Filter - at Least to Generate Credit Gaps. BIS Working Paper No. 744.
Eurostat (2023), Real Gross Domestic Product for European Union (28 countries) [CLVMNACSCAB1GQEU28], National Accounts - GDP.
<NAME>. (2017). Why You Should Never Use the Hodrick-Prescott Filter. Working Paper Series. National Bureau of Economic Research, May 2017.
<NAME>., and <NAME>. (1997). Postwar U.S. Business Cycles: An Empirical Investigation. Journal of Money, Credit, and Banking 29: 1-16.
<NAME>. (2004). "Hodrick-Prescott Filter". Notes, Auburn University.
<NAME>. (2008). Exact formulas for the Hodrick-Prescott Filter. Econometrics Journal. 11. 209-217.
<NAME>. (2010). Matlab code for one-sided HP-filters. QM&RBC Codes 181, Quantitative Macroeconomics & Real Business Cycles.
<NAME>., and <NAME>. (2002). On adjusting the Hodrick-Prescott filter for the frequency of observations, The Review of Economics and Statistics 2002; 84 (2): 371-376.
<NAME>. (2021). neverhpfilter: An Alternative to the Hodrick-Prescott Filter. CRAN R Package Library.

See Also
[hp1()]

Examples
# Generate the data and plot it
set.seed(10)
y <- as.data.frame(rev(diffinv(rnorm(100)))[1:100])+30
colnames(y) <- "gdp"
plot(y$gdp, type="l")
# Apply the HP filter to the data
ytrend = hp2(y)
ycycle = y - ytrend
# Plot the three resulting series
plot(y$gdp, type="l", col="black", lty=1, ylim=c(-10,30))
lines(ytrend$gdp, col="#066462")
polygon(c(1, seq(ycycle$gdp), length(ycycle$gdp)), c(0, ycycle$gdp, 0), col = "#E0F2F1")
legend("bottom", horiz=TRUE, cex=0.75, c("y", "ytrend", "ycycle"), lty = 1, col = c("black", "#066462", "#75bfbd"))
Package ‘biclustermd’

October 12, 2022

Type Package
Title Biclustering with Missing Data
Version 0.2.3
Maintainer <NAME> <<EMAIL>>
Description Biclustering is a statistical learning technique that simultaneously partitions and clusters rows and columns of a data matrix. Since the solution space of biclustering is infeasible to completely search with current computational mechanisms, this package uses a greedy heuristic. The algorithm featured in this package is, to the best of our knowledge, the first biclustering algorithm to work on data with missing values. <NAME>., <NAME>., <NAME>., <NAME>., and <NAME>. (2020) Biclustering with Missing Data. Information Sciences, 510, 304–316.
URL https://github.com/jreisner/biclustermd
BugReports https://github.com/jreisner/biclustermd/issues
Depends ggplot2 (>= 3.0.0), R (>= 3.5.0), tidyr (>= 0.8.1)
Imports biclust (>= 2.0.1), doParallel (>= 1.0.14), dplyr (>= 0.7.6), foreach (>= 1.4.4), magrittr (>= 1.5), nycflights13 (>= 1.0.0), phyclust (>= 0.1-24)
License MIT + file LICENSE
Encoding UTF-8
LazyData true
RoxygenNote 7.1.1
Suggests knitr, rmarkdown, testthat
VignetteBuilder knitr
NeedsCompilation no
Author <NAME> [cre, aut, cph], <NAME> [ctb, cph], <NAME> [ctb, cph]
Repository CRAN
Date/Publication 2021-06-17 15:10:06 UTC

R topics documented:
biclustermd-package, as.Biclust, autoplot.biclustermd, autoplot.biclustermd_sim, autoplot.biclustermd_sse, biclustermd, binary_vector_gen, cell_heatmap, cell_mse, cluster_iteration_sum_sse, col.names, col.names.biclustermd, col_cluster_names, compare_biclusters, fill_empties_P, fill_empties_Q, format_partition, gather.biclustermd, jaccard_similarity, mse_heatmap, partition_gen, partition_gen_by_p, part_matrix_to_vector, position_finder, print.biclustermd, reorder_biclust, rep_biclustermd, results_heatmap, row.names.biclustermd, row_cluster_names, runtimes, synthetic, tune_biclustermd

biclustermd-package biclustermd: A package to bicluster data with missing values

Description
The main function is biclustermd(). Results can be plotted with autoplot() and as.Biclust() converts results to Biclust objects.

as.Biclust Convert a biclustermd object to a Biclust object

Description
Convert a biclustermd object to a Biclust object

Usage
as.Biclust(object)

Arguments
object The biclustermd object to convert to a Biclust object

Value
Returns an object of class Biclust.

Examples
data("synthetic")
bc <- biclustermd(synthetic, col_clusters = 3, row_clusters = 2,
                  miss_val = mean(synthetic, na.rm = TRUE),
                  miss_val_sd = sd(synthetic, na.rm = TRUE),
                  col_min_num = 2, row_min_num = 2,
                  col_num_to_move = 1, row_num_to_move = 1, max.iter = 10)
bc
as.Biclust(bc)

# biclust::drawHeatmap won't work since it doesn't exclude NAs
## Not run: biclust::drawHeatmap(synthetic, as.Biclust(bc), 6)

# bicluster 6 is in the top right-hand corner here:
autoplot(bc)
# compare with biclust::drawHeatmap2:
biclust::drawHeatmap2(synthetic, as.Biclust(bc), 6)
# bicluster 3 is in the bottom right-hand corner here:
autoplot(bc)
# compare with biclust::drawHeatmap2:
biclust::drawHeatmap2(synthetic, as.Biclust(bc), 3)
autoplot.biclustermd Make a heatmap of sparse biclustering results

Description
Make a heatmap of sparse biclustering results

Usage
## S3 method for class 'biclustermd'
autoplot(object, axis.text = NULL, reorder = FALSE, transform_colors = FALSE, c = 1/6, cell_alpha = 1/5, col_clusts = NULL, row_clusts = NULL, ...)

Arguments
object An object of class "biclustermd".
axis.text A character vector specifying for which axes text should be drawn. Can be any of "x", "col" for columns, "y", "row" for rows, or any combination of the four. By default this is NULL; no axis text is drawn.
reorder A logical. If TRUE, heatmap will be sorted according to the cell-average matrix, A.
transform_colors If TRUE, the data are scaled by c and run through a standard normal CDF before plotting. If FALSE (default), raw data values are used in the heat map.
c Value to scale the data by before running it through a standard normal CDF. Default is 1/6.
cell_alpha A scalar defining the transparency of shading over a cell; by default this equals 1/5. The color corresponds to the cell mean.
col_clusts A vector of column cluster indices to display. If NULL (default), all are displayed.
row_clusts A vector of row cluster indices to display. If NULL (default), all are displayed.
... Arguments to be passed to geom_vline() and geom_hline().

Value
An object of class ggplot.

Examples
data("synthetic")
bc <- biclustermd(synthetic, col_clusters = 3, row_clusters = 2,
                  miss_val = mean(synthetic, na.rm = TRUE),
                  miss_val_sd = sd(synthetic, na.rm = TRUE),
                  col_min_num = 2, row_min_num = 2,
                  col_num_to_move = 1, row_num_to_move = 1, max.iter = 10)
bc
autoplot(bc)
autoplot(bc, axis.text = c('x', 'row')) +
  ggplot2::scale_fill_distiller(palette = "Spectral", na.value = "white")
# Complete shading
autoplot(bc, axis.text = c('col', 'row'), cell_alpha = 1)
# Transformed values and no shading
autoplot(bc, transform_colors = TRUE, c = 1/20, cell_alpha = 0)
# Focus on row cluster 1 and column cluster 2
autoplot(bc, col_clusts = 2, row_clusts = 1)

autoplot.biclustermd_sim Plot similarity measures between two consecutive biclusterings.

Description
Creates a ggplot of the three similarity measures used in biclustermd::bicluster() for both row and column dimensions.

Usage
## S3 method for class 'biclustermd_sim'
autoplot(object, similarity = NULL, facet = TRUE, ncol = NULL, ...)

Arguments
object Object of class "biclustermd_sim"
similarity A character vector indicating which similarity measure to plot. Can be any of "Rand", "HA", "Jaccard", or "used". If "used", plot only the measure used as the stopping condition in the algorithm. By default (NULL) all three are plotted. When plotted, the used measure will have an asterisk.
facet If TRUE (default), each similarity measure will be in its own plot. If FALSE, all three similarity measures for rows and columns are given in one plot.
ncol If faceting, the number of columns to arrange the plots in.
... Arguments to pass to ggplot2::geom_point()

Value
A ggplot object.

Examples
data("synthetic")
bc <- biclustermd(synthetic, col_clusters = 3, row_clusters = 2,
                  miss_val = mean(synthetic, na.rm = TRUE),
                  miss_val_sd = sd(synthetic, na.rm = TRUE),
                  col_min_num = 2, row_min_num = 2,
                  col_num_to_move = 1, row_num_to_move = 1, max.iter = 10)
bc
autoplot(bc$Similarities, ncol = 1)

autoplot.biclustermd_sse Plot sums of squared errors (SSEs) across consecutive biclustering iterations.

Description
Creates a ggplot of the decrease in SSE recorded in biclustermd::bicluster().

Usage
## S3 method for class 'biclustermd_sse'
autoplot(object, ...)

Arguments
object Object of class "biclustermd_sse" with columns "Iteration" and "SSE"
... Arguments to pass to ggplot2::geom_point()

Value
A ggplot object.
Examples data("synthetic") bc <- biclustermd(synthetic, col_clusters = 3, row_clusters = 2, miss_val = mean(synthetic, na.rm = TRUE), miss_val_sd = sd(synthetic, na.rm = TRUE), col_min_num = 2, row_min_num = 2, col_num_to_move = 1, row_num_to_move = 1, max.iter = 10) bc autoplot(bc$SSE) biclustermd Bicluster data with non-random missing values Description Bicluster data with non-random missing values Usage biclustermd( data, row_clusters = floor(sqrt(nrow(data))), col_clusters = floor(sqrt(ncol(data))), miss_val = mean(data, na.rm = TRUE), miss_val_sd = 1, similarity = "Rand", row_min_num = floor(nrow(data)/row_clusters), col_min_num = floor(ncol(data)/col_clusters), row_num_to_move = 1, col_num_to_move = 1, row_shuffles = 1, col_shuffles = 1, max.iter = 100, verbose = FALSE ) Arguments data Dataset to bicluster. Must to be a data matrix with only numbers and missing values in the data set. It should have row names and column names. row_clusters The number of clusters to partition the rows into. The default is floor(sqrt(nrow(data))). col_clusters The number of clusters to partition the columns into. The default is floor(sqrt(ncol(data))). miss_val Value or function to put in empty cells of the prototype matrix. If a value, a random normal variable with sd = miss_val_sd is used each iteration. By default, this equals the mean of data. miss_val_sd Standard deviation of the normal distribution miss_val follows if miss_val is a number. By default this equals 1. similarity The metric used to compare two successive clusterings. Can be "Rand" (default), "HA" for the Hubert and Arabie adjusted Rand index or "Jaccard". See RRand for details. row_min_num Minimum row prototype size in order to be eligible to be chosen when filling an empty row prototype. Default is floor(nrow(data) / row_clusters). col_min_num Minimum column prototype size in order to be eligible to be chosen when filling an empty row prototype. Default is floor(ncol(data) / col_clusters). row_num_to_move Number of rows to remove from the sampled prototype to put in the empty row prototype. Default is 1. col_num_to_move Number of columns to remove from the sampled prototype to put in the empty column prototype. Default is 1. row_shuffles Number of times to shuffle rows in each iteration. Default is 1. col_shuffles Number of times to shuffle columns in each iteration. Default is 1. max.iter Maximum number of iterations to let the algorithm run for. verbose Logical. If TRUE, will report progress. Value A list of class biclustermd: params a list of all arguments passed to the function, including defaults. data the inputted two way table of data. P0 the initial column partition matrix. Q0 the initial row partition matrix. InitialSSE the SSE of the original partitioning. P the final column partition matrix. Q the final row partition matrix. SSE a matrix of class biclustermd_sse detailing the SSE recorded at the end of each iteration. Similarities a data frame of class biclustermd_sim detailing the value of row and column similarity measures recorded at the end of each iteration. Contains information for all three similarity measures. This carries an attribute "used" which provides the similarity measure used as the stopping condition for the algorithm. iteration the number of iterations the algorithm ran for, whether max.iter was reached or convergence was achieved. A the final prototype matrix which gives the average of each bicluster. References <NAME>., <NAME>., <NAME>., <NAME>., and <NAME>. (2020) Biclustering with Missing Data. 
References
<NAME>., <NAME>., <NAME>., <NAME>., and <NAME>. (2020) Biclustering with Missing Data. Information Sciences, 510, 304–316.

See Also
rep_biclustermd, tune_biclustermd

Examples
data("synthetic")
# default parameters
bc <- biclustermd(synthetic)
bc
autoplot(bc)
# providing the true number of row and column clusters
bc <- biclustermd(synthetic, col_clusters = 3, row_clusters = 2)
bc
autoplot(bc)
# an example with the nycflights13::flights dataset
library(nycflights13)
data("flights")
library(dplyr)
flights_bcd <- flights %>%
  select(month, dest, arr_delay)
flights_bcd <- flights_bcd %>%
  group_by(month, dest) %>%
  summarise(mean_arr_delay = mean(arr_delay, na.rm = TRUE)) %>%
  spread(dest, mean_arr_delay) %>%
  as.data.frame()
rownames(flights_bcd) <- flights_bcd$month
flights_bcd <- as.matrix(flights_bcd[, -1])
flights_bc <- biclustermd(data = flights_bcd, col_clusters = 6, row_clusters = 4,
                          row_min_num = 3, col_min_num = 5, max.iter = 20, verbose = TRUE)
flights_bc

binary_vector_gen Make a binary vector with all values equal to zero except for one

Description
Make a binary vector with all values equal to zero except for one

Usage
binary_vector_gen(n, i)

Arguments
n Desired vector length.
i Index whose value is one.

Value
A vector

cell_heatmap Make a heat map of bicluster cell sizes.

Description
Make a heat map of bicluster cell sizes.

Usage
cell_heatmap(x, ...)

Arguments
x An object of class biclustermd.
... Arguments to pass to geom_tile()

Examples
data("synthetic")
bc <- biclustermd(synthetic, col_clusters = 3, row_clusters = 2,
                  miss_val = mean(synthetic, na.rm = TRUE),
                  miss_val_sd = sd(synthetic, na.rm = TRUE),
                  col_min_num = 2, row_min_num = 2,
                  col_num_to_move = 1, row_num_to_move = 1, max.iter = 10)
cell_heatmap(bc)
cell_heatmap(bc) + ggplot2::scale_fill_viridis_c()

cell_mse Make a data frame containing the MSE for each bicluster cell

Description
Make a data frame containing the MSE for each bicluster cell

Usage
cell_mse(x)

Arguments
x An object of class biclustermd.

Value
A data frame giving the row cluster, column cluster, the number of data points in each row and column cluster, the number of data points missing in the cell, and the cell MSE.

Examples
data("synthetic")
bc <- biclustermd(synthetic, col_clusters = 3, row_clusters = 2,
                  miss_val = mean(synthetic, na.rm = TRUE),
                  miss_val_sd = sd(synthetic, na.rm = TRUE),
                  col_min_num = 2, row_min_num = 2,
                  col_num_to_move = 1, row_num_to_move = 1, max.iter = 10)
cell_mse(bc)

cluster_iteration_sum_sse Calculate the sum cluster SSE in each iteration

Description
Calculate the sum cluster SSE in each iteration

Usage
cluster_iteration_sum_sse(data, P, Q)

Arguments
data The data being biclustered. Must be a data matrix containing only numbers and missing values. It should have row names and column names.
P Matrix for column prototypes.
Q Matrix for row prototypes.

Value
The SSE for the parameters specified.

col.names A generic to gather column names

Description
A generic to gather column names

Usage
col.names(x)

Arguments
x an object to retrieve column names from

col.names.biclustermd Get data matrix column names and their corresponding column cluster membership

Description
Get data matrix column names and their corresponding column cluster membership

Usage
## S3 method for class 'biclustermd'
col.names(x)

Arguments
x an object of class biclustermd

Value
a data frame with column names of the shuffled matrix and corresponding column cluster names.
Examples data("synthetic") # default parameters bc <- biclustermd(synthetic) bc col.names(bc) # this is a simplified version of the output for gather(bc): library(dplyr) gather(bc) %>% distinct(col_cluster, col_name) col_cluster_names Get column names in each column cluster Description Get column names in each column cluster Usage col_cluster_names(x, data) Arguments x Biclustering object to extract column cluster designation from data Data that contains the column names Value A data frame with two columns: cluster corresponds to the column cluster and name gives the column names in each cluster. Examples data("synthetic") rownames(synthetic) <- letters[1:nrow(synthetic)] colnames(synthetic) <- letters[1:ncol(synthetic)] bc <- biclustermd(synthetic, col_clusters = 3, row_clusters = 2, miss_val = mean(synthetic, na.rm = TRUE), miss_val_sd = sd(synthetic, na.rm = TRUE), col_min_num = 2, row_min_num = 2, col_num_to_move = 1, row_num_to_move = 1, max.iter = 10) bc compare_biclusters Compare two biclusterings or a pair of partition matrices Description Compare two biclusterings or a pair of partition matrices Usage compare_biclusters(bc1, bc2) Arguments bc1 the first biclustering or partition matrix. Must be either of class biclustermd or matrix. bc2 the second biclustering or partition matrix. Must be either of class biclustermd or matrix. Value If comparing a pair of biclusterings, a list containing the column similarity indices and the row similarity indices, in that order. If a pair of matrices, a vector of similarity indices. Examples data("synthetic") bc <- biclustermd(synthetic, col_clusters = 3, row_clusters = 2) bc2 <- biclustermd(synthetic, col_clusters = 3, row_clusters = 2) # compare the two biclusterings compare_biclusters(bc, bc2) # determine the similarity between initial and final row clusterings compare_biclusters(bc$Q0, bc$Q) fill_empties_P Randomly select a column prototype to fill an empty column prototype with Description Randomly select a column prototype to fill an empty column prototype with Usage fill_empties_P(data, obj, col_min_num = 10, col_num_to_move = 5) Arguments data The data being biclustered. Must to be a data matrix with only numbers and missing values in the data set. It should have row names and column names. obj A matrix for column clusters, typically named P. col_min_num Minimum column prototype size in order to be eligible to be chosen when filling an empty column prototype. Default is 10. col_num_to_move Number of columns to remove from the sampled prototype to put in the empty column prototype. Default is 5. Value A matrix for column clusters, i.e., a P matrix. fill_empties_Q Randomly select a row prototype to fill an empty row prototype with Description Randomly select a row prototype to fill an empty row prototype with Usage fill_empties_Q(data, obj, row_min_num = 10, row_num_to_move = 5) Arguments data The data being biclustered. Must to be a data matrix with only numbers and missing values in the data set. It should have row names and column names. obj A matrix for row clusters, typically named Q row_min_num Minimum row prototype size in order to be eligible to be chosen when filling an empty row prototype. Default is 10. row_num_to_move Number of rows to remove from the sampled prototype to put in the empty row prototype. Default is 5. Value A matrix for row clusters, i.e., a Q matrix. 
format_partition Format a partition matrix

Description
Formats a partition matrix so that subsets in a partition will be ordered by the value of the smallest in each subset

Usage
format_partition(P1)

Arguments
P1 A partition matrix.

Value
A formatted partition matrix.

gather.biclustermd Gather a biclustermd object

Description
Gather a biclustermd object

Usage
## S3 method for class 'biclustermd'
gather(data, key = NULL, value = NULL, ..., na.rm = FALSE, convert = FALSE, factor_key = FALSE)

Arguments
data a biclustermd object to gather.
key unused; included for consistency with tidyr generic
value unused; included for consistency with tidyr generic
... unused; included for consistency with tidyr generic
na.rm unused; included for consistency with tidyr generic
convert unused; included for consistency with tidyr generic
factor_key unused; included for consistency with tidyr generic

Value
A data frame containing the row names and column names of both the two-way table of data biclustered and the cell-average matrix.

Examples
data("synthetic")
bc <- biclustermd(synthetic, col_clusters = 3, row_clusters = 2,
                  miss_val = mean(synthetic, na.rm = TRUE),
                  miss_val_sd = sd(synthetic, na.rm = TRUE),
                  col_min_num = 2, row_min_num = 2,
                  col_num_to_move = 1, row_num_to_move = 1, max.iter = 10)
gather(bc)
# bicluster 6 is in the top right-hand corner here:
autoplot(bc)
# bicluster 3 is in the bottom right-hand corner here:
autoplot(bc)

jaccard_similarity Compute the Jaccard similarity coefficient for two clusterings

Description
Compute the Jaccard similarity coefficient for two clusterings

Usage
jaccard_similarity(clus1, clus2)

Arguments
clus1 vector giving the first set of clusters
clus2 vector giving the second set of clusters

Value
a numeric

References
<NAME>. and <NAME>. (1986) A study of the comparability of external criteria for hierarchical cluster analysis. Multivariate Behavioral Research, 21, 441-458.
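For clusterings, the Jaccard coefficient is usually defined over pairs of objects: among all pairs placed together in at least one of the two clusterings, the fraction placed together in both. A small self-contained sketch of that pair-counting definition, following the standard formula studied by Milligan and Cooper (1986) rather than the package's exact implementation:

jaccard_pairs <- function(clus1, clus2) {
  n <- length(clus1)
  # Indicator, for each unordered pair, of being co-clustered in each clustering
  together1 <- outer(clus1, clus1, "==")[lower.tri(diag(n))]
  together2 <- outer(clus2, clus2, "==")[lower.tri(diag(n))]
  # pairs co-clustered in both / pairs co-clustered in at least one
  sum(together1 & together2) / sum(together1 | together2)
}

jaccard_pairs(c(1, 1, 2, 2), c(1, 1, 2, 2))  # 1: identical clusterings
jaccard_pairs(c(1, 1, 2, 2), c(1, 2, 1, 2))  # 0: no co-clustered pair agrees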
mse_heatmap Make a heatmap of cell MSEs

Description
Make a heatmap of cell MSEs

Usage
mse_heatmap(x, ...)

Arguments
x An object of class biclustermd.
... Arguments to pass to geom_tile()

Value
A ggplot object.

Examples
data("synthetic")
bc <- biclustermd(synthetic, col_clusters = 3, row_clusters = 2,
                  miss_val = mean(synthetic, na.rm = TRUE),
                  miss_val_sd = sd(synthetic, na.rm = TRUE),
                  col_min_num = 2, row_min_num = 2,
                  col_num_to_move = 1, row_num_to_move = 1, max.iter = 10)
mse_heatmap(bc)
mse_heatmap(bc) + ggplot2::scale_fill_viridis_c()

partition_gen Generate an initial, random partition matrix with N objects into K subsets/groups.

Description
This function is used to randomly generate a partition matrix and assign rows or columns to prototypes. It must be the case that N > K.

Usage
partition_gen(N, K)

Arguments
N Number of objects/rows in a partition matrix
K Desired number of partitions

Value
A partition matrix.

partition_gen_by_p Create a partition matrix with a partition vector p

Description
Create a partition matrix with a partition vector p

Usage
partition_gen_by_p(N, K, p)

Arguments
N Rows in a partition matrix
K Number of prototypes to create
p Integer vector containing the cluster each row in a partition matrix is to be assigned to.

Value
A partition matrix.

part_matrix_to_vector Convert a partition matrix to a vector

Description
For each row in a partition matrix, this function gets the column index for which the row is equal to one. That is, for row i, this function returns the index of the row entry that is equal to one.

Usage
part_matrix_to_vector(P0)

Arguments
P0 A partition matrix

Value
An integer vector

position_finder Find the index of the first nonzero value in a vector

Description
Find the index of the first nonzero value in a vector

Usage
position_finder(vec)

Arguments
vec A binary vector

Value
Position of the first nonzero value in a vector.

print.biclustermd Print an object of class biclustermd

Description
Print an object of class biclustermd

Usage
## S3 method for class 'biclustermd'
print(x, ...)

Arguments
x a biclustermd object.
... arguments passed to or from other methods

reorder_biclust Reorder a bicluster object for making a heat map

Description
Reorder a bicluster object for making a heat map

Usage
reorder_biclust(x)

Arguments
x A bicluster object.

Value
A list containing the two partition matrices used by gg_bicluster.

rep_biclustermd Repeat a biclustering to achieve a minimum SSE solution

Description
Repeat a biclustering to achieve a minimum SSE solution

Usage
rep_biclustermd(data, nrep = 10, parallel = FALSE, ncores = 2, col_clusters = floor(sqrt(ncol(data))), row_clusters = floor(sqrt(nrow(data))), miss_val = mean(data, na.rm = TRUE), miss_val_sd = 1, similarity = "Rand", row_min_num = 5, col_min_num = 5, row_num_to_move = 1, col_num_to_move = 1, row_shuffles = 1, col_shuffles = 1, max.iter = 100)

Arguments
data Dataset to bicluster. Must be a data matrix containing only numbers and missing values. It should have row names and column names.
nrep The number of times to repeat the biclustering. Default 10.
parallel Logical indicating if the user would like to utilize the foreach parallel backend. Default is FALSE.
ncores The number of cores to use if parallel computing. Default 2.
col_clusters The number of clusters to partition the columns into.
row_clusters The number of clusters to partition the rows into.
miss_val Value or function to put in empty cells of the prototype matrix. If a value, a random normal variable with sd = miss_val_sd is used each iteration.
miss_val_sd Standard deviation of the normal distribution miss_val follows if miss_val is a number. By default this equals 1.
similarity The metric used to compare two successive clusterings. Can be "Rand" (default), "HA" for the Hubert and Arabie adjusted Rand index or "Jaccard". See RRand for details.
row_min_num Minimum row prototype size in order to be eligible to be chosen when filling an empty row prototype. Default is 5.
col_min_num Minimum column prototype size in order to be eligible to be chosen when filling an empty column prototype. Default is 5.
row_num_to_move Number of rows to remove from the sampled prototype to put in the empty row prototype. Default is 1.
col_num_to_move Number of columns to remove from the sampled prototype to put in the empty column prototype. Default is 1.
row_shuffles Number of times to shuffle rows in each iteration. Default is 1.
col_shuffles Number of times to shuffle columns in each iteration. Default is 1.
max.iter Maximum number of iterations to let the algorithm run for.

Value
A list of the minimum SSE biclustering, a vector containing the final SSE of each repeat, and the time it took the function to run.
References
<NAME>., <NAME>., <NAME>., <NAME>., and <NAME>. (2019) Biclustering for Missing Data. Information Sciences, Submitted.

See Also
biclustermd, tune_biclustermd

Examples
data("synthetic")
# 20 repeats without parallelization
repeat_bc <- rep_biclustermd(synthetic, nrep = 20, col_clusters = 3, row_clusters = 2,
                             miss_val = mean(synthetic, na.rm = TRUE),
                             miss_val_sd = sd(synthetic, na.rm = TRUE),
                             col_min_num = 2, row_min_num = 2,
                             col_num_to_move = 1, row_num_to_move = 1, max.iter = 10)
repeat_bc
autoplot(repeat_bc$best_bc)
plot(repeat_bc$rep_sse, type = 'b', pch = 20)
repeat_bc$runtime
# 20 repeats with parallelization over 2 cores
repeat_bc <- rep_biclustermd(synthetic, nrep = 20, parallel = TRUE, ncores = 2,
                             col_clusters = 3, row_clusters = 2,
                             miss_val = mean(synthetic, na.rm = TRUE),
                             miss_val_sd = sd(synthetic, na.rm = TRUE),
                             col_min_num = 2, row_min_num = 2,
                             col_num_to_move = 1, row_num_to_move = 1, max.iter = 10)
repeat_bc$runtime

results_heatmap Make a heatmap of sparse biclustering results

Description
Make a heatmap of sparse biclustering results

Usage
results_heatmap(x, reorder = FALSE, transform_colors = FALSE, c = 1/6, cell_alpha = 1/5, col_clusts = NULL, row_clusts = NULL, ...)

Arguments
x A biclustermd object.
reorder A logical. If TRUE, heatmap will be sorted according to the cell-average matrix, A.
transform_colors If TRUE, the data are scaled by c and run through a standard normal CDF before plotting. If FALSE (default), raw data values are used in the heat map.
c Value to scale the data by before running it through a standard normal CDF. Default is 1/6.
cell_alpha A scalar defining the transparency of shading over a cell; by default this equals 1/5. The color corresponds to the cell mean.
col_clusts A vector of column cluster indices to display. If NULL (default), all are displayed.
row_clusts A vector of row cluster indices to display. If NULL (default), all are displayed.
... Arguments to be passed to geom_vline() and geom_hline().

Value
An object of class ggplot.

row.names.biclustermd Get data matrix row names and their corresponding row cluster membership

Description
Get data matrix row names and their corresponding row cluster membership

Usage
## S3 method for class 'biclustermd'
row.names(x)

Arguments
x an object of class biclustermd

Value
a data frame with row names of the shuffled matrix and corresponding row cluster names.

Examples
data("synthetic")
# default parameters
bc <- biclustermd(synthetic)
bc
row.names(bc)
# this is a simplified version of the output for gather(bc):
library(dplyr)
gather(bc) %>% distinct(row_cluster, row_name)

row_cluster_names Get row names in each row cluster

Description
Get row names in each row cluster

Usage
row_cluster_names(x, data)

Arguments
x Biclustering object to extract row cluster designation from
data Data that contains the row names

Value
A data frame with two columns: cluster corresponds to the row cluster and name gives the row names in each cluster.

Examples
data("synthetic")
rownames(synthetic) <- letters[1:nrow(synthetic)]
colnames(synthetic) <- letters[1:ncol(synthetic)]
bc <- biclustermd(synthetic, col_clusters = 3, row_clusters = 2,
                  miss_val = mean(synthetic, na.rm = TRUE),
                  miss_val_sd = sd(synthetic, na.rm = TRUE),
                  col_min_num = 2, row_min_num = 2,
                  col_num_to_move = 1, row_num_to_move = 1, max.iter = 10)
bc

runtimes Algorithm run time data

Description
This dataset stems from the R journal article introducing biclustermd to R users. It describes the data attributes and run time for varying data sizes and structures.
Usage
runtimes

Format
An object of class data.frame with 2400 rows and 13 columns.

Details
A data frame of 2400 rows and 13 variables (defined range, inclusive):
combination_no Unique identifier of a combination of parameters.
rows Number of rows in the data matrix. (50, 1500)
cols Number of columns in the data matrix. (50, 1500)
N Product of the dimensions of the data. (2500, 2250000)
row_clusts Number of clusters to partition the rows into. (4, 300)
col_clusts Number of clusters to partition the columns into. (4, 300)
avg_row_clust_size Average row cluster size. rows / row_clusts
avg_col_clust_size Average column cluster size. cols / col_clusts
sparsity Percent of data values which are missing.
user.self CPU time used executing instructions of the calling process (from ?proc.time).
sys.self CPU time used by the system on behalf of the calling process (from ?proc.time).
elapsed Amount of time in seconds it took the algorithm to converge.
iterations Number of iterations to convergence.

synthetic Synthetic data for examples.

Description
This simple dataset allows users to use data that are easy to understand while learning biclustermd. This is a matrix with 6 rows and 12 columns. 50% of values are missing.

Usage
synthetic

Format
An object of class matrix with 6 rows and 12 columns.

tune_biclustermd Bicluster data over a grid of tuning parameters

Description
Bicluster data over a grid of tuning parameters

Usage
tune_biclustermd(data, nrep = 10, parallel = FALSE, ncores = 2, tune_grid = NULL)

Arguments
data Dataset to bicluster. Must be a data matrix containing only numbers and missing values. It should have row names and column names.
nrep The number of times to repeat the biclustering for each set of parameters. Default 10.
parallel Logical indicating if the user would like to utilize the foreach parallel backend. Default is FALSE.
ncores The number of cores to use if parallel computing. Default 2.
tune_grid A data frame of parameters to tune over. The column names of this must match the arguments passed to biclustermd().

Value
A list of:
best_combn The best combination of parameters,
best_bc The minimum SSE biclustering using the parameters in best_combn,
grid tune_grid with columns giving the minimum, mean, and standard deviation of the final SSE for each parameter combination, and
runtime CPU runtime & elapsed time.
References
<NAME>., <NAME>., <NAME>., <NAME>., and <NAME>. (2019) Biclustering for Missing Data. Information Sciences, Submitted.

See Also
biclustermd, rep_biclustermd

Examples
library(dplyr)
library(ggplot2)
data("synthetic")
tg <- expand.grid(
  miss_val = fivenum(synthetic),
  similarity = c("Rand", "HA", "Jaccard"),
  col_min_num = 2,
  row_min_num = 2,
  col_clusters = 3:5,
  row_clusters = 2
)
tg
# in parallel: two cores:
tbc <- tune_biclustermd(synthetic, nrep = 2, parallel = TRUE, ncores = 2, tune_grid = tg)
tbc
tbc$grid %>%
  group_by(miss_val, col_clusters) %>%
  summarise(avg_sd = mean(sd_sse)) %>%
  ggplot(aes(miss_val, avg_sd, color = col_clusters, group = col_clusters)) +
  geom_line() +
  geom_point()
tbc <- tune_biclustermd(synthetic, nrep = 2, tune_grid = tg)
tbc
boxplot(tbc$grid$mean_sse ~ tbc$grid$similarity)
boxplot(tbc$grid$sd_sse ~ tbc$grid$similarity)
# nycflights13::flights dataset
library(nycflights13)
data("flights")
library(dplyr)
flights_bcd <- flights %>%
  select(month, dest, arr_delay)
flights_bcd <- flights_bcd %>%
  group_by(month, dest) %>%
  summarise(mean_arr_delay = mean(arr_delay, na.rm = TRUE)) %>%
  spread(dest, mean_arr_delay) %>%
  as.data.frame()
# months as rows
rownames(flights_bcd) <- flights_bcd$month
flights_bcd <- as.matrix(flights_bcd[, -1])
flights_grid <- expand.grid(
  row_clusters = 4,
  col_clusters = c(6, 9, 12),
  miss_val = fivenum(flights_bcd),
  similarity = c("Rand", "Jaccard")
)
# RUN TIME: approximately 40 seconds across two cores.
flights_tune <- tune_biclustermd(
  flights_bcd,
  nrep = 10,
  parallel = TRUE,
  ncores = 2,
  tune_grid = flights_grid
)
flights_tune
Package ‘equateMultiple’

October 17, 2022

Type Package
Title Equating of Multiple Forms
Version 0.1.1
Author <NAME>
Maintainer <NAME> <<EMAIL>>
Description Equating of multiple forms using Item Response Theory (IRT) methods (Battauz M. (2017) <doi:10.1007/s11336-016-9517-x> and Haberman S. J. (2009) <doi:10.1002/j.2333-8504.2009.tb02197.x>).
License GPL-3
Imports stats, graphics, numDeriv, statmod, Rcpp (>= 0.12.11)
Depends equateIRT(>= 2.0-4)
LinkingTo Rcpp, RcppArmadillo
Suggests knitr, rmarkdown, ltm
VignetteBuilder knitr
NeedsCompilation yes
Repository CRAN
Date/Publication 2022-10-17 09:52:32 UTC

R topics documented:
EquateMultiple-package, eqc.mlteqc, itm.mlteqc, multiec, score.mlteqc, summary.mlteqc

EquateMultiple-package Equating of Multiple Forms

Description
The EquateMultiple package implements IRT-based methods to equate simultaneously many forms calibrated separately. This package estimates the equating coefficients to convert the item parameters and the ability values to the scale of the base form. It can be applied to a large number of test forms, as well as to 2 forms. The computation of the equated scores is also implemented.

Details
This package implements the methods proposed in Haberman (2009) and Battauz (2017). Function multiec computes the equating coefficients to convert the item parameters and the ability values to the scale of the base form. The methods implemented are: multiple mean-geometric mean (Haberman, 2009), multiple mean-mean, multiple item response function, and multiple test response function (Battauz, 2017). The function provides the equating coefficients, the synthetic item parameters and the standard errors of the equating coefficients and the synthetic item parameters. Equated scores can be computed using true score equating and observed score equating methods. Standard errors of equated scores are also provided.

Author(s)
<NAME>
Maintainer: <NAME> <<EMAIL>>

References
<NAME>. (2017). Multiple equating of separate IRT calibrations. Psychometrika, 82, 610–636. doi:10.1007/s11336-016-9517-x.
<NAME>. (2009). Linking parameter estimates derived from an item response model through separate calibrations. ETS Research Report Series, 2009, i-9. doi:10.1002/j.2333-8504.2009.tb02197.x.
See Also
equateIRT

Examples
data(est2pl)
# prepare the data
mods <- modIRT(coef = est2pl$coef, var = est2pl$var, display = FALSE)
# Estimation of the equating coefficients with the multiple mean-mean method
eqMM <- multiec(mods = mods, base = 1, method = "mean-mean")
summary(eqMM)
# Estimation of the equating coefficients with the
# multiple mean-geometric mean method (Haberman, 2009)
eqMGM <- multiec(mods = mods, base = 1, method = "mean-gmean")
summary(eqMGM)
# Estimation of the equating coefficients with the multiple item response function method
eqIRF <- multiec(mods = mods, base = 1, method = "irf")
summary(eqIRF)
# Estimation of the equating coefficients with the multiple item response function method
# using as initial values the estimates obtained with the multiple mean-geometric mean method
eqMGM <- multiec(mods = mods, base = 1, method = "mean-gmean", se = FALSE)
eqIRF <- multiec(mods = mods, base = 1, method = "irf", start = eqMGM)
summary(eqIRF)
# Estimation of the equating coefficients with the multiple test response function method
eqTRF <- multiec(mods = mods, base = 1, method = "trf")
summary(eqTRF)
# scoring using the true score equating method and equating coefficients
# obtained with the multiple item response function method
score(eqIRF)

eqc.mlteqc Extract Equating Coefficients of Multiple Forms

Description
eqc is a generic function which extracts the equating coefficients.

Usage
## S3 method for class 'mlteqc'
eqc(x, ...)

Arguments
x object of the class mlteqc returned by function multiec
... further arguments passed to or from other methods.

Value
A data frame containing the equating coefficients.

Author(s)
<NAME>

See Also
multiec

Examples
data(est2pl)
# prepare the data
mods <- modIRT(coef = est2pl$coef, var = est2pl$var, display = FALSE)
# Estimation of the equating coefficients with the multiple item response function method
eqIRF <- multiec(mods = mods, base = 1, method = "irf")
# extract equating coefficients
eqc(eqIRF)

itm.mlteqc Extract Item Parameters

Description
itm is a generic function which extracts a data frame containing the item parameters of multiple forms being equated in the original scale and the item parameters converted to the scale of the base form.

Usage
## S3 method for class 'mlteqc'
itm(x, ...)

Arguments
x object of the class mlteqc returned by function multiec
... further arguments passed to or from other methods.

Value
A data frame containing item names (Item), item parameters of all the forms (e.g. T1, . . . , T3), and item parameters of all the forms converted in the scale of the base form (e.g. T3.as.T1).

Author(s)
<NAME>

See Also
multiec

Examples
data(est2pl)
# prepare the data
mods <- modIRT(coef = est2pl$coef, var = est2pl$var, display = FALSE)
# Estimation of the equating coefficients with the multiple item response function method
eqIRF <- multiec(mods = mods, base = 1, method = "irf")
# extract item parameters
itm(eqIRF)

multiec Multiple Equating Coefficients

Description
Calculates the equating coefficients between multiple forms.

Usage
multiec(mods, base = 1, method = "mean-mean", se = TRUE, nq = 30, start = NULL, eval.max = 100000)

Arguments
mods an object of the class modIRT containing item parameter coefficients and their covariance matrix of the forms to be equated.
base integer value indicating the base form.
method the method used to compute the equating coefficients. This should be one of "mean-mean", "mean-gmean", "irf" or "trf" (see details).
se logical; if TRUE the standard errors of the equating coefficients and the synthetic item parameters are computed. nq number of quadrature points used for the Gauss-Hermite quadrature for methods "irf" or "trf". start initial values. This can be a vector containing the A and B equating coefficients excluding the base form, or an object of class mlteqc returned by function multiec. Used only with methods "irf" and "trf". eval.max maximum number of evaluations of the objective function allowed. Used only with methods "irf" and "trf". Details The methods implemented for the computation of the multiple equating coefficients are the multiple mean-mean method ("mean-mean"), the multiple mean-geometric mean method ("mean-gmean"), the multiple item response function method ("irf") and the multiple test response function method ("trf"). Value An object of class mlteqc with components A A equating coefficients. B B equating coefficients. se.A standard errors of A equating coefficients. se.B standard errors of B equating coefficients. varAB covariance matrix of equating coefficients. as synthetic discrimination parameters â∗j. bs synthetic difficulty parameters b̂∗j. se.as standard errors of synthetic discrimination parameters. se.bs standard errors of synthetic difficulty parameters. tab data frame containing item names (Item), item parameters of all the forms (e.g. T1, ..., T3), and item parameters of all the forms converted to the scale of the base form (e.g. T3.as.T1). varFull list of covariance matrices of the item parameters of every form. partial partial derivatives of equating coefficients with respect to the item parameters. itmp number of item parameters of the IRT model. method the equating method used. basename the name of the base form. convergence An integer code. 0 indicates successful convergence. Returned only with methods "irf" and "trf". Author(s) <NAME> References <NAME>. (2017). Multiple equating of separate IRT calibrations. Psychometrika, 82, 610–636. <NAME>. (2009). Linking parameter estimates derived from an item response model through separate calibrations. ETS Research Report Series, 2009, i-9. See Also modIRT, score.mlteqc Examples data(est2pl) # prepare the data mods <- modIRT(coef = est2pl$coef, var = est2pl$var, display = FALSE) # Estimation of the equating coefficients with the multiple mean-mean method eqMM <- multiec(mods = mods, base = 1, method = "mean-mean") summary(eqMM) # Estimation of the equating coefficients with the # multiple mean-geometric mean method (Haberman, 2009) eqMGM <- multiec(mods = mods, base = 1, method = "mean-gmean") summary(eqMGM) # Estimation of the equating coefficients with the multiple item response function method eqIRF <- multiec(mods = mods, base = 1, method = "irf") summary(eqIRF) # Estimation of the equating coefficients with the multiple item response function method # using as initial values the estimates obtained with the multiple mean-geometric mean method eqMGM <- multiec(mods = mods, base = 1, method = "mean-gmean", se = FALSE) eqIRF <- multiec(mods = mods, base = 1, method = "irf", start = eqMGM) summary(eqIRF) # Estimation of the equating coefficients with the multiple test response function method eqTRF <- multiec(mods = mods, base = 1, method = "trf") summary(eqTRF) score.mlteqc Scoring of multiple forms Description Relates number-correct scores on multiple forms. Usage ## S3 method for class 'mlteqc' score(obj, method="TSE", D=1, scores=NULL, se=TRUE, nq=30, w=0.5, theta=NULL, weights=NULL, ...)
Arguments obj object of the class mlteqc returned by function multiec. method the scoring method to be used. This should be one of "TSE" (the default) for true score equating or "OSE" for observed score equating. D constant D of the IRT model used to estimate item parameters. scores integer values to be converted. se logical; if TRUE standard errors of equated scores are computed. nq number of quadrature points used to approximate integrals with observed score equating. Used only if arguments theta and weights are NULL. w synthetic weight for population 1. It should be a number between 0 and 1. theta vector of ability values used to approximate integrals with observed score equating. weights vector of weights used to approximate integrals with observed score equating. ... further arguments passed to or from other methods. Details In this function common items are internal, i.e. they are used for scoring the test. Value A data frame containing theta values (only for true score equating), scores of the form chosen as base, equated scores of all other forms, and standard errors of equated scores. Author(s) <NAME> References <NAME>. and <NAME>. (2014). Test equating, scaling, and linking: methods and practices, 3rd ed., New York: Springer. <NAME>. (2001). Item response theory true score equatings and their standard errors. Journal of Educational and Behavioral Statistics, 26, 31–50. <NAME>. (2003). Asymptotic standard errors of IRT observed-score equating methods. Psychometrika, 68, 193–211. See Also multiec Examples data(est2pl) # prepare the data mods <- modIRT(coef = est2pl$coef, var = est2pl$var, display = FALSE) # Estimation of the equating coefficients with the multiple item response function method eqIRF <- multiec(mods = mods, base = 1, method = "irf") summary(eqIRF) # scoring using the true score equating method score(eqIRF) # scoring using observed score equating method, without standard errors score(eqIRF, method = "OSE", se = FALSE) summary.mlteqc Summarizing Estimated Equating Coefficients Description summary method for class mlteqc. Usage ## S3 method for class 'mlteqc' summary(object, ...) Arguments object an object of the class mlteqc returned by function multiec. ... further arguments passed to or from other methods. Author(s) <NAME> See Also multiec Examples data(est2pl) # prepare the data mods <- modIRT(coef = est2pl$coef, var = est2pl$var, display = FALSE) # Estimation of the equating coefficients with the multiple mean-mean method eqMM <- multiec(mods = mods, base = 1, method = "mean-mean") summary(eqMM)
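The A and B coefficients estimated by multiec can also be applied by hand: the usual linear transformation theta* = A * theta + B places ability values estimated on another form onto the scale of the base form. A minimal sketch, not taken from the package documentation; how the A and B components are indexed by form is an assumption here, so inspect names(eqIRF$A) in practice:
# rescale toy ability estimates from one non-base form to the base form,
# using the A and B components documented in the Value section of multiec
eqIRF <- multiec(mods = mods, base = 1, method = "irf")
A2 <- eqIRF$A[2]              # assumed position of the form of interest
B2 <- eqIRF$B[2]
theta_form2 <- c(-1, 0, 1.5)  # toy ability estimates on that form
theta_base <- A2 * theta_form2 + B2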
pupper
readthedoc
Markdown
Stanford Pupper 2020 documentation [Stanford Pupper](index.html#document-index) --- Welcome to Pupper’s documentation![¶](#welcome-to-pupper-s-documentation) === About Pupper[¶](#about-pupper) --- Stanford Pupper is a small quadruped robot that can hop, trot, and run around. We hope that its low cost and simple design will allow robot enthusiasts in K-12 and beyond to get their hands on fun, dynamic robots. The robot’s brain is a Raspberry Pi 4 computer, which receives commands from a wireless PS4 controller and controls the servo motors, three per leg, to move the feet and body to the right places. The robot is designed to be “hacked” – we want you to be able to adjust and expand the robot’s behaviors to your heart’s content. While the robot can walk out-of-the-box, some of the features you could add include different gaits (bounding, galloping, etc), or high level behaviors like playing fetch or following you around. You can also simulate the robot’s motion in PyBullet before touching the real robot. To get started, check out the pages linked below on part sourcing and assembly. If you purchase the parts yourself, it’ll run you about $900-$1000. However, you can purchase a kit to build the robot from either [MangDang](http://www.mangdang.net/Product?_l=en) or [Cypress Software](https://cypress-software-inc.myshopify.com/) for less than what it would cost you to get the parts yourself. The two vendors sell different options so check both of them out to see what works for you. While we’re not affiliated with either company, we’ve verified both of their kits. ### Part Sourcing[¶](#part-sourcing) #### Pre-Made Kits[¶](#pre-made-kits) Two small businesses are now selling Pupper kits. Both vendors sell a variety of types of kits, from partially complete to fully assembled. Buying a kit instead of purchasing the parts yourself will most likely save you money. Check out both websites to see which kit best suits your needs. * [Cypress Software](https://cypress-software-inc.myshopify.com/) * [MangDang](http://www.mangdang.net/Product?_l=en) #### Bill of Materials[¶](#bill-of-materials) Link: [Bill of Materials](https://docs.google.com/spreadsheets/d/1zZ2e00XdzA7zwb35Ly_HdzfDJcsxMIR_5vjwnf-KW70/edit#gid=1141991382) If you’d like to source the parts yourself follow the instructions in the BOM. Most of the parts can be bought directly from a reseller like Amazon or McMaster-Carr, but for some you’ll need to get them custom manufactured for you. The custom parts include the carbon fiber routed parts, the 3D printed parts, the power distribution printed circuit board, and the motors. The BOM spreadsheet goes into much more detail. ### [Assembly](#id3)[¶](#assembly) Contents * [Assembly](#assembly) + [Hip Assembly](#hip-assembly) + [Body Assembly](#body-assembly) + [PCB Assembly](#pcb-assembly) #### [Hip Assembly](#id4)[¶](#hip-assembly) ##### Step 1. Install disc[¶](#step-1-install-disc) * Video Instructions: <https://youtu.be/aUnK3Wvj8CU> * Materials: M3x6mm socket head screw, servo disc, loctite, servo arm (to adjust servo motor) * Tools: 2.5mm hex driver Instructions: 1. Attach a servo horn (not disc) to the servo and turn the servo into its neutral position and then remove the servo horn 2. Align the servo disc on the servo shaft so that the disc holes are roughly at 45 degree marks (see picture below) 3. Put a tiny dab of loctite on the screw 4. Gently start threading the screw in and then continue to screw it in; this will cause the disc to sink onto the servo spline (the output shaft) 5. 
Some of the servo discs are poorly manufactured, so if this is the case for you, it’s expected that it’ll take a lot of torque to start threading the screw. At some point however, the servo disc will finish deforming to fit the shaft and it’ll become a lot easier to screw on. When the disc is fully seated, you’ll again see an increase in torque and you should stop. Do not over-tighten the screw once the disc is fully seated or you risk breaking the servo motor. Preparing for the assembly Completed step ##### Step 2. Install the M3 threaded insert into the inner hip piece[¶](#step-2-install-the-m3-threaded-insert-into-the-inner-hip-piece) * No Video Available * Materials: Assembly so far, M3 tapered threaded insert * Tools: Soldering iron Instructions: 1. Place the insert into the hole with the tapered side down 2. Set the soldering iron to around 500f or 260c and then gently press the insert into the plastic. I recommend just using the weight of the iron to press the insert in, and I also suggest doing it in steps, i.e. pressing it in 1mm, taking the iron out, then pressing another 1mm, etc., until it is all the way in. This method prevents the insert from getting stuck to the iron. Setting up the insert for insertion. The actual 3D printed part in the picture is outdated, but the insertion is still the same. Completed step ##### Step 3. Mount disc to inner hip part[¶](#step-3-mount-disc-to-inner-hip-part) * Video Instructions: <https://youtu.be/qUoLoNEeEI8> * Materials: M3x8 flat head screw, Inner hip part, loctite * Tools: 2mm hex driver Instructions: 1. Install the inner hip part at a 90 degree angle and then use the access holes on the other side to tighten down the M3x8 flat head screw attaching it to the servo disc. Preparing for the assembly Completed step for left side Completed step for right side ##### Step 4. Install the inner hip servo[¶](#step-4-install-the-inner-hip-servo) * Video Instructions: <https://youtu.be/6Rd2ZSjpYhM> * Materials: Inner Hip Assembly so far, servo, M4x10mm screws for plastic (silver), M3x16mm button head, 2x standoff * Tools: T20H torx driver, 2mm hex driver Instructions: 1. Place servo motor in the inner hip part and gently wiggle such that the servo shaft is sticking out of the big circular hole in the inner hip part 2. Screw the M4x10mm screws on the left side of the servo and the M3x16mm screws on the right side of the motor. Use loctite on the M3x16mm screws 3. Turn over assembly and screw M3x16mm screws onto standoffs Preparing for the assembly Completed step Another look at the assembly. Note that the plastic screws are on the left, and the M3 screws are on the right ##### Step 5. Install servo horn on inner hip servo[¶](#step-5-install-servo-horn-on-inner-hip-servo) * Video Instructions: <https://youtu.be/wqRM8rbfDBM> * Materials: Inner Hip Assembly so far, M3x8mm button head screw, M2x8mm socket head screw, servo horn * Tools: 2mm hex driver Instructions: 1. Turn the servo into its neutral position and then slide the horn on at the angle shown (45 degrees downwards) 2. Screw the M3x8mm screw on to the top of the horn and screw the M2x8mm screws into the side of the horn 3. Don’t forget to use loctite! Preparing for the assembly Completed step for right side Completed step for left side ##### Step 6. Attach Leg[¶](#step-6-attach-leg) * Video Instructions: <https://youtu.be/bMr0gCNQJxM> * Materials: Bottom Leg, Top Leg, 3-part Thrust Bearing x2, Shoulder Bolt, M3 Lock Nut * Tools: 2mm driver, wrench for lock nut Instructions: 1. 
Add one 3-part thrust bearing on the shoulder bolt, then the Bottom leg, then another 3-part thrust bearing, then the Top leg, then the locking nut. Flip orientation of Bottom and Top leg accordingly for the left and right leg. See pictures for reference. Preparing for the right assembly Preparing for the left assembly ##### Step 7. Attach top carbon leg link to servo horn[¶](#step-7-attach-top-carbon-leg-link-to-servo-horn) * Video Instructions: <https://youtu.be/Tp3HsjZY7qY> * Materials: Inner Hip Assembly, Leg Assembly, M3x6 Button Head x2. * Tools: 2mm hex driver Instructions: 1. Align the curved edge of the left Top leg with the left Servo horn. Screw in the M3x6 button head screws through the carbon leg holes. Repeat for right side. 2. Be careful when seating the screw nearest to the servo to ensure it is vertical. It is necessary to hold the screw vertically to avoid cross threading. Preparing for the assembly Completed step for right side ##### Step 8. Install outer hip assembly[¶](#step-8-install-outer-hip-assembly) * Video Instructions: <https://youtu.be/iIqjgKaIPs8> * Materials: Servo, outer hip part, M4x10mm screw plastic * Tools: T20H torx driver Instructions: 1. Place servo into joint and affix with the two screws closest to the servo spline Preparing for the assembly Completed step ##### Step 9. Install servo horn on outside servo[¶](#step-9-install-servo-horn-on-outside-servo) * Video Instructions: <https://youtu.be/Tj7zx2M6xas> * Materials: Servo horn, Outer Hip assembly, M3x8 button head, M2x8 socket head * Tools: 2mm hex driver Instructions: 1. Turn the servo to its neutral position and then attach the horn at a 45 degree angle as shown. 2. First tighten the servo horn down with the M3x8, then add the M2x8 screws to tension the servo horn. Similar to Step 5. Preparing for the assembly Completed step for right side Completed step for left side ##### Step 10. Assemble the two sides[¶](#step-10-assemble-the-two-sides) * Video Instructions: <https://youtu.be/dKv7VrdE290> * Materials: Inner and Outer Hip assembly, M3x16 button head screws for screwing into standoffs, loctite * Tools: 2mm hex driver Instructions: 1. Align Inner and Outer Hip assembly, M4x10mm plastic screws should be on the same side and servo horns should be at a 90 degree angle. 2. Connect assemblies with M3x16 screws through Outer Hip assembly to standoffs. Add loctite on screws. Don’t tighten the screws down all the way yet. 3. At this point, your legs might start to move, feel free to mark your left and right side so you don’t get confused. If you don’t know which side is which, compare with the 3D model: <https://stanford195.autodesk360.com/g/shares/SH919a0QTf3c32634dcfedf61e031f673710> Preparing for the assembly Completed step Another look at the assembly ##### Step 11. Assemble the other 2 standoffs[¶](#step-11-assemble-the-other-2-standoffs) * Video Instructions: <https://youtu.be/nD_yWAIB70c> * Materials: Assembly, 4 M3x10 button head screws, 2 standoffs * Tools: 2mm hex driver Instructions: 1. Install the other 2 standoffs and fasten with M3x10 button head screws Preparing for the assembly Completed step ##### Step 12. Test the full range of motion for each servo[¶](#step-12-test-the-full-range-of-motion-for-each-servo) * Video Instructions: <https://youtu.be/gvaUp9pQ-W4> Instructions: 1. The hip should go fully flat on either side 2. The horn nearest the body should go from 45 degrees upward to fully touching the lower standoff 3. 
The horn away from the body should go from touching the standoff upwards to going 45 degrees downward ##### Step 13. Assemble the upper leg extension rod[¶](#step-13-assemble-the-upper-leg-extension-rod) * Video Instructions: <https://youtu.be/4e2r8jGPv5Q> * Materials: Threaded rod, rod end x 2 * Tools: None Instructions: 1. Screw the rod ends on equally until the distance between the center of the holes in the rod ends is 123.5mm. The goal here is to make the center-to-center hole distance in the extension rod match that of the upper leg link. Preparing for the assembly Completed step ##### Step 14. Attach Upper Leg Extension Rod to Servo Horn[¶](#step-14-attach-upper-leg-extension-rod-to-servo-horn) * Video Instructions: <https://youtu.be/c0DC35XpYTk> * Materials: M3x8 button head screw * Tools: 2mm driver Instructions: 1. From the inside, screw the extension rod to the servo horn with the M3x8 button head screw. Preparing for the assembly ##### Step 15. Attach Upper Leg Extension Rod to Lower Leg Carbon Linkage[¶](#step-15-attach-upper-leg-extension-rod-to-lower-leg-carbon-linkage) * Video Instructions: <https://youtu.be/uQt9EFQzu2w> * Materials: M3x10 button head screw, M3 Locking Nut * Tools: 2mm driver, wrench Instructions: 1. Slide an M3x10 button head screw through the carbon fiber piece and then the rod end. Then fasten the screw with an M3 locknut, using a wrench to keep it in place while you use an allen key to tighten. Preparing for the assembly Completed step. CORRECTION: The extension rod in this picture is actually too short. See the red and blue annotation for the correct assembly. The servo horn, extension rod, upper leg link, and upper part of the lower leg link should form a perfect parallelogram. Another look at the assembly ##### Done![¶](#done) Left and Right side Continue on to Body Assembly #### [Body Assembly](#id5)[¶](#body-assembly) ##### Step 1. Install tapered threaded heat inserts into 3D printed parts[¶](#step-1-install-tapered-threaded-heat-inserts-into-3d-printed-parts) * No Video Available * Materials: M3 tapered heat-set inserts for plastic x16, 4 body pieces * Tools: Soldering iron set to around 500f / 260c Instructions: 1. Each of the 3D printed body pieces has four holes (two on top and two on bottom) that hold the tapered heat-set inserts for plastic 2. Place the insert into the hole with the tapered side down 3. Use a soldering iron set to around 500f or 260c to gently press the insert into the plastic. I recommend just using the weight of the iron to press the insert in, and I also suggest doing it in steps, i.e. pressing it in 1mm, taking the iron out, then pressing another 1mm, etc., until it is all the way in. This method prevents the insert from getting stuck to the iron. Before pressing the tapered threaded heat insert After pressing the tapered threaded heat insert ##### Step 2: Press the radial bearings into the body pieces[¶](#step-2-press-the-radial-bearings-into-the-body-pieces) * No Video Available * Materials: 4 bearings (3mm x 8mm x 4mm Bearing MR693-zz), Front Front body part, Back Front body part * Tools: Your hands, arbor press, or vice Instructions: 1. Press two bearings into the two holes in the frontmost piece (called Front Front), and two bearings into the two holes in the back piece (called Back Front). Preparing for the assembly Completed Step ##### Step 3. 
Fasten the hip assemblies[¶](#step-3-fasten-the-hip-assemblies) * Video Instructions: <https://youtu.be/Av9e2HzpbBo> * Materials: 16x M4x8 screws (plastic), 4x M3x8 button head screw, four hip assemblies, four body parts * Tools: Torx T20 + 2mm driver Instructions: 1. Use the M4x8 screws for plastic to fasten two hip assemblies to the Back Back body part and another two hip assemblies to the Front Back body part 2. Then screw the M3x8 button head screws through the bearings you pressed into the Front Front and Back Front parts and thread them into the threaded inserts in the hip assembly Preparing for the assembly Completed Step Another look at the assembly ##### Step 4. Attach the two leg/body assemblies to the bottom carbon fiber plate[¶](#step-4-attach-the-two-leg-body-assemblies-to-the-bottom-carbon-fiber-plate) * Video Instructions: <https://youtu.be/f4iDKkfCkIs> * Materials: 16x M3x6 button head screws, 2 leg/body assemblies, Bottom carbon fiber plate * Tools: 2mm hex driver Instructions: 1. Use the M3x6 button head screws to fasten the two leg/body assemblies you built to the bottom carbon fiber plate. Preparing for the assembly Completed step ##### Step 5. Prepare and mount the Raspberry Pi case[¶](#step-5-prepare-and-mount-the-raspberry-pi-case) * Video Instructions: <https://youtu.be/ZlbkTc2Jxu8> * Materials: Raspberry Pi case (picase.stl), 4x M2.5 tapered heat-set inserts, 4x M2.5x6 socket head screws, Dual Lock * Tools: Soldering iron, 2mm driver Instructions: 1. In the same way you installed the previous inserts, press the M2.5 inserts into the holes in the Raspberry Pi case. Then, use the M2.5x6 socket head screws to screw the Raspberry Pi to the case 2. Finally, add Dual-Lock to the case to mount it to the bottom carbon fiber plate Preparing for the assembly Completed Step ##### Step 6. Assemble the PCB (if not done so already)[¶](#step-6-assemble-the-pcb-if-not-done-so-already) Navigate to PCB Assembly Instructions ##### Step 7. Plug in servo motors to Raspberry Pi[¶](#step-7-plug-in-servo-motors-to-raspberry-pi) * Video Instructions: <https://youtu.be/ToJtlmDO4AY> * Materials: Four hip assemblies mounted to the bottom plate, mounted Raspberry Pi with servo power distribution hat * Tools: None Instructions: 1. Connect the PCB to the Raspberry Pi 2. Plug the servo cables into the custom circuit board in the pattern shown below. J1 through J12 correspond to one of the twelve sets of header pins soldered to the circuit board. The circuit board has indicators for how to align the signal, ground, and positive wires from the servo motors into the board, but in case they’re too hard to see, note that the signal pins on the servo connectors always face towards the Raspberry Pi header. Preparing for the assembly Completed step ##### Done![¶](#id1) Complete PCB assembly if you haven’t done so already. #### [PCB Assembly](#id6)[¶](#pcb-assembly) ##### Step 1: Solder the servo connector headers to the board[¶](#step-1-solder-the-servo-connector-headers-to-the-board) * No Video Available * Materials: PCB, 12 male headers of 3 pins each * Tools: Soldering iron, preferably a nice one with >=60W heat output. Instructions: 1. Place each of the 12 male header pins into their respective slots as shown in the photo. 2. Then, turn the board upside down so you have access to solder the underside. Be careful that the headers don’t all fall out when you turn the board over. 
When I did this, I pressed a hard foam block up against the top side of the pins to make sure they didn’t tilt or fall out when I turned the board over. You’ll also want to check that the pins are mostly perpendicular to the board after you turn the board over. 3. Once the board is turned over, solder all of the signal pins to keep the headers in place. The signal pins are the pins closest to the Raspberry Pi header pin holes (the 2x20 array). 4. Once the headers are all tacked into place, solder the remaining ground and positive pins. Placed all the pins into the board unsoldered Completed Step, soldered pins ##### Step 2: Solder the Raspberry Pi header pin[¶](#step-2-solder-the-raspberry-pi-header-pin) * No Video Available * Materials: PCB, 2x20 Raspberry Pi header pin * Tools: Soldering iron Instructions: 1. Insert the 2x20 header pin into the PCB. Make sure that you insert the header from the bottom so that the pins are coming out the top. This will allow the header to sit on top of the Raspberry Pi. 2. Secure the PCB and header pin in a vice 3. Solder the header pins in from the top. After soldering the 2x20 header pin onto the PCB. Underside of board after soldering the 2x20 header pin. ##### Step 3: Solder the bec and 5V in pins[¶](#step-3-solder-the-bec-and-5v-in-pins) * No Video Available * Materials: PCB, header pins * Tools: Soldering iron, vice Instructions: 1. Snap off a pair of 1x2 header pins and solder them to the areas labelled Vbat and Regulated 5V. Important: If you do not have dupont/JST crimps and a crimper on hand, then do not solder pins to the Vbat holes. BEC and 5V pins (four pins on the right) soldered to the PCB. ##### Step 4: Solder the XT 60 pigtail connector to the PCB[¶](#step-4-solder-the-xt-60-pigtail-connector-to-the-pcb) * No Video Available * Materials: PCB, XT60 pigtail connector * Tools: Soldering iron, vice Instructions: 1. Insert the XT60 pigtail from the top and solder from the bottom. Make sure you get the polarity correct! The PCB has little labels for the + and - wires of the XT60 pigtail. Male XT60 Pigtail (Female housing, male pins) After soldering on the XT60 pigtail. Another view of XT60 solder connection. ##### Step 5: Test the power distribution board for shorts[¶](#step-5-test-the-power-distribution-board-for-shorts) * No Video Available * Materials: PCB * Tools: Multimeter Instructions: 1. Inspect the board visually to make sure no solder blobs are shorting together 2. Turn the multimeter to the short detecting setting. This is usually indicated by a little speaker icon. 3. Test that the + and - pins of the XT60 connector do not short together 4. Test that none of the signal wires short to + or - either. 5. Test that none of the signal wires short to each other. ##### Step 6: Test for servo power[¶](#step-6-test-for-servo-power) * No Video Available * Materials: PCB, servos * Tools: none Instructions: 1. Plug in your 2S lipo (never plug in anything more than 8.4V or it’s very likely you’ll burn out your servos) 2. Connect a single servo to the board, noting the labels for the signal, -, and + wires. On servos, the signal wire is usually yellow or white. 3. Reference the picture to determine correct wire orientation. 4. If the servo doesn’t start smoking when you plug it in, good job! 5. Unplug the servo and battery for now. ##### Step 7: Plug in the 5V voltage regulator[¶](#step-7-plug-in-the-5v-voltage-regulator) * No Video Available * Materials: PCB, 5V regulator (BEC) * Tools: Soldering iron or crimper Instructions: 1. 
We use a 5V BEC to reduce the 7.4-8.4V voltage from the battery to 5V for the Raspberry Pi. The 5V output of the BEC has a JST connector which mates nicely with the Regulated 5V in pins you soldered in step 3. 2. The input side of the BEC has a male JST connector which you should now snip off. 3. You can either strip these input wires and solder them to the Vbat holes directly, or you can crimp female dupont headers to the wires, put them in a 1x2 housing, and plug the wires into the Vbat pins. ##### Done![¶](#id2) Complete hip assembly and body assembly if you haven’t done so already. ### [Software Installation](#id1)[¶](#software-installation) Contents * [Software Installation](#software-installation) + [Setting up your Raspberry Pi](#setting-up-your-raspberry-pi) - [Preparing the Pi’s SD card](#preparing-the-pi-s-sd-card) * [1. Put the SD card into your desktop / laptop.](#put-the-sd-card-into-your-desktop-laptop) * [2. Download this version of Raspbian](#download-this-version-of-raspbian) * [3. Use etcher to flash the card.](#use-etcher-to-flash-the-card) * [4. Open up the SD card file system.](#open-up-the-sd-card-file-system) * [5. Download the latest release of the RPI-Setup repository.](#download-the-latest-release-of-the-rpi-setup-repository) * [6. Move all the files in the downloaded repository into the SD card.](#move-all-the-files-in-the-downloaded-repository-into-the-sd-card) - [Enabling Basic Functionality](#enabling-basic-functionality) * [1. Turn on your Raspberry Pi.](#turn-on-your-raspberry-pi) * [2. Configure your computer to SSH into the robot](#configure-your-computer-to-ssh-into-the-robot) * [2. SSH into the pi from your computer.](#ssh-into-the-pi-from-your-computer) * [3. Enter read-write mode](#enter-read-write-mode) * [4. Get internet access](#get-internet-access) * [4. [For Stanford Students] Get internet access at Stanford](#for-stanford-student-students-get-internet-access-at-stanford) * [5. Install prerequisites](#install-prerequisites) * [What the RPI-Setup repo does](#what-the-rpi-setup-repo-does) + [Installing the StanfordQuadruped software on the Raspberry Pi](#installing-the-stanfordquadruped-software-on-the-raspberry-pi) - [Steps](#steps) * [1. Connect to the Pi over SSH](#connect-to-the-pi-over-ssh) * [2. Test for the internet connection.](#test-for-the-internet-connection) * [3. Clone this repo (on the Pi)](#clone-this-repo-on-the-pi) * [4. Install requirements (on the Pi)](#install-requirements-on-the-pi) * [5. Power-cycle the robot](#power-cycle-the-robot) * [6. Verify everything is working](#verify-everything-is-working) * [7. Done!](#done) #### [Setting up your Raspberry Pi](#id2)[¶](#setting-up-your-raspberry-pi) * Raspberry Pi 4 * SD Card (32GB recommended) * Raspberry Pi 4 power supply (USB-C, 5V, >=3A) * Ethernet cable ##### [Preparing the Pi’s SD card](#id3)[¶](#preparing-the-pi-s-sd-card) From your desktop / laptop: ###### [1. Put the SD card into your desktop / laptop.](#id4)[¶](#put-the-sd-card-into-your-desktop-laptop) ###### [2. Download this version of Raspbian](#id5)[¶](#download-this-version-of-raspbian) Use [this version](https://slack-files.com/T0RAWRCGY-FQG7WTSBH-eb9549ed22) so everyone is using the same version. Unzip and extract the file. ###### 3. Use [etcher](https://www.balena.io/etcher/) to flash the card.[¶](#use-etcher-to-flash-the-card) * If you are using the recommended etcher, this is the start-up menu. Select 2019-09-26-raspbian-buster-lite.img (file inside the zip) and the SD card. * Image of SD card being flashed. 
* Done! ###### [4. Open up the SD card file system.](#id7)[¶](#open-up-the-sd-card-file-system) Sometimes it takes some time for your computer to read the SD card and show the boot folder. Try removing the SD card and putting it back in if the problem persists. ###### 5. Download the latest release of the [RPI-Setup repository](https://github.com/Nate711/RPI-Setup).[¶](#download-the-latest-release-of-the-rpi-setup-repository) * Unzip and extract all the files. ###### [6. Move all the files in the downloaded repository into the SD card.](#id9)[¶](#move-all-the-files-in-the-downloaded-repository-into-the-sd-card) * Replace any files that conflict so the repository’s version overwrites the original version. You can now delete the zip file and the now empty folder. ##### [Enabling Basic Functionality](#id10)[¶](#enabling-basic-functionality) ###### [1. Turn on your Raspberry Pi.](#id11)[¶](#turn-on-your-raspberry-pi) Remove the SD card from your computer and put it into your Raspberry Pi. Connect power to the Pi as well. If your Pi does not boot, please try going back to step 3 “Use etcher to flash the card” and use this version of Raspbian instead: <https://downloads.raspberrypi.org/raspbian_lite/images/raspbian_lite-2020-02-14/2020-02-13-raspbian-buster-lite.zip> ###### [2. Configure your computer to SSH into the robot](#id12)[¶](#configure-your-computer-to-ssh-into-the-robot) * To use ethernet for set up (recommended), connect the ethernet cable to your computer and the Raspberry Pi. * Go to your network settings for the interface you wish to use (ethernet/wifi) * Change your Configure IPv4: Manually * Change your IP Address to something in range 10.0.0.X (If you are a part of Stanford Student Robotics, pick something that doesn’t collide with other systems from this [document](https://docs.google.com/spreadsheets/u/1/d/1pqduUwYa1_sWiObJDrvCCz4Al3pl588ytE4u-Dwa6Pw/edit?usp=sharing)) * Change your Subnet Mask: 255.255.255.0 * Leave the Router blank * After disconnecting from the robot network, remember to return those settings to what they originally were, otherwise your internet on that interface won’t work ###### [2. SSH into the pi from your computer.](#id13)[¶](#ssh-into-the-pi-from-your-computer) Run `ssh pi@10.0.0.10` (The default password is `raspberry`) ###### [3. Enter read-write mode](#id14)[¶](#enter-read-write-mode) Run `rw` in the robot shell. Confirm that the terminal prompt ends with (rw) instead of (ro). ###### [4. Get internet access](#id15)[¶](#get-internet-access) There are two methods for getting internet access: using the raspi-config tool on the Pi or changing the wpa_supplicant file on the SD card before inserting it into the Pi. If you’re on Stanford campus, please follow the instructions in the next section instead since there are special requirements. If you’re not on Stanford campus, using the raspi-config tool is simpler and recommended for beginners. However, modifying the wpa_supplicant file has the benefit that you can set the proper internet settings without SSHing into the Pi. 1. Raspi-config method Once SSH’d into the Pi, run: ``` sudo raspi-config ``` This is the menu that will appear. Go to Network Options, then Wi-Fi and enter your SSID (Wi-Fi name, e.g. Netgear, Linksys) and password. 2. Wpa_supplicant method Edit **/etc/wpa_supplicant/wpa_supplicant.conf** as documented in [this link](https://www.raspberrypi.org/documentation/configuration/wireless/wireless-cli.md), see “Adding the network details to the Raspberry Pi”; a minimal example network block is sketched below. 
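For reference, a typical wpa_supplicant.conf network entry has the following shape. The SSID and password here are placeholders for your own network, and the country and ctrl_interface lines may already exist in the file:

```
country=US
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
    ssid="YourNetworkName"
    psk="YourNetworkPassword"
}
```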
You can also see this [link](https://linux.die.net/man/5/wpa_supplicant.conf). Thanks to pi-init2 magic, that file can be edited from **/boot/appliance/etc/wpa_supplicant/wpa_supplicant.conf** before the Pi is ever turned on ###### [4. [For Stanford Students] Get internet access at Stanford](#id16)[¶](#for-stanford-student-students-get-internet-access-at-stanford) * Plug your Pi in to power (over the onboard micro USB port). Log in to the Pi over SSH. In the welcome message that comes after the login line, look for the Pi’s MAC address, which will appear under the line that says “wireless Hardware MAC address”. Note that address down. * Use another computer to navigate to iprequest.stanford.edu. * Log in using your Stanford credentials. * Follow the on-screen instructions to add another device: + **First page:** Device Type: Other, Operating System: Linux, Hardware Address: put Pi’s MAC address + **Second page:** Make and model: Other PC, Hardware Addresses Wired: delete what’s there, Hardware Addresses Wireless: put Pi’s MAC address * Confirm that the Pi is connected to the network: + Wait for an email (to your Stanford email) that the device has been accepted + **sudo reboot** on the Pi + After it’s done rebooting, type ping www.google.com and make sure you are receiving packets over the network ###### [5. Install prerequisites](#id17)[¶](#install-prerequisites) * Run `sudo ./install_packages.sh` * If the IP is still 10.0.0.10 you will be prompted to change it. The Raspberry Pi’s IP should not be the same as your computer’s IP, 10.0.0.Y. * If the hostname is still raspberry you will be prompted to change it. * You will be asked to enter the current time and date. You can skip to the next step if you’d like to automatically set the time and date. * Run `sudo ./time_sync.sh` to automatically set the time and date. ###### [What the RPI-Setup repo does](#id18)[¶](#what-the-rpi-setup-repo-does) * Enables ssh. Because the password is kept unchanged (raspberry), ssh is only enabled on the ethernet interface. Comment out the ListenAddress lines from /boot/appliance/etc/ssh/sshd_config to enable it on all interfaces. * Sets the Pi to connect to the robot network (10.0.0.X) over ethernet * Expands the SD card file system * Sets the file system up as read-only * Prepares to connect to Stanford WiFi (see above for details) * Gives the script to install tools and repos needed for development #### [Installing the StanfordQuadruped software on the Raspberry Pi](#id19)[¶](#installing-the-stanfordquadruped-software-on-the-raspberry-pi) ##### [Steps](#id20)[¶](#steps) ###### [1. Connect to the Pi over SSH](#id21)[¶](#connect-to-the-pi-over-ssh) Check that it has access to the internet. If you’re having trouble SSH-ing into the Pi, please check the instructions for setting the Pi’s ethernet settings linked in the previous step. ``` ssh pi@10.0.0.Y ``` Here, “Y” is the IP address you chose for the Pi when running the install_packages.sh script. When prompted for the password, enter the default password “raspberry” or the one you set in the install_packages.sh script. If you forgot what the Pi’s IP address is, turn off the Pi, take out the SD card and put it in your computer. Then open the SD card folder and go to the folder: boot/appliance/etc/network/. Open the file called “interfaces” in a text editor. On line 19 it should show the IP address as “address 10.0.0.x”. ###### [2. 
Test for the internet connection.](#id22)[¶](#test-for-the-internet-connection) ``` ping www.google.com ``` This is what the output should look like: If that doesn’t work, do: ``` ifconfig ``` and check the wlan0 portion to see if you have an IP address and other debugging info. ###### [3. Clone this repo (on the Pi)](#id23)[¶](#clone-this-repo-on-the-pi) ``` git clone https://github.com/stanfordroboticsclub/StanfordQuadruped.git ``` ###### [4. Install requirements (on the Pi)](#id24)[¶](#install-requirements-on-the-pi) ``` cd StanfordQuadruped sudo bash install.sh ``` ###### [5. Power-cycle the robot](#id25)[¶](#power-cycle-the-robot) Unplug the battery, wait about 30 seconds, and then plug it back in. ###### [6. Verify everything is working](#id26)[¶](#verify-everything-is-working) 1. If you just powered on the Pi, wait about 30 seconds until the green light stops blinking. 2. SSH into the robot > * Run `ssh pi@10.0.0.xx` (where xx is the IP address you chose for the robot) > 3. Check the status for the joystick service > * Run `sudo systemctl status joystick` > * If you haven’t yet connected the PS4 controller, it should say something like > ``` > pi@pupper(rw):~/StanfordQuadruped$ sudo systemctl status joystick > ● joystick.service - Pupper Joystick service > Loaded: loaded (/home/pi/PupperCommand/joystick.service; enabled; vendor preset: enabled) > Active: active (running) since Sun 2020-03-01 06:57:20 GMT; 1s ago > Main PID: 5692 (python3) > Tasks: 3 (limit: 4035) > Memory: 7.1M > CGroup: /system.slice/joystick.service > ├─5692 /usr/bin/python3 /home/pi/PupperCommand/joystick.py > └─5708 hcitool scan --flush > Mar 01 06:57:20 pupper systemd[1]: Started Pupper Joystick service. > Mar 01 06:57:21 pupper python3[5692]: [info][controller 1] Created devices /dev/input/js0 (joystick) /dev/input/event0 (evdev) > Mar 01 06:57:21 pupper python3[5692]: [info][bluetooth] Scanning for devices > ``` > 4. Connect the PS4 controller to the Pi by putting it in pairing mode. > * To put it into pairing mode, hold the share button and circular Playstation button at the same time until it starts making quick double flashes. > * If it starts making slow single flashes, hold the Playstation button down until it stops blinking and try again. > 5. Once the controller is connected, check the status again > * Run `sudo systemctl status joystick` > * It should now look something like: > ``` > pi@pupper(rw):~/StanfordQuadruped$ sudo systemctl status joystick > ● joystick.service - Pupper Joystick service > Loaded: loaded (/home/pi/PupperCommand/joystick.service; enabled; vendor preset: enabled) > Active: active (running) since Sun 2020-03-01 06:57:20 GMT; 55s ago > Main PID: 5692 (python3) > Tasks: 2 (limit: 4035) > Memory: 7.3M > CGroup: /system.slice/joystick.service > └─5692 /usr/bin/python3 /home/pi/PupperCommand/joystick.py > Mar 01 06:57:20 pupper systemd[1]: Started Pupper Joystick service. 
> Mar 01 06:57:21 pupper python3[5692]: [info][controller 1] Created devices /dev/input/js0 (joystick) /dev/input/event0 (evdev) > Mar 01 06:57:21 pupper python3[5692]: [info][bluetooth] Scanning for devices > Mar 01 06:58:12 pupper python3[5692]: [info][bluetooth] Found device A0:AB:51:33:B5:A0 > Mar 01 06:58:13 pupper python3[5692]: [info][controller 1] Connected to Bluetooth Controller (A0:AB:51:33:B5:A0) > Mar 01 06:58:14 pupper python3[5692]: running > Mar 01 06:58:14 pupper python3[5692]: [info][controller 1] Battery: 50% > ``` > * If the Pi can’t find the joystick after a minute or two, it’s possible that the Pi’s bluetooth controller was never turned on. Run `sudo hciconfig hci0 up` to turn the radio on. Then restart the Pi. > 6. Check the status of the robot service > * Run `sudo systemctl status robot` > * The output varies depending on the order in which you ran various programs, but just check that it doesn’t have any red text saying that it failed. > * If it did fail, usually this fixes it: `sudo systemctl restart robot` ###### [7. Done!](#id27)[¶](#done) Continue to Calibration. ### Calibration[¶](#calibration) Calibration is a necessary step before running the robot because we don’t yet have a precise measurement of how the servo arms are fixed relative to the servo output shafts. Running the calibration script will help you determine this rotational offset by prompting you to align each of the 12 degrees of freedom with a known angle, such as the horizontal or the vertical. #### Materials[¶](#materials) 1. Finished robot 2. Some sort of stand to hold the robot up so that its legs can extend without touching the ground/table. #### Steps[¶](#steps) 1. MangDang produced a [video](https://youtu.be/4bmYi6F7OBs) illustrating the calibration steps outlined below. You can stop watching at 17:00 because the new code automatically writes the calibration numbers rather than requiring you to edit the calibration file manually. 2. Plug in your 2S Lipo battery 3. SSH into the robot as done in the installation section 4. Stop the robot script from taking over the PWM outputs: ``` rw sudo systemctl stop robot ``` 5. Run the calibration script > * The calibration script will prompt you through calibrating each of pupper’s 12 servo motors. When it asks you to move a link to the horizontal position, you might be wondering what exactly counts as making the link horizontal. The answer is to align the *joint centers* of each link. For example, when aligning the upper link to the horizontal, you’ll want the line between the servo spline and the bolt that connects the upper link to the lower link to be as horizontal as possible: > ``` > cd StanfordQuadruped > sudo pigpiod > python3 calibrate_servos.py > ``` > * The images below illustrate the horizontal and vertical positions mentioned in the calibration script. > * If your servos can’t reach these positions, it’s likely the servo discs and/or arms were assembled incorrectly. Correct alignment for the ab/adduction motors: Correct alignment for the upper link: Correct alignment for the lower link: 6. Re-enable the robot script: ``` sudo systemctl start robot ``` ### Robot operation[¶](#robot-operation) #### Running the robot[¶](#running-the-robot) 1. Plug in your 2S Lipo battery. > * If you followed the instructions above, the code will automatically start running on boot. > * If you want to turn this feature off, ssh into the robot, go into rw mode, and then do: > ``` > sudo systemctl disable robot > ``` > 2. 
Connect the PS4 controller to the Pi by putting it in pairing mode. > * To put it into pairing mode, hold the share button and circular Playstation button at the same time until it starts making quick double flashes. > * If it starts making slow single flashes, hold the Playstation button down until it stops blinking and try again. > 3. Wait until the controller binds to the robot, at which point the controller should turn a dim green (or whatever color you chose in pupper/HardwareConfig.py for the deactivated color). 4. Press L1 on the controller to “activate” the robot. The controller should turn bright green (or again, whatever you chose in HardwareConfig). 5. You’re good to go! Check out the controls section below for operating instructions. #### Robot controls[¶](#robot-controls) * L1: Press to toggle between activated and deactivated mode. > + Note: the PS4 controller’s front light will change colors to indicate if the robot is deactivated or activated. > * R1: Press to transition between Rest mode and Trot mode * Left joystick > + Forward/back: moves the robot forward/backwards when in Trot mode > + Left/right: moves the robot left/right when in Trot mode > * Right joystick > + Forward/back: pitches the robot forward and backward > + Left/right: turns the robot left and right > * D-Pad > + Forward/back: raises and lowers the body > + Left/right: rolls the body left/right > * “X” button: Press it three times to complete a full hop #### Important Notes[¶](#important-notes) * PS4 controller pairing instructions (repeat of instructions above) > + To put it into pairing mode, hold the share button and circular Playstation button at the same time until it starts making quick double flashes. > + If it starts making slow single flashes, hold the Playstation button down until it stops blinking and try again. > * Battery voltage > + If you power the robot with anything higher than 8.4V (aka >2S) you’ll almost certainly fry all your expensive servos! > + Also note that you should attach a lipo battery alarm to your battery when running the robot so that you know when the battery is depleted. Discharging your battery too much runs the risk of starting a fire, especially if you try to charge it again after it’s been completely discharged. A good rule-of-thumb for knowing when a lipo is discharged is checking whether the individual cell voltages are below 3.6V. > + The robot will walk much more poorly when the battery is mostly discharged since a lower voltage is going to the motors. > * Feet! > + Using the bare carbon fiber as feet works well for grippy surfaces, including carpet. If you want to use the robot on a more slippery surface, we recommend buying rubber grommets (McMaster #90131A101) and fastening them to the pre-drilled holes in the feet. #### Tuning[¶](#tuning) * You can play around with different walking parameters by changing the config file `StanfordQuadruped/pupper/Config.py` > + `self.max_x_velocity` [m/s]: The maximum forward/back trotting velocity > + `self.max_y_velocity` [m/s]: Max left/right trotting velocity > + `self.max_yaw_rate` [rad/s]: Max turning velocity > + `self.z_clearance` [m]: How high the robot tries to lift each leg off the ground during swing. It’s called z_clearance because it’s the maximum distance in the z-axis between the foot and ground during swing. You can increase this value to make the robot step higher. > + `self.overlap_time` [s]: Amount of time per step that the robot has all of its legs on the ground. Increase this value for more stable walking. 
> + `self.swing_time` [s]: Amount of time the robot has each foot in the air for. ### Mechanical Design[¶](#mechanical-design) Fusion 360 CAD model: <https://a360.co/2TEh4gQ> Power distribution PCB files: <https://github.com/stanfordroboticsclub/Pupper-Raspi-PDB/> ### Controller Description[¶](#controller-description) #### Main Loop[¶](#main-loop) The main program is **run_robot.py** which is located in this directory. The robot code is run as a loop, with a joystick interface, a controller, and a hardware interface orchestrating the behavior. The joystick interface is responsible for reading joystick inputs from a UDP socket and converting them into a generic robot **command** type. A separate program, **joystick.py**, publishes these UDP messages, and is responsible for reading inputs from the PS4 controller over bluetooth. The controller does the bulk of the work, switching between states (trot, walk, rest, etc) and generating servo position targets. A detailed model of the controller is shown below. The third component of the code, the hardware interface, converts the position targets from the controller into PWM duty cycles, which it then passes to a Python binding to **pigpiod**, which then generates PWM signals in software and sends these signals to the motors attached to the Raspberry Pi. #### Controller Detail[¶](#controller-detail) This diagram shows a breakdown of the robot controller. Inside, you can see four primary components: a gait scheduler (also called gait controller), a stance controller, a swing controller, and an inverse kinematics model. The gait scheduler is responsible for planning which feet should be on the ground (stance) and which should be moving forward to the next step (swing) at any given time. In a trot for example, the diagonal pairs of legs move in sync and take turns between stance and swing. As shown in the diagram, the gait scheduler can be thought of as a conductor for each leg, switching it between stance and swing as time progresses. The stance controller controls the feet on the ground, and is actually quite simple. It looks at the desired robot velocity, and then generates a body-relative target velocity for these stance feet that is in the opposite direction to the desired velocity. It also incorporates turning, in which case it rotates the feet relative to the body in the opposite direction to the desired body rotation. The swing controller picks up the feet that just finished their stance phase, and brings them to their next touchdown location. The touchdown locations are selected so that the foot moves the same distance forward in swing as it does backwards in stance. For example, if in stance phase the feet move backwards at -0.4m/s (to achieve a body velocity of +0.4m/s) and the stance phase is 0.5 seconds long, then we know the feet will have moved backwards -0.20m. The swing controller will then move the feet forwards 0.20m to put the foot back in its starting place. You can imagine that if the swing controller only put the leg forward 0.15m, then every step the foot would lag more and more behind the body by -0.05m. Both the stance and swing controllers generate target positions for the feet in cartesian coordinates relative to the body center of mass. It’s convenient to work in cartesian coordinates for the stance and swing planning, but we now need to convert them to motor angles. This is done by using an inverse kinematics model, which maps between cartesian body coordinates and motor angles. These motor angles, also called joint angles, are then populated into the **state** variable and returned by the model. ### Help[¶](#help) You can post any questions you might have to our Google Group: <https://groups.google.com/forum/#!forum/stanford-quadrupeds> You can also email me at [<EMAIL>](mailto:<EMAIL>).
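The touchdown arithmetic from the controller description above is compact enough to sketch in a few lines of Python. This is an illustration only, with made-up names; it is not the actual StanfordQuadruped API:

```
# Illustrative sketch of the swing controller's touchdown arithmetic.
# Function and variable names are invented; they are not from the Pupper code.
def touchdown_offset(body_velocity, stance_time):
    # Stance feet move opposite to the desired body velocity...
    stance_velocity = -body_velocity               # -0.4 m/s for a +0.4 m/s body speed
    stance_drift = stance_velocity * stance_time   # -0.4 * 0.5 = -0.20 m
    # ...so the swing controller must move the foot forward the same distance.
    return -stance_drift

print(touchdown_offset(0.4, 0.5))  # 0.2 (metres)
```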
wrappr
cran
R
Package ‘wrappr’ May 23, 2023 Title A Collection of Helper and Wrapper Functions Version 0.1.0 Description Helper functions to easily add functionality to functions. The package can assign functions to have a lazy evaluation, allowing you to save and update the arguments before and after each function call. You can set a temporary working directory within functions and wrap console messages around other functions. License MIT + file LICENSE Encoding UTF-8 RoxygenNote 7.2.3 Suggests testthat (>= 3.0.0) Config/testthat/edition 3 Imports methods NeedsCompilation no Author <NAME> [aut, cre] (<https://orcid.org/0009-0000-6003-7671>) Maintainer <NAME> <<EMAIL>> Repository CRAN Date/Publication 2023-05-23 08:50:02 UTC R topics documented: get_cache_or_create, lazy_eval, msg_wrap, set_temp_wd get_cache_or_create Checks if a variable exists in an environment and returns it, or creates a new variable Description Checks if a variable exists in an environment and returns it, or creates a new variable Usage get_cache_or_create( var, func, ..., exists_func_args = NA, get_func_args = NA, warning_msg = NA_character_ ) Arguments var character. The name of the variable to check in the global environment. func function. A function that returns a value. ... Additional arguments to be passed to the param func. exists_func_args list. A list of arguments to use in base::exists. get_func_args list. A list of arguments to use in base::get. warning_msg character. Message sent to stop function if an error occurs. Value Unknown. The return type from the param func or the existing variable in the global environment. Examples ## Not run: df <- data.frame(col_1 = c("a","b","c"), col_2 = c(1,2,3)) create_blank_df <- function() { data.frame(col_1 = NA_character_, col_2 = NA_integer_) } df_1 <- get_cache_or_create( "df", create_blank_df ) df_2 <- get_cache_or_create( "df_2", create_blank_df ) ## End(Not run) lazy_eval Save and delay a function call with the option to change the function and arguments when called Description Save and delay a function call with the option to change the function and arguments when called Usage lazy_eval(..., .f) Arguments ... Additional arguments to be passed to the param .f. Also in closure function returned. .f function. A function that will be called when needed. Also in closure function returned. Value A closure function with the same param names, plus the additional params overwrite_args (Boolean) and return_new_closure (Boolean). Examples numbers <- c(1,2,3,4,5) func <- lazy_eval(numbers, .f = sum) sum_result <- func() max_result <- func(.f = max) mean_result <- func(.f = mean) range_result <- func(.f = function(...) { max(...) - min(...) }) add_more_num_result <- func(4,5,6, NA, na.rm = TRUE) updated_func <- func(na.rm = TRUE, return_new_closure = TRUE) updated_func_result <- updated_func() msg_wrap Wraps a message before and/or after a function Description Wraps a message before and/or after a function Usage msg_wrap( func, ..., before_func_msg = "", after_func_msg = "", print_func = print, use_msg = "both", print_return_var = FALSE ) Arguments func function. ... Additional arguments to be passed into the param func. before_func_msg character. after_func_msg character. print_func function. The default is print. Can use a related function like message. use_msg character. The default is "both". Selects which messages to print in the function. Use before, after, both or none. print_return_var Boolean. The default is FALSE. 
Prints the output from the called func using the function given in the param print_func. Value Unknown. The return type from the param func. Examples numbers <- c(1,2,3,4,5) answer <- msg_wrap( sum, numbers, before_func_msg = "Currently summing the numbers", after_func_msg = "Summing the numbers complete" ) numbers_with_na <- c(1,2,3,NA,5) answer_na_removed <- msg_wrap( sum, numbers_with_na, na.rm = TRUE, before_func_msg = "Sum with na.rm set to TRUE", use_msg = "before" ) numbers_to_sum <- c(10,20,30) msg_wrap((function(x) sum(x[x%%2 == 1])), x = numbers_to_sum, before_func_msg = "Result from sum of odd numbers", use_msg = "before", print_return_var = TRUE ) set_temp_wd Sets a temporary working directory within the function scope Description Sets a temporary working directory within the function scope Usage set_temp_wd( temp_cwd, func, ..., err_msg = "An error has occured in the function set_temp_wd" ) Arguments temp_cwd character. Folder path to temporarily set as the working directory func function. A function that uses a directory path ... Additional arguments to be passed to the param func. err_msg character. Message sent to stop function if an error occurs. Value Unknown. The return type from the param func. Examples ## Not run: temp_wd <- "example/folder/address/to/change" get_data <- set_temp_wd(temp_wd, read.csv, file = "file.csv") ## End(Not run)
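As a runnable variant of the set_temp_wd example above, here is a sketch using only base R and the documented signature set_temp_wd(temp_cwd, func, ...); the file name is illustrative:
tmp_dir <- tempdir()
write.csv(data.frame(col_1 = c("a","b","c"), col_2 = c(1,2,3)),
          file.path(tmp_dir, "file.csv"), row.names = FALSE)
dat <- set_temp_wd(tmp_dir, read.csv, file = "file.csv")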
kirby21
cran
R
Package ‘kirby21.base’ October 13, 2022 Type Package Title Example Data from the Multi-Modal MRI 'Reproducibility' Resource Version 1.7.3 Date 2020-07-02 Author <NAME> <<EMAIL>> Maintainer <NAME> <<EMAIL>> Description Multi-modal magnetic resonance imaging ('MRI') data from the 'Kirby21' 'reproducibility' study <https://www.nitrc.org/projects/multimodal/>, including functional and structural imaging. License GPL-2 LazyData true LazyLoad true Imports utils, stats, git2r RoxygenNote 7.1.0 URL https://www.nitrc.org/projects/multimodal/, http://dx.doi.org/10.1016/j.neuroimage.2010.11.047 Encoding UTF-8 Suggests testthat (>= 2.1.0) NeedsCompilation no Repository CRAN Date/Publication 2020-07-02 18:40:02 UTC R topics documented: all_modalities, copy_kirby21_data, delete_kirby21_data, download_kirby21_data, get_ids, get_image_filenames, get_image_filenames_df, get_par_filenames, kirby21_demog, modality_df, subject_id_to_visit_id all_modalities Return All Modalities Description Return the modalities for images for which packages were developed Usage all_modalities() Value Vector of characters copy_kirby21_data Copy Kirby21 Data to an output directory Description Copies files from the Kirby21 package to an output directory Usage copy_kirby21_data(copydir, ...) Arguments copydir Output directory for data ... Arguments to pass to get_image_filenames Value Logical if files are copied Examples on_cran = !identical(Sys.getenv("NOT_CRAN"), "true") on_ci <- nzchar(Sys.getenv("CI")) local_run = grepl("musch", tolower(Sys.info()[["user"]])) run_example = !on_cran || on_ci || local_run if (run_example) { tdir = tempfile() dir.create(tdir) outdir = tempdir() surv_installed = "kirby21.survey" %in% installed.packages() if (!surv_installed) { testthat::expect_error( download_kirby21_data("SURVEY", force = FALSE)) } else { download_kirby21_data("SURVEY", force = FALSE) } res = download_kirby21_data("SURVEY", outdir = outdir, force = TRUE) if (!surv_installed) { try({remove.packages("kirby21.survey")}) } copy_kirby21_data(copydir = tdir, outdir = outdir) } delete_kirby21_data Delete Kirby21 Imaging Data Description This function allows users to remove specific modalities for Kirby21 data sets, which allows this package to be on CRAN Usage delete_kirby21_data(modality = kirby21.base::all_modalities(), outdir = NULL) Arguments modality modality of images that are to be deleted. You must have the package downloaded for that modality. outdir output directory for files to download. It will default to the directory of the corresponding package for the data. Value Nothing is returned Examples on_cran = !identical(Sys.getenv("NOT_CRAN"), "true") on_ci <- nzchar(Sys.getenv("CI")) local_run = grepl("musch", tolower(Sys.info()[["user"]])) run_example = !on_cran || on_ci || local_run if (run_example) { outdir = tempdir() res = download_kirby21_data("SURVEY", outdir = outdir, force = TRUE) delete_kirby21_data("SURVEY", outdir = outdir) } download_kirby21_data Download Kirby21 Imaging Data Description This function allows users to download specific modalities for Kirby21 data sets, which allows this package to be on CRAN Usage download_kirby21_data( modality = kirby21.base::all_modalities(), progress = TRUE, force = FALSE, outdir = NULL ) Arguments modality modality of images that are to be downloaded. You must have the package downloaded for that modality. progress Should verbose messages be printed when downloading the data force If the package of that modality is not installed, stop. 
If force = FALSE, then this will download the data but not really install the package. outdir output directory for files to download. It will default to the directory of the corresponding package for the data. Value A logical indicating the data is there. Examples on_cran = !identical(Sys.getenv("NOT_CRAN"), "true") on_ci <- nzchar(Sys.getenv("CI")) local_run = grepl("musch", tolower(Sys.info()[["user"]])) run_example = !on_cran || on_ci || local_run if (run_example) { outdir = tempdir() res = download_kirby21_data("SURVEY", outdir = outdir) } get_ids Get IDs with Data in Package Description Return the IDs for the people scanned available in the kirby21 packages Usage get_ids() Value Vector of numeric ids get_image_filenames Get Image Filenames Description Return the filenames for the images Usage get_image_filenames(...) Arguments ... arguments passed to get_image_filenames_df Examples get_image_filenames() get_image_filenames_df Get Image Filenames in a data.frame Description Return a data.frame of filenames for the images Usage get_image_filenames_df( ids = get_ids(), modalities = all_modalities(), visits = c(1, 2), long = TRUE, warn = TRUE, outdir = NULL ) get_image_filenames_matrix(...) get_image_filenames_list(...) get_image_filenames_list_by_visit(...) get_image_filenames_list_by_subject(...) Arguments ids ID to return modalities vector of image modalities within c("FLAIR", "MPRAGE", "T2w", "fMRI", "DTI") to return visits Vector of scan indices to return (1 or 2 or both) long if TRUE, each row is a subject, visit, modality pair warn if TRUE, warnings will be produced when packages are not installed outdir output directory for files to download. It will default to the directory of the corresponding package for the data. ... arguments passed to get_image_filenames_df Value Data.frame of filenames Examples get_image_filenames_df() get_image_filenames_matrix() get_image_filenames_list() get_image_filenames_list_by_visit() get_image_filenames_list_by_subject() get_par_filenames Get Filenames of Par files Description Return the filenames for the par files Usage get_par_filenames( ids = get_ids(), modalities = c("FLAIR", "MPRAGE", "T2w", "fMRI", "DTI"), visits = c(1, 2) ) Arguments ids ID to return modalities vector of image modalities within c("FLAIR", "MPRAGE", "T2w", "fMRI", "DTI") to return visits Vector of scan indices to return (1 or 2 or both) Value Data.frame of filenames Examples get_par_filenames() kirby21_demog Kirby 21 Demographics Description A dataset containing demographic information for kirby21 data sets Format A data frame with 21 rows and 3 columns. Source https://www.nitrc.org/frs/?group_id=313 modality_df All Modalities and the Corresponding package Description Return the modalities for images and the packages that contain them Usage modality_df() Value data.frame of two columns: • modality: modality of image • package: package that contains it subject_id_to_visit_id Kirby 21 Subject Identifiers to NITRC Visit Identifiers Description A dataset containing the mapping from the Subject IDs from the Kirby demographics to the KKI2009 identifiers on NITRC Format A data frame with 42 rows and 4 columns Source https://www.nitrc.org/frs/?group_id=313
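The filename helpers above compose naturally. Below is a hedged sketch of a typical lookup workflow; the object name `first_id` is illustrative only, and the sketch assumes at least one modality package (e.g. kirby21.flair) is installed, otherwise warnings are produced for missing packages (see the warn argument).

```r
# A minimal sketch of a lookup workflow with kirby21.base.
# Assumes a modality package such as kirby21.flair is installed;
# column contents depend on which packages are available.
library(kirby21.base)

modality_df()                      # which package contains which modality
first_id <- get_ids()[1]           # numeric IDs of the scanned participants
df <- get_image_filenames_df(ids = first_id, modalities = "FLAIR", visits = 1)
head(df)                           # long = TRUE: one row per subject/visit/modality pair
```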
GrimR
cran
R
Package ‘GrimR’ October 12, 2022 Type Package Title Calculate Optical Parameters from Spindle Stage Measurements Version 0.5 Date 2018-05-28 Author <NAME> [aut, cre] Maintainer <NAME> <<EMAIL>> Description Calculates optical parameters of crystals like the optical axes, the axis angle 2V, and the direction of the principal axes of the indicatrix from extinction angles measured on a spindle stage mounted on a polarisation microscope stage. De- tails of the method can be found in Dufey (2017) <arXiv:1703.00070>. License GPL-3 RoxygenNote 6.0.1 LazyData true Depends car, stats4 NeedsCompilation no Repository CRAN Date/Publication 2018-05-29 09:23:26 UTC R topics documented: Bloss7... 2 Carma... 2 excalibrI... 3 fit.joe... 3 Gunte... 4 pcir... 5 Wulffne... 5 Wulffplo... 6 Wulffpoin... 7 Bloss73 Bloss73 Description Adularia data from: Bloss, <NAME>., and <NAME>. "Computer determination of 2V and indicatrix orientation from extinction data." American Mineralogist 58 (1973): 1052-1061. Usage data("Bloss73") Format A data frame with 19 observations on the following 2 variables. S a numeric vector MS a numeric vector Examples res<-fit.joel(Bloss73,MR=180.95,cw="ccw",optimMR=FALSE) Carman Data for Topaz by Carman Description Data from <NAME>, "The spindle stage, principles and practice", Cambridge UP, Cambridge, 1981, p. 226, for Topaz provided by Prof. Carman. Usage data("Carman") Format A data frame with 36 observations of the following 2 variables. S a numeric vector MS a numeric vector Examples res<-fit.joel(Carman,cw="ccw",optimMR=TRUE) excalibrII excalibrII Description Example data for Tiburon Albite from Bartelmehs, <NAME>., et al. "Excalibr II." Zeitschrift fuer Kristal- lographie 199.3-4 (1992): 185-196. Usage data("excalibrII") Format A data frame with 19 observations on the following 2 variables. S a numeric vector MS a numeric vector Examples res<-fit.joel(excalibrII,MR=180.15,cw="ccw",optimMR=FALSE) fit.joel Function fit.joel Description Calculate the angle between the optical axes 2V, the optical axes in cartesian and polar coordinates and the principal axes of the dielectric tensor in cartesian and polar coordinates. Usage fit.joel(Data, MR = NULL, cw = c("ccw", "cw"),optimMR=FALSE) Arguments Data (data frame) containing the spindle angles S and the extinction angles ES MR (numeric) The reference azimuth; If numeric and optimMR==TRUE, this value will be used as a starting value for further optimization. If NULL, a starting value will be guessed. cw (character) string "cw" for a clockwise graduated table, "ccw" for a counter- clockwise graduated table (default) optimMR (logical) If FALSE, the provided MR will be used without further refinement, if TRUE, the MR will be refined so as to minimize the deviance Value (list) with elements: coeffs list of the fitted parameters covmat matrix of covariances of the parameters delta2V list of estimate of 2V, its standard deviation and upper and lower confidence limits kart data frame with cartesian coordinates of the axes, sd, and confidence intervals sphaer data frame with S and ES values of the axes, sd, and confidence intervals principal data frame with S and MS angles to bring axes into extinction Extinctions data frame with S, MS, ES, calculated ES and ES-ES calculated Wulffdat data necessary to create a plot on the Wulff stereonet Author(s) <NAME> <<EMAIL>> Examples # With 360 deg. 
data: res<-fit.joel(Carman,MR=NULL,cw="ccw",optimMR=TRUE) Wulffplot(res) #Plot data on a Wulff net #with 180 degree data: res<-fit.joel(Gunter,MR=-0.89,cw="cw",optimMR=FALSE) Wulffplot(res) #Plot data on a Wulff net Gunter Data from Gunter et al. Description Gunter, <NAME>., et al. "Results from a McCrone spindle stage short course, a new version of EXCALIBR, and how to build a spindle stage." The Microscope 52.1 (2004): 23-39. Usage data("Gunter") Format A data frame with 19 observations on the following 2 variables. S a numeric vector MS a numeric vector Examples res<-fit.joel(Gunter,MR=-0.89,cw="cw",optimMR=FALSE) pcirc Circle Plot Description Add a circle to a plot, with cross-hairs Usage pcirc(gcol = "black", border = "black", ndiv = 36) Arguments gcol color of crosshairs border border color ndiv number of divisions for the circle Value no return values, used for side effects Author(s) <NAME> <<EMAIL>> Examples plot(c(-1,1),c(-1,1)) pcirc(gcol = "black", border = "black", ndiv = 36) Wulffnet Function Wulffnet Description Plots a Wulff net; modified from the Wnet function of the RFOC package, with the Wulff net rotated Usage Wulffnet(add = FALSE, col = gray(0.7), border = "black", lwd = 1) Arguments add Logical, TRUE=add to existing plot col color border border color lwd line width Details Plots an equal-angle stereonet as opposed to equal-area. In comparison to the original Wnet function from the RFOC package, the Wulff net is rotated by 90 degrees so as to conform with custom in mineralogy. Value graphical side effects Author(s) <NAME> <<EMAIL>>, <NAME> <<EMAIL>> Examples Wulffnet(add = FALSE, col = gray(0.7), border = "black", lwd = 1) Wulffplot Function Wulffplot Description Plots the S and ES values of measured points, calculated points, and of all axes on a Wulff stereonet Usage Wulffplot(x) Arguments x (list) Output list from the fit.joel function Author(s) <NAME> <<EMAIL>> Examples res<-fit.joel(Gunter,MR=-0.89,cw="cw",optimMR=FALSE) Wulffplot(res) Wulffpoint Function Wulffpoint Description Plots points in the Wulff net given S and ES Usage Wulffpoint(ES, S, col = 2, pch = 5, bg = "white", lab = "") Arguments ES (numeric) azimuth (extinction angle) in degrees S (numeric) spindle angle in degrees col color pch symbol type lab label bg background colour of symbol Author(s) <NAME> <<EMAIL>> See Also Wnet Examples Wulffnet() Wulffpoint(23, 34)
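Putting the pieces together, here is a hedged sketch that fits the bundled Gunter data and inspects a few elements of the returned list; the element names follow the Value section of fit.joel above, and nothing beyond the documented API is assumed.

```r
# Minimal sketch: fit the bundled Gunter data and inspect the result.
# Element names (delta2V, sphaer, Extinctions) are those documented above.
library(GrimR)
data("Gunter")

res <- fit.joel(Gunter, MR = -0.89, cw = "cw", optimMR = FALSE)
res$delta2V      # estimate of 2V with sd and confidence limits
res$sphaer       # S and ES values of the axes
res$Extinctions  # measured vs. calculated extinction angles

Wulffplot(res)   # measured and calculated points on a Wulff stereonet
```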
describedata
cran
R
Package ‘describedata’ October 13, 2022 Title Miscellaneous Descriptive Functions Version 0.1.0 Description Helper functions for descriptive tasks such as making print-friendly bivariate tables, sample size flow counts, and visualizing sample distributions. Also contains 'R' approximations of some common 'SAS' and 'Stata' functions such as 'PROC MEANS' from 'SAS' and 'ladder', 'gladder', and 'pwcorr' from 'Stata'. Imports dplyr (>= 0.7), forcats, tibble, tidyr, purrr, broom, stringr, haven, ggplot2, lmtest, rlang License GPL-3 Encoding UTF-8 LazyData true RoxygenNote 6.1.1 Suggests testthat NeedsCompilation no Author <NAME> [aut, cre] Maintainer <NAME> <<EMAIL>> Repository CRAN Date/Publication 2019-08-02 11:50:02 UTC R topics documented: bivariate_compar... 2 cor.pro... 3 describedat... 4 gladde... 5 ladde... 5 nagelkerk... 6 norm_dist_plo... 6 proc_mean... 7 pwcor... 8 sample_flo... 9 stata_tid... 9 univar_fre... 10 bivariate_compare Create publication-style table across one categorical variable Description Descriptive statistics for categorical variables as well as normally and non-normally distributed continuous variables, split across levels of a categorical variable. Depending on the variable type, an appropriate statistical test is used to assess differences across levels of the comparison variable. Usage bivariate_compare(df, compare, normal_vars = NULL, non_normal_vars = NULL, cat_vars = NULL, display_round = 2, p = TRUE, p_round = 4, include_na = FALSE, col_n = TRUE, cont_n = FALSE, all_cont_mean = FALSE, all_cont_median = FALSE, iqr = TRUE, fisher = FALSE, workspace = NULL, var_order = NULL, var_label_df = NULL) Arguments df A data.frame or tibble. compare Discrete variable. Separate statistics will be produced for each level, with statis- tical tests across levels. Must be quoted. normal_vars Character vector of normally distributed continuous variables that will be in- cluded in the descriptive table. non_normal_vars Character vector of non-normally distributed continuous variables that will be included in the descriptive table. cat_vars Character vector of categorical variables that will be included in the descriptive table. display_round Number of decimal places displayed values should be rounded to p Logical. Should p-values be calculated and displayed? Default TRUE. p_round Number of decimal places p-values should be rounded to. include_na Logical. Should NA values be included in the table and accompanying statistical tests? Default FALSE. col_n Logical. Should the total number of observations be displayed for each column? Default TRUE. cont_n Logical. Display sample n for continuous variables in the table. Default FALSE. all_cont_mean Logical. Display mean (sd) for all continuous variables. Default FALSE results in mean (sd) for normally distributed variables and median (IQR) for non-normally distributed variables. Must be FALSE if all_cont_median == TRUE. all_cont_median Logical. Display median (sd) for all continuous variables. Default FALSE re- sults in mean (sd) for normally distributed variables and median (IQR) for non- normally distributed variables. Must be FALSE if all_cont_mean == TRUE. iqr Logical. If the median is displayed for a continuous variable, should interquar- tile range be displayed as well (TRUE), or should the values for the 25th and 75th percentiles be displayed (FALSE)? Default TRUE fisher Logical. Should Fisher’s exact test be used for categorical variables? Default FALSE. Ignored if p == FALSE. 
workspace Numeric variable indicating the workspace to be used for Fisher’s exact test. If NULL, the default value of 2e5 is used. Ignored if fisher == FALSE. var_order Character vector listing the variable names in the order results should be displayed. If NULL, the default, continuous variables are displayed first, followed by categorical variables. var_label_df A data.frame or tibble with columns "variable" and "label" that contains display labels for each variable specified in normal_vars, non_normal_vars, and cat_vars. Details Statistical differences between normally distributed continuous variables are assessed using aov(), differences in non-normally distributed variables are assessed using kruskal.test(), and differences in categorical variables are assessed using chisq.test() by default, with a user option for fisher.test() instead. Value A data.frame with columns label, overall, a column for each level of compare, and p.value. For normal_vars, mean (SD) is displayed, for non_normal_vars median (IQR) is displayed, and for cat_vars n (percent) is displayed. For p values on continuous variables, a superscript ’a’ denotes that the Kruskal-Wallis test was used. Examples bivariate_compare(iris, compare = "Species", normal_vars = c("Sepal.Length", "Sepal.Width")) bivariate_compare(mtcars, compare = "cyl", non_normal_vars = "mpg") cor.prob Calculate pairwise correlations Description Internal function to calculate pairwise correlations and return p values Usage cor.prob(df) Arguments df A data frame or tibble. Value A data.frame with columns h_var, v_var, and p.value describedata describedata: Miscellaneous descriptive and SAS/Stata duplicate functions Description The describedata package contains descriptive functions for tasks such as making print-friendly bivariate tables, sample size flow counts, and more. It also contains R approximations of some common, useful SAS/Stata functions. Frequency functions The helper functions bivariate_compare and univar_freq create frequency tables. univar_freq produces simple n and percent for categories of a single variable, while bivariate_compare compares continuous or categorical variables across categories of a comparison variable. This is particularly useful for generating a Table 1 or 2 for a publication manuscript. Sample size functions sample_flow produces tables illustrating how final sample size is determined and the number of participants excluded by each exclusion criterion. Other helper functions nagelkerke calculates the Nagelkerke pseudo r-squared for a logistic regression model. Stata replica functions ladder, gladder, and pwcorr are approximate replicas of the respective Stata functions. Not all functionality is currently incorporated. stata_tidy reformats R model output to a format similar to Stata. SAS replica functions proc_means is an approximate replica of the respective SAS function. Not all functionality is currently incorporated. gladder Replica of Stata’s gladder function Description Creates ladder-of-powers histograms to visualize nine common transformations and compare each to a normal distribution. The following transformations are included: identity, cubic, square, square root, natural logarithm, inverse square root, inverse, inverse square, and inverse cubic. Usage gladder(x) Arguments x A continuous numeric vector.
Value A ggplot object with plots of each transformation Examples gladder(iris$Sepal.Length) gladder(mtcars$disp) ladder Replica of Stata’s ladder function Description Searches the ladder of powers to find a transformation that makes x normally distributed. The Shapiro-Wilk test is used to assess normality. The following transformations are included: identity, cubic, square, square root, natural logarithm, inverse square root, inverse, inverse square, and inverse cubic. Usage ladder(x) Arguments x A continuous numeric vector. Value A data.frame Examples ladder(iris$Sepal.Length) ladder(mtcars$disp) nagelkerke Calculate Nagelkerke pseudo r-squared Description Calculate Nagelkerke pseudo r-squared from a fitted model object. Usage nagelkerke(mod) Arguments mod A glm model object, usually from logistic regression. The model must have been fit using the data option, in order to extract the data from the model object. Value Numeric value of Nagelkerke r-squared for the model norm_dist_plot Create density histogram with normal distribution overlaid Description Plots a simple density histogram for a continuous variable with a normal distribution overlaid. The overlaid normal distribution has the same mean and standard deviation as the provided variable, and the plot provides a visual means to assess the normality of the variable’s distribution. Usage norm_dist_plot(df, vars) Arguments df A data.frame or tibble. vars A character vector of continuous variable names. Value A ggplot object. Examples norm_dist_plot(df = iris, vars = "Sepal.Width") norm_dist_plot(df = iris, vars = c("Sepal.Width", "Sepal.Length")) proc_means Replica of SAS’s PROC MEANS Description Descriptive statistics for continuous variables, with the option of stratifying by a categorical variable. Usage proc_means(df, vars = NULL, var_order = NULL, by = NULL, n = TRUE, mean = TRUE, sd = TRUE, min = TRUE, max = TRUE, median = FALSE, q1 = FALSE, q3 = FALSE, iqr = FALSE, nmiss = FALSE, nobs = FALSE, p = FALSE, p_round = 4, display_round = 3) Arguments df A data frame or tibble. vars Character vector of numeric variables to generate descriptive statistics for. If the default (NULL), all variables are included, except for any specified in by. var_order Character vector listing the variable names in the order results should be displayed. If the default (NULL), variables are displayed in the order specified in vars. by Discrete variable. Separate statistics will be produced for each level. Default NULL provides statistics for all observations. n logical. Display number of rows with values. Default TRUE. mean logical. Display mean value. Default TRUE. sd logical. Display standard deviation. Default TRUE. min logical. Display minimum value. Default TRUE. max logical. Display maximum value. Default TRUE. median logical. Display median value. Default FALSE. q1 logical. Display first quartile value. Default FALSE. q3 logical. Display third quartile value. Default FALSE. iqr logical. Display interquartile range. Default FALSE. nmiss logical. Display number of missing values. Default FALSE. nobs logical. Display total number of rows. Default FALSE. p logical. Calculate p-value across by groups using aov. Ignored if no by variable specified. Default FALSE. p_round Number of decimal places p-values should be rounded to. display_round Number of decimal places displayed values should be rounded to Value A data.frame with columns variable, by variable, and a column for each summary statistic.
Examples proc_means(iris, vars = c("Sepal.Length", "Sepal.Width")) proc_means(iris, by = "Species") pwcorr Replica of Stata’s pwcorr function Description Calculate and return a matrix of pairwise correlation coefficients. Returns significance levels if method == "pearson". Usage pwcorr(df, vars = NULL, method = "pearson", var_label_df = NULL) Arguments df A data.frame or tibble. vars A character vector of numeric variables to generate pairwise correlations for. If the default (NULL), all variables are included. method One of "pearson", "kendall", or "spearman" passed on to "cor". var_label_df A data.frame or tibble with columns "variable" and "label" that contains display labels for each variable specified in vars. Value A data.frame displaying the pairwise correlation coefficients between all variables in vars. sample_flow Create table illustrating sample exclusions Description Generate a table illustrating sequential exclusion from an analytical sample due to user specified exclusions. Usage sample_flow(df, exclusions = c()) Arguments df A data.frame or tibble. exclusions Character vector of logical conditions indicating which rows should be excluded from the final sample. Exclusions occur in the order specified. Value A data.frame with columns Exclusion, ’Sequential Excluded’, and ’Total Excluded’ for display. stata_tidy Tidy model output into similar format from Stata Description Create a display data frame similar to Stata model output for a fitted R model. Usage stata_tidy(mod, var_label_df = NULL) Arguments mod A fitted model object var_label_df A data.frame or tibble with columns "variable" and "label" that contains display labels for each variable in mod. Value A data.frame with columns term and display univar_freq Univariate statistics for a discrete variable Description Descriptive statistics (n, percent) for categories of a single discrete variable. Usage univar_freq(df, var, na.rm = FALSE) Arguments df A data frame or tibble. var A discrete, numeric variable. na.rm logical. Should missing values (including NaN) be removed? Value A data.frame with columns var, NObs, and Percent Examples univar_freq(iris, var = "Species") univar_freq(mtcars, var = "cyl")
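To make the pseudo r-squared concrete, the following sketch reproduces the Nagelkerke computation by hand from a fitted glm using the standard Cox and Snell / Nagelkerke formulas. It is not a copy of the package internals, and the model formula is illustrative only.

```r
# Minimal sketch: Nagelkerke pseudo r-squared computed by hand.
# glm stores -2*logLik as deviance (fitted) and null.deviance (intercept-only).
mod <- glm(am ~ mpg, data = mtcars, family = binomial)

n <- nrow(mod$model)
cox_snell <- 1 - exp((mod$deviance - mod$null.deviance) / n)  # Cox & Snell R^2
cox_snell / (1 - exp(-mod$null.deviance / n))                 # Nagelkerke R^2
# should agree with describedata::nagelkerke(mod)
```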
RKelly
cran
R
Package ‘RKelly’ October 12, 2022 Type Package Title Translate Odds and Probabilities Version 1.0 Description Calculates the Kelly criterion (Kelly, J.L. (1956) <doi:10.1002/j.1538-7305.1956.tb03809.x>) for bets given quoted prices, model predictions and commissions. Additionally it contains helper functions to calculate the probabilities for wins and draws in multi-leg games. License MIT + file LICENSE Encoding UTF-8 LazyData true RoxygenNote 6.1.1 Suggests testthat, knitr, rmarkdown VignetteBuilder knitr NeedsCompilation no Author <NAME> [aut, cre] Maintainer <NAME> <<EMAIL>> Repository CRAN Date/Publication 2019-09-04 13:50:02 UTC R topics documented: chance_to_draw_n_game... 2 chance_to_win_n_game... 2 kelly_back_de... 3 kelly_criterio... 3 kelly_lay_de... 4 chance_to_draw_n_games Calculates the chance to draw out of n matches Description Calculates the chance to draw out of n matches Usage chance_to_draw_n_games(p, n) Arguments p probability of the first (or second) player winning a match n number of matches Value The decimal chance for a draw Examples chance_to_draw_n_games(0.4, 4) # Draw chance if one player has p=0.4 in four matches chance_to_win_n_games Calculate win chance after multiple matches Description Chance of a player winning the majority of n matches. Draws do not count as wins. Usage chance_to_win_n_games(p, n) Arguments p probability for the player to win a single match n number of total matches played Value The decimal chance of winning a game Examples chance_to_win_n_games(0.55,5) # Chance for player with p=0.55 to win best of 5 matches kelly_back_dec Kelly for back bet Description Kelly for back bet Usage kelly_back_dec(price, p, commision_rate) Arguments price Price to back in decimal odds p Probability of the event materialising commision_rate Rate of commission charged on WINNINGS Value Kelly optimised fraction of stake relative to bank Examples kelly_back_dec(2,0.5,0.05) kelly_criterion The Kelly criterion Description The Kelly criterion Usage kelly_criterion(p, alpha_w, alpha_l) Arguments p The objective probability of the event alpha_w The return multiplier in case of the event happening alpha_l The return multiplier in case of the event not happening Value The Kelly optimised fraction of the bankroll that should be bet References Thorp, <NAME>. (1997; revised 1998). The Kelly Criterion in Blackjack, Sports Betting, and the Stock Market. http://www.eecs.harvard.edu/cs286r/courses/fall12/papers/Thorpe_KellyCriterion2007.pdf Examples kelly_criterion(0.5,1,1) kelly_lay_dec Kelly for lay bet Description Kelly for lay bet Usage kelly_lay_dec(price, p, commision_rate) Arguments price Price at which to lay p Base probability of the event that is being laid commision_rate Rate of commission charged on WINNINGS Value Kelly optimised fraction of stake relative to bank
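The helpers above follow standard textbook definitions, so their logic can be sketched directly. The sketch below is hedged: it assumes independent matches for the multi-leg helpers and the usual log-utility setup for the Kelly criterion, and it is not a copy of the package internals.

```r
# Multi-leg helpers: binomial-tail calculations (independent matches assumed).
p <- 0.55; n <- 5
pbinom(floor(n / 2), size = n, prob = p, lower.tail = FALSE)
# should agree with chance_to_win_n_games(0.55, 5):
# winning the majority is the upper tail of a Binomial(n, p)

n_even <- 4; p2 <- 0.4
dbinom(n_even / 2, size = n_even, prob = p2)
# should agree with chance_to_draw_n_games(0.4, 4):
# a draw over an even number of matches means exactly n/2 wins each

# Kelly criterion: maximising p*log(1 + f*alpha_w) + (1-p)*log(1 - f*alpha_l)
# over the stake fraction f gives the closed form f* = p/alpha_l - (1-p)/alpha_w.
kelly_closed_form <- function(p, alpha_w, alpha_l) p / alpha_l - (1 - p) / alpha_w
kelly_closed_form(0.5, 1, 1)  # 0, matching kelly_criterion(0.5, 1, 1)
```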
@reframe/default-kit
npm
JavaScript
Reframe's default kit `@reframe/default-kit`
===

The default kit includes:

* [@reframe/react](https://github.com/reframejs/reframe/blob/HEAD/react)
* [@reframe/path-to-regexp](https://github.com/reframejs/reframe/blob/HEAD/path-to-regexp)
* And other things. (See the default kit's source code to see these things.)

### Usage

The default kit is included by default and you have to opt out if you don't want it:

```
// reframe.config.js
module.exports = {
    skipDefaultKit: true,
    plugins: [
        // You will need to include:
        // - A renderer (e.g. `@reframe/react`)
        // - A router (e.g. `@reframe/path-to-regexp` or `@reframe/crossroads`)
        // - And more stuff. Look at the source code of `@reframe/default-kit`.
    ],
};
```

In certain cases, when customizing Reframe, you will need to add the default kit yourself:

```
// reframe.config.js
const defaultKit = require('@reframe/default-kit'); // npm install @reframe/default-kit

module.exports = {
    plugins: [
        defaultKit()
    ],
};
```
qrious
npm
JavaScript
``` .d88888b. 8888888b. d8b d88P" "Y88b 888 Y88b Y8P 888 888 888 888 888 888 888 d88P 888 .d88b. 888 888 .d8888b 888 888 8888888P" 888 d88""88b 888 888 88K 888 Y8b 888 888 T88b 888 888 888 888 888 "Y8888b. Y88b.Y8b88P 888 T88b 888 Y88..88P Y88b 888 X88 "Y888888" 888 T88b 888 "Y88P" "Y88888 88888P' Y8b ``` [QRious](https://github.com/neocotic/qrious) is a pure JavaScript library for generating QR codes using HTML5 canvas. * [Install](#install) * [Examples](#examples) * [API](#api) * [Migrating from older versions](#migrating-from-older-versions) * [Bugs](#bugs) * [Contributors](#contributors) * [License](#license) Install --- Install using the package manager for your desired environment(s): ``` $ npm install --save qrious# OR:$ bower install --save qrious ``` If you want to simply download the file to be used in the browser you can find them below: * [Development Version](https://cdnjs.cloudflare.com/ajax/libs/qrious/4.0.2/qrious.js) (71kb - [Source Map](https://cdnjs.cloudflare.com/ajax/libs/qrious/4.0.2/qrious.js.map)) * [Production Version](https://cdnjs.cloudflare.com/ajax/libs/qrious/4.0.2/qrious.min.js) (18kb - [Source Map](https://cdnjs.cloudflare.com/ajax/libs/qrious/4.0.2/qrious.min.js.map)) Check out [node-qrious](https://github.com/neocotic/node-qrious) if you want to install it for use within [Node.js](https://nodejs.org). Examples --- ``` <!DOCTYPE html><html> <body> <canvas id="qr"></canvas> <script src="/path/to/qrious.js"></script> <script> (function() { var qr = new QRious({ element: document.getElementById('qr'), value: 'https://github.com/neocotic/qrious' }); })(); </script> </body></html> ``` Open up `demo.html` in your browser to play around a bit. API --- Simply create an instance of `QRious` and you've done most of the work. You can control many aspects of the QR code using the following fields on your instance: | Field | Type | Description | Default | Read Only | | --- | --- | --- | --- | --- | | background | String | Background color of the QR code | `"white"` | No | | backgroundAlpha | Number | Background alpha of the QR code | `1.0` | No | | element | Element | Element to render the QR code | `<canvas>` | Yes | | foreground | String | Foreground color of the QR code | `"black"` | No | | foregroundAlpha | Number | Foreground alpha of the QR code | `1.0` | No | | level | String | Error correction level of the QR code (L, M, Q, H) | `"L"` | No | | mime | String | MIME type used to render the image for the QR code | `"image/png"` | No | | padding | Number | Padding for the QR code (pixels) | `null` (auto) | No | | size | Number | Size of the QR code (pixels) | `100` | No | | value | String | Value encoded within the QR code | `""` | No | ``` var qr = new QRious();qr.background = 'green';qr.backgroundAlpha = 0.8;qr.foreground = 'blue';qr.foregroundAlpha = 0.8;qr.level = 'H';qr.padding = 25;qr.size = 500;qr.value = 'https://github.com/neocotic/qrious'; ``` The QR code will automatically update when you change one of these fields, so be wary when you plan on changing lots of fields at the same time. 
You probably want to make a single call to `set(options)` instead as it will only update the QR code once:

```
var qr = new QRious();
qr.set({
  background: 'green',
  backgroundAlpha: 0.8,
  foreground: 'blue',
  foregroundAlpha: 0.8,
  level: 'H',
  padding: 25,
  size: 500,
  value: 'https://github.com/neocotic/qrious'
});
```

These can also be passed as options to the constructor itself:

```
var qr = new QRious({
  background: 'green',
  backgroundAlpha: 0.8,
  foreground: 'blue',
  foregroundAlpha: 0.8,
  level: 'H',
  padding: 25,
  size: 500,
  value: 'https://github.com/neocotic/qrious'
});
```

You can also pass in an `element` option to the constructor which can be used to generate the QR code using an existing DOM element, which is the only time that you can specify read only options. `element` must either be a `<canvas>` element or an `<img>` element which can then be accessed via the `canvas` or `image` fields on the instance respectively. An element will be created for whichever one isn't provided or for both if no `element` is specified, which means that they can be appended to the document at a later time.

```
var qr = new QRious({
  element: document.querySelector('canvas'),
  value: 'https://github.com/neocotic/qrious'
});

qr.canvas.parentNode.appendChild(qr.image);
```

A reference to the `QRious` instance is also stored on both of the elements for convenience.

```
var canvas = document.querySelector('canvas');
var qr = new QRious({
  element: canvas,
  value: 'https://github.com/neocotic/qrious'
});

qr === canvas.qrious;
//=> true
```

### `toDataURL([mime])`

Generates a base64 encoded data URI for the QR code. If you don't specify a MIME type, it will default to the one passed to the constructor as an option or the default value for the `mime` option.

```
var qr = new QRious({
  value: 'https://github.com/neocotic/qrious'
});

qr.toDataURL();
//=> "data:image/png;base64,iVBOR...AIpqDnseH86KAAAAAElFTkSuQmCC"
qr.toDataURL('image/jpeg');
//=> "data:image/jpeg;base64,/9j/...xqAqIqgKFAAAAAq3RRQAUUUUAf/Z"
```

Migrating from older versions
---

If you've been using an older major version and would like details on what's changed and how to migrate to the latest major release, see:

<https://github.com/neocotic/qrious/wiki/Migrating-from-older-versions>

Bugs
---

If you have any problems with QRious or would like to see changes currently in development you can do so [here](https://github.com/neocotic/qrious/issues). Core features and issues are maintained separately [here](https://github.com/neocotic/qrious-core/issues).

Contributors
---

If you want to contribute, you're a legend! Information on how you can do so can be found in [CONTRIBUTING.md](https://github.com/neocotic/qrious/blob/master/CONTRIBUTING.md). We want your suggestions and pull requests! A list of QRious contributors can be found in [AUTHORS.md](https://github.com/neocotic/qrious/blob/master/AUTHORS.md).

License
---

Copyright © 2017 <NAME> Copyright © 2010 <NAME>

See [LICENSE.md](https://github.com/neocotic/qrious/blob/master/LICENSE.md) for more information on our GPLv3 license.

Readme
---

### Keywords

* qr
* code
* encode
* canvas
* image
github.com/tevino/abool/v2
go
Go
Documentation [¶](#section-documentation)
---

### Overview [¶](#pkg-overview)

Package abool provides an atomic Boolean type for cleaner code and better performance.

### Index [¶](#pkg-index)

* [type AtomicBool](#AtomicBool)
  + [func New() *AtomicBool](#New)
  + [func NewBool(ok bool) *AtomicBool](#NewBool)
  + [func (ab *AtomicBool) IsNotSet() bool](#AtomicBool.IsNotSet)
  + [func (ab *AtomicBool) IsSet() bool](#AtomicBool.IsSet)
  + [func (ab *AtomicBool) MarshalJSON() ([]byte, error)](#AtomicBool.MarshalJSON)
  + [func (ab *AtomicBool) Set()](#AtomicBool.Set)
  + [func (ab *AtomicBool) SetTo(yes bool)](#AtomicBool.SetTo)
  + [func (ab *AtomicBool) SetToIf(old, new bool) (set bool)](#AtomicBool.SetToIf)
  + [func (ab *AtomicBool) UnSet()](#AtomicBool.UnSet)
  + [func (ab *AtomicBool) UnmarshalJSON(b []byte) error](#AtomicBool.UnmarshalJSON)

#### Examples [¶](#pkg-examples)

* [AtomicBool](#example-AtomicBool)

### Constants [¶](#pkg-constants)

This section is empty.

### Variables [¶](#pkg-variables)

This section is empty.

### Functions [¶](#pkg-functions)

This section is empty.

### Types [¶](#pkg-types)

#### type [AtomicBool](https://github.com/tevino/abool/blob/v2.1.0/v2/bool.go#L27) [¶](#AtomicBool)

```
type AtomicBool [int32](/builtin#int32)
```

AtomicBool is an atomic Boolean. Its methods are all atomic, thus safe to be called by multiple goroutines simultaneously. Note: When embedding into a struct one should always use *AtomicBool to avoid copying.

Example [¶](#example-AtomicBool)

```
cond := New() // default to false
any := true
old := any
new := !any

cond.Set()             // Sets to true
cond.IsSet()           // Returns true
cond.UnSet()           // Sets to false
cond.IsNotSet()        // Returns true
cond.SetTo(any)        // Sets to whatever you want
cond.SetToIf(old, new) // Sets to `new` only if the Boolean matches the `old`, returns whether succeeded
```

```
Output:
```

#### func [New](https://github.com/tevino/abool/blob/v2.1.0/v2/bool.go#L11) [¶](#New)

```
func New() *[AtomicBool](#AtomicBool)
```

New creates an AtomicBool with default set to false.

#### func [NewBool](https://github.com/tevino/abool/blob/v2.1.0/v2/bool.go#L16) [¶](#NewBool)

```
func NewBool(ok [bool](/builtin#bool)) *[AtomicBool](#AtomicBool)
```

NewBool creates an AtomicBool with the given default value.

#### func (*AtomicBool) [IsNotSet](https://github.com/tevino/abool/blob/v2.1.0/v2/bool.go#L45) [¶](#AtomicBool.IsNotSet)

```
func (ab *[AtomicBool](#AtomicBool)) IsNotSet() [bool](/builtin#bool)
```

IsNotSet returns whether the Boolean is false.

#### func (*AtomicBool) [IsSet](https://github.com/tevino/abool/blob/v2.1.0/v2/bool.go#L40) [¶](#AtomicBool.IsSet)

```
func (ab *[AtomicBool](#AtomicBool)) IsSet() [bool](/builtin#bool)
```

IsSet returns whether the Boolean is true.

#### func (*AtomicBool) [MarshalJSON](https://github.com/tevino/abool/blob/v2.1.0/v2/bool.go#L73) [¶](#AtomicBool.MarshalJSON)

```
func (ab *[AtomicBool](#AtomicBool)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```

MarshalJSON behaves the same as if the AtomicBool is a builtin.bool. NOTE: There's no lock during the process, so it usually shouldn't be called in parallel with other methods.

#### func (*AtomicBool) [Set](https://github.com/tevino/abool/blob/v2.1.0/v2/bool.go#L30) [¶](#AtomicBool.Set)

```
func (ab *[AtomicBool](#AtomicBool)) Set()
```

Set sets the Boolean to true.
#### func (*AtomicBool) [SetTo](https://github.com/tevino/abool/blob/v2.1.0/v2/bool.go#L50) [¶](#AtomicBool.SetTo)

```
func (ab *[AtomicBool](#AtomicBool)) SetTo(yes [bool](/builtin#bool))
```

SetTo sets the Boolean to the given value.

#### func (*AtomicBool) [SetToIf](https://github.com/tevino/abool/blob/v2.1.0/v2/bool.go#L60) [¶](#AtomicBool.SetToIf)

```
func (ab *[AtomicBool](#AtomicBool)) SetToIf(old, new [bool](/builtin#bool)) (set [bool](/builtin#bool))
```

SetToIf sets the Boolean to new only if the Boolean matches the old. Returns whether the set was done.

#### func (*AtomicBool) [UnSet](https://github.com/tevino/abool/blob/v2.1.0/v2/bool.go#L35) [¶](#AtomicBool.UnSet)

```
func (ab *[AtomicBool](#AtomicBool)) UnSet()
```

UnSet sets the Boolean to false.

#### func (*AtomicBool) [UnmarshalJSON](https://github.com/tevino/abool/blob/v2.1.0/v2/bool.go#L79) [¶](#AtomicBool.UnmarshalJSON)

```
func (ab *[AtomicBool](#AtomicBool)) UnmarshalJSON(b [][byte](/builtin#byte)) [error](/builtin#error)
```

UnmarshalJSON behaves the same as if the AtomicBool is a builtin.bool. NOTE: There's no lock during the process, so it usually shouldn't be called in parallel with other methods.
FunCC
cran
R
Package ‘FunCC’ October 12, 2022 Title Functional Cheng and Church Bi-Clustering Version 1.0 Author <NAME> [aut, cre], <NAME> [aut, cre], <NAME> [aut], <NAME> [aut] Maintainer <NAME> <<EMAIL>> Description The FunCC package allows the user to apply the FunCC algorithm to simultaneously cluster the rows and the columns of a data matrix whose inputs are functions. Depends R (>= 3.5.1) License GPL (>= 3) Encoding UTF-8 LazyData true RoxygenNote 7.0.2 Imports narray, biclust, reshape, RColorBrewer, ggplot2 NeedsCompilation no Repository CRAN Date/Publication 2020-06-08 10:10:02 UTC R topics documented: find_best_delt... 2 funCCdat... 3 funcc_biclus... 4 funcc_show_bicluster_coverag... 5 funcc_show_bicluster_dimensio... 6 funcc_show_bicluster_hscor... 6 funcc_show_block_matri... 7 funcc_show_result... 8 find_best_delta Functional Cheng and Church Algorithm varying the delta value Description The find_best_delta function evaluates the results of the FunCC algorithm in terms of total H-score value, the number of obtained bi-clusters and the number of not assigned elements when varying the delta value Usage find_best_delta( fun_mat, delta_min, delta_max, num_delta = 10, template.type = "mean", theta = 1.5, number = 100, alpha = 0, beta = 0, const_alpha = FALSE, const_beta = FALSE, shift.alignement = FALSE, shift.max = 0.1, max.iter.align = 100 ) Arguments fun_mat The data array (n x m x T) where each entry corresponds to the measure of one observation i, i=1,...,n, for a functional variable m, m=1,...,p, at point t, t=1,...,T delta_min scalar: Minimum value of the maximum of accepted score, should be a real value > 0 delta_max scalar: Maximum value of the maximum of accepted score, should be a real value > 0 num_delta integer: number of delta values to be evaluated between delta_min and delta_max template.type character: type of template required. If template.type=’mean’ the template is evaluated as the average function, if template.type=’medoid’ the template is evaluated as the medoid function. theta scalar: Scaling factor, should be a real value > 1 number integer: Maximum number of iterations alpha binary: if alpha=1 row shift is allowed, if alpha=0 row shift is avoided beta binary: if beta=1 column shift is allowed, if beta=0 column shift is avoided const_alpha logical: indicates if row shift is constrained to be constant const_beta logical: indicates if column shift is constrained to be constant shift.alignement logical: If shift.alignement=TRUE the shift alignment is performed, if shift.alignement=FALSE no alignment is performed shift.max scalar: shift.max controls the maximal allowed shift, at each iteration, in the alignment procedure with respect to the range of curve domains. shift.max must be such that 0<shift.max<1 max.iter.align integer: maximum number of iterations in the alignment procedure Value a dataframe containing for each evaluated delta: Htot_sum (the sum of total H-scores), num_clust (the number of found bi-clusters), not_assigned (the number of not assigned elements) Examples ## Not run: data("funCCdata") find_best_delta(funCCdata,delta_min=0.1,delta_max=20,num_delta=20,alpha=1,beta=0,const_alpha=TRUE) ## End(Not run) funCCdata Simulated data Description funCCdata is a functional dataset displaying block structure Usage data(funCCdata) Format An object of class array of dimension 30 x 7 x 240.
Examples data(funCCdata) funcc_biclust Functional Cheng and Church algorithm Description The funCC algorithm allows to simultaneously cluster the rows and the columns of a data matrix where each entry of the matrix is a function or a time series Usage funcc_biclust( fun_mat, delta, theta = 1, template.type = "mean", number = 100, alpha = 0, beta = 0, const_alpha = FALSE, const_beta = FALSE, shift.alignement = FALSE, shift.max = 0.1, max.iter.align = 100 ) Arguments fun_mat The data array (n x m x T) where each entry corresponds to the measure of one observation i, i=1,...,n, for a functional variable m, m=1,...,p, at point t, t=1,...,T delta scalar: Maximum of accepted score, should be a real value > 0 theta scalar: Scaling factor, should be a real value > 1 template.type character: type of template required. If template.type=’mean’ the template is evaluated as the average function, if template.type=’medoid’ the template is evaluated as the medoid function. number integer: Maximum number of iterations alpha binary: if alpha=1 row shift is allowed, if alpha=0 row shift is avoided beta binary: if beta=1 column shift is allowed, if beta=0 column shift is avoided const_alpha logical: Indicates if row shift is constrained to be constant. const_beta logical: Indicates if column shift is constrained to be constant. shift.alignement logical: If shift.alignement=TRUE the shift alignment is performed, if shift.alignement=FALSE no alignment is performed shift.max scalar: shift.max controls the maximal allowed shift, at each iteration, in the alignment procedure with respect to the range of curve domains. shift.max must be such that 0<shift.max<1 max.iter.align integer: maximum number of iterations in the alignment procedure Value a list of two elements containing respectively the Biclust results and a dataframe containing the parameter settings of the algorithm Examples data("funCCdata") res <- funcc_biclust(funCCdata,delta=10,theta=1,alpha=1,beta=0,const_alpha=TRUE) res funcc_show_bicluster_coverage plotting coverage of each bi-cluster Description funcc_show_bicluster_coverage graphically shows the coverage of each bi-cluster in terms of percentage of included functions Usage funcc_show_bicluster_coverage( fun_mat, res_input, not_assigned = TRUE, max_coverage = 1 ) Arguments fun_mat The data array (n x m x T) where each entry corresponds to the measure of one observation i, i=1,...,n, for a functional variable m, m=1,...,p, at point t, t=1,...,T res_input An object produced by the funcc_biclust function not_assigned logical: if TRUE also the cluster of not assigned elements is included max_coverage scalar: percentage of maximum cumulative coverage to be shown Value a figure representing for each bi-cluster the coverage in terms of percentage of included functions Examples data("funCCdata") res <- funcc_biclust(funCCdata,delta=10,theta=1,alpha=1,beta=0,const_alpha=TRUE) funcc_show_bicluster_coverage(funCCdata,res) funcc_show_bicluster_dimension plotting dimensions of each bi-cluster Description funcc_show_bicluster_dimension graphically shows the dimensions of each bi-cluster (i.e. number of rows and columns) Usage funcc_show_bicluster_dimension(fun_mat, res_input) Arguments fun_mat The data array (n x m x T) where each entry corresponds to the measure of one observation i, i=1,...,n, for a functional variable m, m=1,...,p, at point t, t=1,...,T res_input An object produced by the funcc_biclust function Value a figure representing the dimensions of each bi-cluster (i.e.
number of rows and columns) Examples data("funCCdata") res <- funcc_biclust(funCCdata,delta=10,theta=1,alpha=1,beta=0,const_alpha=TRUE) funcc_show_bicluster_dimension(funCCdata,res) funcc_show_bicluster_hscore plotting hscore of each bi-cluster on bicluster dimension Description funcc_show_bicluster_hscore graphically shows the hscore vs the dimension (i.e. number of rows and columns) of each bi-cluster Usage funcc_show_bicluster_hscore(fun_mat, res_input) Arguments fun_mat The data array (n x m x T) where each entry corresponds to the measure of one observation i, i=1,...,n, for a functional variable m, m=1,...,p, at point t, t=1,...,T res_input An object produced by the funcc_biclust function Value a figure representing the H-score of each bi-cluster plotted against its dimension (i.e. number of rows and columns) Examples data("funCCdata") res <- funcc_biclust(funCCdata,delta=10,theta=1,alpha=1,beta=0,const_alpha=TRUE) funcc_show_bicluster_hscore(funCCdata,res) funcc_show_block_matrix Plotting co-clustering results of funCC on the data matrix Description funcc_show_block_matrix graphically shows the bi-clusters positions in the original data matrix Usage funcc_show_block_matrix(fun_mat, res_input) Arguments fun_mat The data array (n x m x T) where each entry corresponds to the measure of one observation i, i=1,...,n, for a functional variable m, m=1,...,p, at point t, t=1,...,T res_input An object produced by the funcc_biclust function Value a figure representing the bi-clusters positions in the original data matrix Examples data("funCCdata") res <- funcc_biclust(funCCdata,delta=10,theta=1,alpha=1,beta=0,const_alpha=TRUE) funcc_show_block_matrix(funCCdata,res) funcc_show_results Plotting co-clustering results of funCC Description funcc_show_results graphically shows the results of the bi-clustering Usage funcc_show_results( fun_mat, res_input, only.mean = FALSE, aligned = FALSE, warping = FALSE ) Arguments fun_mat The data array (n x m x T) where each entry corresponds to the measure of one observation i, i=1,...,n, for a functional variable m, m=1,...,p, at point t, t=1,...,T res_input An object produced by the funcc_biclust function only.mean logical: if TRUE only the template functions for each bi-cluster are displayed aligned logical: if TRUE the aligned functions are displayed warping logical: if TRUE a figure representing the warping functions is also displayed Value a figure representing each bi-cluster in terms of the functions contained in it or their templates Examples data("funCCdata") res <- funcc_biclust(funCCdata,delta=10,theta=1,alpha=1,beta=0,const_alpha=TRUE) funcc_show_results(funCCdata,res)
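A typical workflow is to tune delta with find_best_delta and then run funcc_biclust with the chosen value. The sketch below is hedged: the selection rule is one plausible choice, and the column names (Htot_sum, num_clust, not_assigned) are those documented in the Value section of find_best_delta above.

```r
# Minimal sketch: tune delta, then run the final bi-clustering.
library(FunCC)
data("funCCdata")

## Not run: (the tuning loop evaluates 20 candidate deltas and is slow)
tuning <- find_best_delta(funCCdata, delta_min = 0.1, delta_max = 20,
                          num_delta = 20, alpha = 1, beta = 0,
                          const_alpha = TRUE)
# e.g. keep solutions with no unassigned elements, then minimise total H-score
ok <- tuning[tuning$not_assigned == 0, ]
ok[which.min(ok$Htot_sum), ]
## End(Not run)

res <- funcc_biclust(funCCdata, delta = 10, theta = 1,
                     alpha = 1, beta = 0, const_alpha = TRUE)
funcc_show_results(funCCdata, res)
```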
fontdue
rust
Rust
Crate fontdue
===

Fontdue is a font parser, rasterizer, and layout tool. This is a no_std crate, but it still requires the alloc crate.

Modules
---

* layout: Tools for laying out strings of text.

Structs
---

* Font: Represents a font. Fonts are immutable after creation and own their own copy of the font data.
* FontSettings: Settings for controlling specific font and layout behavior.
* LineMetrics: Metrics associated with line positioning.
* Metrics: Encapsulates all layout information associated with a glyph for a fixed scale.
* OutlineBounds: Defines the bounds for a glyph's outline in subpixels. A glyph's outline is always contained in its bitmap.

Type Definitions
---

* FontResult: Alias for Result<T, &'static str>.

Module fontdue::layout
===

Tools for laying out strings of text.

Structs
---

* CharacterData: Miscellaneous metadata associated with a character to assist in layout.
* GlyphPosition: A positioned scaled glyph.
* GlyphRasterConfig: Configuration for rasterizing a glyph. This struct is also a hashable key that can be used to uniquely identify a rasterized glyph for applications that want to cache glyphs.
* Layout: Text layout requires a small amount of heap usage which is contained in the Layout struct. This context is reused between layout calls. Reusing the Layout struct will greatly reduce memory allocations and is advisable for performance.
* LayoutSettings: Settings to configure how text layout is constrained. Text layout is considered best effort and layout may violate the constraints defined here if they prevent text from being laid out.
* LinePosition: Metrics about a positioned line.
* TextStyle: A style description for a segment of text.

Enums
---

* CoordinateSystem: The direction that the Y coordinate increases in. Layout needs to be aware of your coordinate system to place the glyphs correctly.
* HorizontalAlign: Horizontal alignment options for text when a max_width is provided.
* VerticalAlign: Vertical alignment options for text when a max_height is provided.
* WrapStyle: Wrap style is a hint for how strings of text should be wrapped to the next line. Line wrapping can happen when the max width/height is reached.

Struct fontdue::OutlineBounds
===

```
pub struct OutlineBounds {
    pub xmin: f32,
    pub ymin: f32,
    pub width: f32,
    pub height: f32,
}
```

Defines the bounds for a glyph's outline in subpixels. A glyph's outline is always contained in its bitmap.

Fields
---

* `xmin: f32`: Subpixel offset of the left-most edge of the glyph's outline.
* `ymin: f32`: Subpixel offset of the bottom-most edge of the glyph's outline.
* `width: f32`: The width of the outline in subpixels.
* `height: f32`: The height of the outline in subpixels.

Implementations
---

### impl OutlineBounds

#### pub fn scale(&self, scale: f32) -> OutlineBounds

Scales the bounding box by the given factor.
Trait Implementations
---

### impl Clone for OutlineBounds

#### fn clone(&self) -> OutlineBounds

Returns a copy of the value.

#### fn clone_from(&mut self, source: &Self)

Performs copy-assignment from `source`.

### impl Debug for OutlineBounds

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

### impl Default for OutlineBounds

#### fn default() -> Self

Returns the "default value" for a type.

### impl PartialEq for OutlineBounds

#### fn eq(&self, other: &OutlineBounds) -> bool

This method tests for `self` and `other` values to be equal, and is used by `==`.

#### fn ne(&self, other: &Rhs) -> bool

This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.

### impl Copy for OutlineBounds

### impl StructuralPartialEq for OutlineBounds

Auto Trait Implementations
---

### impl RefUnwindSafe for OutlineBounds
### impl Send for OutlineBounds
### impl Sync for OutlineBounds
### impl Unpin for OutlineBounds
### impl UnwindSafe for OutlineBounds

Blanket Implementations
---

The usual blanket implementations apply: Any, Borrow<T>, BorrowMut<T>, From<T>, Into<U>, ToOwned, TryFrom<U>, and TryInto<U>.

Type Definition fontdue::FontResult
===

```
pub type FontResult<T> = Result<T, &'static str>;
```

Alias for Result<T, &'static str>.
pathml
readthedoc
Python
`PathML` is a toolkit supporting each step in the computational pathology research workflow. We aim to accelerate research, reduce barriers to entry for new researchers, and promote open science in computational pathology. We provide pre-built tools, public datasets, and documentation to allow anyone with beginner proficiency in Python to get started. For advanced users, we provide a modular set of tools which can be composed into custom workflows, as well as complete API documentation to enable implementation of new features or tools on top of `PathML`.

## License

The GNU GPL v2 version of PathML is made available via Open Source licensing. The user is free to use, modify, and distribute under the terms of the GNU General Public License version 2. Commercial license options are also available. Please contact us at <EMAIL>

# Installation

See https://github.com/Dana-Farber-AIOS/pathml/blob/master/README.md#Installation

## Individual Images

The first step in any computational pathology workflow is to load the image from disk. In `PathML` this can be done in one line:

```
wsi = HESlide("../data/CMU-1.svs", name = "example")
```

## Datasets of Images

Using "in-house" datasets from the local filesystem is also supported. Simply initialize a `SlideDataset` object by passing a list of individual `SlideData` objects:

```
from pathlib import Path
from pathml.core import HESlide, SlideDataset

# assuming that all WSIs are in a single directory, all with .svs file extension
data_dir = Path("/path/to/data/")
wsi_paths = list(data_dir.glob("*.svs"))

# create a list of SlideData objects by loading each path
wsi_list = [HESlide(p) for p in wsi_paths]

# initialize a SlideDataset
dataset = SlideDataset(wsi_list)
```

## Supported slide types

All slides are represented as `SlideData` objects. We provide several convenience classes (such as `HESlide`) for loading common types of slides. It is also possible to load a slide by using the generic `SlideData` class and specifying explicitly the slide_type and which backend to use (refer to the table below):

```
wsi = SlideData("../data/CMU-1.svs", name = "example", slide_backend = "openslide", slide_type = types.HE)
```

For more information on specifying `slide_type`, see the full documentation at `SlideType`.

## Supported file formats

Whole-slide images can come in a variety of file formats, depending on the type of image and the scanner used. `PathML` has several backends for loading images, enabling support for a wide variety of data formats. All backends use the same API for interfacing with other parts of `PathML`.
Choose the appropriate backend for the file format:

| Backend | Supported file types |
| --- | --- |
| OpenSlide | .svs, .tif, .tiff, .bif, .ndpi, .vms, .vmu, .scn, .mrxs, .svslide |
| DICOM (Digital Imaging and Communications in Medicine) | .dcm, .dicom |
| Bio-Formats (supports almost all commonly used file formats, including multiparametric and volumetric TIFF files) | .1sc, .2fl, .acff, .afi, .afm, .aim, .al3d, .ali, .am, .amiramesh, .apl, .arf, .avi, .bif, .bin, .bip, .bmp, .c01, .cfg, .ch5, .cif, .cr2, .crw, .cxd, .czi, .dat, .db, .dib, .dm2, .dm3, .dm4, .dti, .dv, .eps, .epsi, .exp, .fdf, .fff, .ffr, .fits, .fli, .frm, .gel, .grey, .hdr, .hed, .his, .htd, .hx, .i2i, .ics, .ids, .im3, .img, .ims, .inr, .ipl, .ipm, .ipw, .j2k, .jp2, .jpf, .jpk, .jpx, .klb, .l2d, .labels, .lei, .lif, .liff, .lim, .lms, .lsm, .map, .mdb, .mnc, .mng, .mod, .mov, .mrc, .mrcs, .mrw, .msr, .mtb, .mvd2, .naf, .nd, .nef, .nhdr, .nii, .nii.gz, .nrrd, .obf, .obsep, .oib, .oif, .oir, .ome, .ome.btf, .ome.tf2, .ome.tf8, .ome.tif, .ome.tiff, .ome.xml, .par, .pbm, .pcoraw, .pcx, .pds, .pgm, .pic, .pict, .png, .pnl, .ppm, .pr3, .ps, .psd, .qptiff, .r3d, .raw, .rcpnl, .rec, .scn, .sdt, .seq, .sif, .sld, .sm2, .sm3, .spc, .spe, .spi, .st, .stk, .stp, .sxm, .tfr, .tga, .tif, .tiff, .tnb, .top, .vff, .vsi, .vws, .wat, .wlz, .wpi, .xdce, .xml, .xqd, .xqf, .xv, .xys, .zfp, .zfr, .zvi |

Preprocessing pipelines define how raw images are transformed and prepared for downstream analysis. The `pathml.preprocessing` module provides tools to define modular preprocessing pipelines for whole-slide images. In this section we will walk through how to define a `Pipeline` object by composing pre-made `Transform` objects, and how to implement a new custom `Transform`.

## What is a Transform?

The `Transform` is the building block for creating preprocessing pipelines. Each `Transform` applies a specific operation to a `Tile` which may include modifying an input image, creating or modifying pixel-level metadata (i.e., masks), or creating or modifying image-level metadata (e.g., image quality metrics or an AnnData counts matrix).

## What is a Pipeline?

A preprocessing pipeline is a set of independent operations applied sequentially. In `PathML`, a `Pipeline` is defined as a sequence of `Transform` objects. This makes it easy to compose a custom `Pipeline` by mixing-and-matching. In the `PathML` API, this is concise: a `Pipeline` is constructed from a list of `Transform` objects, which are applied in order. For example, a pipeline could first apply a box blur kernel, and then apply tissue detection.

## Creating custom Transforms

For advanced users

In some cases, you may want to implement a custom `Transform`. For example, you may want to apply a transformation which is not already implemented in `PathML`. Or, perhaps you want to create a new transformation which combines several others. To define a new custom `Transform`, all you need to do is create a class which inherits from `Transform` and implements an `apply()` method which takes a `Tile` as an argument and modifies it in place. You may also implement a functional method `F()`, although that is not strictly required.
For example, let’s take a look at how `BoxBlur` is implemented:

```
import cv2  # OpenCV, required by the functional implementation below

class BoxBlur(Transform):
    """Box (average) blur kernel."""
    def __init__(self, kernel_size=5):
        self.kernel_size = kernel_size

    def F(self, image):
        return cv2.boxFilter(image, ksize = (self.kernel_size, self.kernel_size), ddepth = -1)

    def apply(self, tile):
        tile.image = self.F(tile.image)
```

Once you define your custom `Transform`, you can plug it into a `Pipeline` alongside any of the pre-made `Transforms`.

## How it works

Whole-slide images are typically too large to load in memory, and computational requirements scale poorly with image size. `PathML` therefore runs preprocessing on smaller regions of the image which can be held in RAM, and then aggregates the results at the end. Preprocessing pipelines are defined in `Pipeline` objects. When `SlideData.run()` is called, `Tile` objects are lazily extracted from the slide by the `SlideData.generate_tiles()` method and passed to the `Pipeline.apply()` method, which modifies the tiles in place. Finally, all processed tiles are aggregated into a single `h5py.Dataset` array and a PyTorch Dataset is generated. Each tile is processed independently, and this data-parallel design makes it easy to utilize computational resources and scale up to large datasets of gigapixel-scale whole-slide images.

## Preprocessing a single WSI

Get started by loading a WSI from disk and running a preprocessing pipeline:

```
wsi = HESlide("../data/CMU-1.svs", name = "example")
wsi.run(pipeline)  # `pipeline` is a Pipeline object, as defined above
```

## Preprocessing a dataset of WSI

Pipelines can also be run on entire datasets, with no change to the code. Here we create a mock `SlideDataset` and run the same pipeline as above:

```
# create demo dataset
n = 10
slide_list = [HESlide("../data/CMU-1.svs", name = "example") for _ in range(n)]
slide_dataset = SlideDataset(slide_list)

slide_dataset.run(pipeline)
```

## Distributed processing

When running a pipeline, `PathML` will use multiprocessing by default to distribute the workload to all available cores. This allows users to efficiently process large datasets by scaling up computational resources (local cluster, cloud machines, etc.) without needing to make any changes to the code. It also makes it feasible to run preprocessing pipelines on less powerful machines, e.g. laptops for quick prototyping.

We use dask.distributed as the backend for multiprocessing. Jobs are submitted to a `Client`, which takes care of sending them to available resources and collecting the results. By default, `PathML` creates a local cluster. Several libraries exist for creating `Clients` on different systems, e.g.:

* dask-kubernetes for Kubernetes
* dask-jobqueue for common job queuing systems including PBS, Slurm, MOAB, SGE, LSF, and HTCondor, typically found in high performance supercomputers, academic research institutions, etc.
* dask-yarn for Hadoop YARN clusters

To take full advantage of available computational resources, users must initialize the appropriate `Client` object for their system and pass it as an argument to `SlideData.run()` or `SlideDataset.run()`. Please refer to the Dask documentation linked above for complete information on creating the `Client` object to suit your needs.

For advanced users

## Overview

A single whole-slide image may contain on the order of 10¹⁰ pixels, making it infeasible to process entire images in RAM.
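To make that scale concrete, a rough back-of-envelope calculation (the slide dimensions are assumptions for illustration only):

```
# back-of-envelope: a 100,000 x 100,000 pixel RGB slide at 8 bits per channel
pixels = 100_000 * 100_000            # 10**10 pixels
ram_bytes = pixels * 3                # 3 channels x 1 byte each
print(f"{ram_bytes / 1e9:.0f} GB")    # ~30 GB uncompressed, impractical to hold in RAM
```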
`PathML` supports efficient manipulation of large-scale imaging data via the h5path format, a hierarchical data structure which allows users to access small regions of the processed WSI without loading the entire image. This feature reduces the RAM required to run a `PathML` workflow (pipelines can be run on a consumer laptop), simplifies the reading and writing of processed WSIs, improves data exploration utilities, and enables fast reading for downstream tasks (e.g. PyTorch Dataloaders). Since slides are managed on disk, your drive must have sufficient storage. Performance will benefit from storage with fast read/write (SSD, NVMe).

## How it Works

Each `SlideData` object is backed by an `.h5path` file on disk. All interaction with the `.h5path` file is handled automatically by the `h5pathManager`. For example, when a user calls `slidedata.tiles[tile_key]`, the `h5pathManager` will retrieve the tile from disk and return it, without the user needing to worry about accessing the HDF5 file themselves. As tiles are extracted and passed to a preprocessing pipeline, the `h5pathManager` also handles aggregating the processed tiles into the `.h5path` file. At the conclusion of preprocessing, the h5py object can optionally be permanently written to disk in `.h5path` format via the `SlideData.write()` method.

## About HDF5

The internals of `PathML` as well as the `.h5path` file format are based on the hierarchical data format HDF5, implemented by h5py. HDF5 format consists of 3 types of elements:

`Groups` are container-like and can be queried like dictionaries:

```
import h5py
root = h5py.File('path/to/file.h5path', 'r')
masks = root['masks']
```

`Datasets` can be treated like `numpy.ndarray` objects:

Important: To retrieve a `numpy.ndarray` object from a `h5py.Dataset` you must slice the Dataset with NumPy fancy-indexing syntax: for example [...] to retrieve the full array, or [a:b, ...] to return the array with the first dimension sliced to the interval [a, b].

```
import h5py
root = h5py.File('path/to/file.h5path', 'r')
im = root['tiles']['(0, 0)']['array'][...]
im_slice = root['tiles']['(0, 0)']['array'][0:100, 0:100, :]
```

`Attributes` are stored in a `.attrs` object which can be queried like a dictionary:

```
import h5py
root = h5py.File('path/to/file.h5path', 'r')
tile_shape = root['tiles'].attrs['tile_shape']
```

## `.h5path` File Format

h5path utilizes a self-describing hierarchical file system similar to `SlideData`. Here we examine the h5path file format in detail:

```
root/ (Group)
├── fields/ (Group)
│   ├── name (Attribute, str)
│   ├── shape (Attribute, tuple)
│   ├── labels (Group)
│   │   ├── label1 (Attribute, [str, int, float, array])
│   │   ├── label2 (Attribute, [str, int, float, array])
│   │   └── etc...
│   └── slide_type (Group)
│       ├── stain (Attribute, str)
│       ├── tma (Attribute, bool)
│       ├── rgb (Attribute, bool)
│       ├── volumetric (Attribute, bool)
│       └── time_series (Attribute, bool)
├── masks/ (Group)
│   ├── mask1 (Dataset, array)
│   ├── mask2 (Dataset, array)
│   └── etc...
├── counts (Group)
│   └── `.h5ad` format
└── tiles/ (Group)
    ├── tile_shape (Attribute, tuple)
    ├── tile_stride (Attribute, tuple)
    ├── tile_key1/ (Group)
    │   ├── array (Dataset, array)
    │   ├── masks/ (Group)
    │   │   ├── mask1 (Dataset, array)
    │   │   ├── mask2 (Dataset, array)
    │   │   └── etc...
    │   ├── coords (Attribute, tuple)
    │   ├── name (Attribute, str)
    │   └── labels/ (Group)
    │       ├── label1 (Attribute, [str, int, float, array])
    │       ├── label2 (Attribute, [str, int, float, array])
    │       └── etc...
    ├── tile_key2/ (Group)
    │   └── etc...
    └── etc...
```
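Given this layout, a processed slide can also be inspected directly with h5py; a minimal sketch (the file path is hypothetical):

```
import h5py

# walk the hierarchy of a processed slide and print each Group/Dataset
with h5py.File('path/to/file.h5path', 'r') as root:
    root.visititems(lambda name, obj: print(name, type(obj).__name__))
    # slide-level tiling metadata lives in attributes of the tiles/ group
    print(dict(root['tiles'].attrs))  # e.g. tile_shape, tile_stride
```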
Slide-level metadata is stored in the `fields/` group. Slide-level counts matrix metadata is stored in the `counts/` group.

The `tiles/` group stores tile-level data. Each tile occupies its own group, and tile coordinates are used as keys for indexing tiles within the `tiles/` group. Within each tile’s group, the `array` dataset contains the tile image, the `masks/` group contains tile-level masks, and other metadata including name, labels, and coords are stored as attributes. Slide-level metadata about tiling, including tile shape and stride, are stored as attributes in the `tiles/` group.

Whole-slide masks are stored in the `masks/` group. All masks are enforced to be the same shape as the image array. However, when running a pipeline, these masks are moved to the tile-level and stored within the tile groups. The slide-level masks are therefore not saved when calling `SlideData.write()`.

We use `float16` as the data type for all Datasets.

Note: Be aware that the `h5path` format specification may change between major versions.

## Reading and Writing

`SlideData` objects are easily written to h5path format by calling `SlideData.write()`. All files with `.h5` or `.h5path` extensions are automatically loaded as `SlideData` objects.

The `pathml.datasets` module provides easy access to common datasets for standardized model evaluation and comparison.

## DataModules

`PathML` uses `DataModules` to encapsulate datasets. DataModule objects are responsible for downloading the data (if necessary) and formatting the data into `DataSet` and `DataLoader` objects for use in downstream tasks. Keeping everything in a single object is easier for users and also facilitates reproducibility. Inspired by PyTorch Lightning.

## Using public datasets

PathML has built-in support for several public datasets:

* PanNuke
* DeepFocus (<NAME>., <NAME>., <NAME>., <NAME>., 2018, October. DeepFocus: Detection of out-of-focus regions in whole slide digital images using deep learning. PLOS One 13(10): e0205387.)

After running a preprocessing pipeline and writing the resulting `.h5path` file to disk, the next step is to create a DataLoader for feeding tiles into a machine learning model in PyTorch. To do this, use the `TileDataset` class and then wrap it in a PyTorch DataLoader:

```
dataset = TileDataset("/path/to/file.h5path")
dataloader = torch.utils.data.DataLoader(dataset, batch_size = 16, shuffle = True, num_workers = 4)
```

Note: Label dictionaries are not standardized, as users are free to store whatever labels they want. For that reason, PyTorch cannot automatically stack labels into batches, and it may be necessary to create a custom `collate_fn` to specify how to create batches of labels.

This provides an interface between PathML and the broader ecosystem of machine learning tools built on PyTorch. For more information on how to use Datasets and DataLoaders, please see the PyTorch documentation and tutorials.

`PathML` comes with model architectures ready to use out of the box. You can also use models from fantastic resources such as torchvision.models and pytorch-image-models (timm).

* U-Net (<NAME>., <NAME>. and <NAME>., 2015, October. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention (pp. 234-241). Springer, Cham.)
* HoVerNet

`PathML` provides support for loading a wide array of imaging modalities and file formats under a standardized syntax. In this vignette, we highlight code snippets for loading a range of image types, from brightfield H&E and IHC to highly multiplexed immunofluorescence, spatial expression, and proteomics, and from small images to gigapixel scale. All images used in these examples are publicly available for download at the links listed above. Note that across the wide diversity of modalities and file formats, the syntax for loading images is consistent (see examples below).

`[1]:`

```
# import utilities for loading images
from pathml.core import HESlide, CODEXSlide, VectraSlide, SlideData, types
```

## Aperio SVS

```
my_aperio_image = HESlide("./data/CMU-1.svs")
```

## Generic tiled TIFF

```
my_generic_tiff_image = HESlide("./data/CMU-1.tiff", backend = "bioformats")
```

## Hamamatsu NDPI

The `labels` field can be used to store slide-level metadata. For example, in this case we store the target gene, which is Ki-67:

`[4]:`

```
my_ndpi_image = SlideData("./data/OS-2.ndpi", labels = {"target" : "Ki-67"}, slide_type = types.IHC)
```

## Hamamatsu VMS

```
my_vms_image = HESlide("./data/CMU-1/CMU-1-40x - 2010-01-12 13.24.05.vms", backend = "openslide")
```

## Leica SCN

```
my_leica_image = HESlide("./data/Leica-1.scn")
```

## MIRAX

```
my_mirax_image = SlideData("./data/Mirax2-Fluorescence-1/Mirax2-Fluorescence-1.mrxs", slide_type = types.IF)
```

## Olympus VSI

Again, we use the `labels` field to store slide-level metadata such as the name of the target gene.

`[8]:`

```
my_olympus_vsi = SlideData("./data/OS-3/OS-3.vsi", labels = {"target" : "PTEN"}, slide_type = types.IHC)
```

## Trestle TIFF

```
my_trestle_tiff = SlideData("./data/CMU-2/CMU-2.tif")
```

## Ventana BIF

`[10]:`

```
my_ventana_bif = SlideData("./data/OS-1.bif")
```

## Zeiss ZVI

Again, we use the `labels` field to store slide-level metadata such as the name of the target gene.

`[11]:`

```
my_zeiss_zvi = SlideData("./data/Zeiss-1-Stacked.zvi", labels = {"target" : "HER-2"}, slide_type = types.IF)
```

## DICOM

```
my_dicom = HESlide("./data/orthanc_example.dcm")
```

## Volumetric + time-series OME-TIFF

```
my_volumetric_timeseries_image = SlideData(
    "./data/tubhiswt-4D/tubhiswt_C1_TP42.ome.tif",
    labels = {"organism" : "C elegans"},
    volumetric = True,
    time_series = True,
    backend = "bioformats"
)
```

## CODEX spatial proteomics

The `labels` field can be used to store whatever slide-level metadata the user wants; here we specify the tissue type.

`[14]:`

```
my_codex_image = CODEXSlide('../../data/reg031_X01_Y01.tif', labels = {"tissue type" : "CRC"});
```

## MERFISH spatial gene expression

```
my_merfish_image = SlideData("./data/aligned_images0.tif", backend = "bioformats")
```

## Visium 10x spatial gene expression

Here we load an image with accompanying expression data in `AnnData` format.

`[16]:`

```
# load the counts matrix of spatial genomics information
import scanpy as sc
adata = sc.read_10x_h5("./data/Visium_FFPE_Mouse_Brain_IF_raw_feature_bc_matrix.h5")

# load the image, with accompanying counts matrix metadata
my_visium_image = SlideData("./data/Visium_FFPE_Mouse_Brain_IF_image.tif", counts=adata, backend = "bioformats")
```

```
Variable names are not unique. To make them unique, call `.var_names_make_unique`.
Variable names are not unique. To make them unique, call `.var_names_make_unique`.
```
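The warning above can be resolved by de-duplicating the variable names, using the call that the warning itself suggests (the same call is used later in this document):

```
# de-duplicate variable (gene) names before downstream analysis
adata.var_names_make_unique()
```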
This notebook gives examples of the stain deconvolution and color normalization tools available in `PathML`.

H&E images are the result of applying two stains to a tissue sample: hematoxylin and eosin. The hematoxylin binds to the cell nuclei and colors them purple, while the eosin binds to the cytoplasm and extracellular matrix, coloring them pink. Stain deconvolution is the process of untangling these two superimposed stains from an H&E image.

Digital pathology images can vary for many reasons, including:

* variation in stain intensity due to inconsistencies of technicians while applying stains to specimens
* variation in image qualities due to differences in slide scanners
* variation due to differences in lighting conditions when the slide is scanned
* etc.

For these reasons, color normalization is a crucial part of any computational pathology workflow. Stain deconvolution can also be used in other ways, due to the different biological properties of the stains. For example, we can apply stain separation and use the hematoxylin channel as input to a nucleus detection algorithm (see the nucleus detection example notebook).

`PathML` comes with two stain deconvolution algorithms out of the box: the Macenko and Vahadane methods (Macenko et al. 2009; Vahadane et al. 2016). As more stain deconvolution methods are incorporated into `PathML`, they will be added here.

`[2]:`

```
import matplotlib.pyplot as plt  # used for plotting below

from pathml.core import HESlide
from pathml.preprocessing import StainNormalizationHE
```

`[3]:`

`fontsize = 20`

OpenSlide Data: This example notebook uses publicly available images from OpenSlide. Download them from http://openslide.cs.cmu.edu/download/openslide-testdata/ if you want to run this notebook locally, or change the filepaths to any whole-slide images that you have locally.

We will pull out a 500px tile to use as an example:

`[4]:`

```
wsi = HESlide("../data/CMU-1-Small-Region.svs")
region = wsi.slide.extract_region(location = (900, 800), size = (500, 500))
```

```
plt.imshow(region)
plt.title('Original image', fontsize=fontsize)
plt.gca().set_xticks([])
plt.gca().set_yticks([])
plt.show()
```

```
fig, axarr = plt.subplots(nrows=2, ncols=3, figsize=(10, 7.5))
for i, method in enumerate(["macenko", "vahadane"]):
    for j, target in enumerate(["normalize", "hematoxylin", "eosin"]):
        # initialize stain normalization object
        normalizer = StainNormalizationHE(target = target, stain_estimation_method = method)
        # apply on example image
        im = normalizer.F(region)
        # plot results
        ax = axarr[i, j]
        ax.imshow(im)
        if j == 0:
            ax.set_ylabel(f"{method} method", fontsize=fontsize)
        if i == 0:
            ax.set_title(target, fontsize = fontsize)
for a in axarr.ravel():
    a.set_xticks([])
    a.set_yticks([])
```

Here we demonstrate a typical workflow for preprocessing of H&E images. The image used in this example is publicly available for download: http://openslide.cs.cmu.edu/download/openslide-testdata/Aperio/

a. Load the image

`[3]:`

```
from pathml.core import SlideData, types

# load the image
wsi = SlideData("../../data/CMU-1.svs", name = "example", slide_type = types.HE)
```

b. Define a preprocessing pipeline

Pipelines are created by composing a sequence of modular transformations; in this example we apply a blur to reduce noise in the image followed by tissue detection (cf. the `Pipeline` sketch in the preprocessing section above).

`[5]:`

c. Run preprocessing

Now that we have constructed our pipeline, we are ready to run it on our WSI. PathML supports distributed computing, speeding up processing by running tiles in parallel among many workers rather than processing each tile sequentially on a single worker.
This is supported by Dask.distributed on the backend, and is highly scalable for very large datasets.

The first step is to create a `Client` object. In this case, we will use a simple cluster running locally; however, Dask supports other setups including Kubernetes, SLURM, etc. See the PathML documentation for more information.

`[6]:`

```
from dask.distributed import Client, LocalCluster

cluster = LocalCluster(n_workers=6)
client = Client(cluster)

wsi.run(pipeline, distributed=True, client=client);
```

e. Save results to disk

The resulting preprocessed data is written to disk, leveraging the HDF5 data specification optimized for efficiently manipulating larger-than-memory data.

`[8]:`

```
wsi.write("./data/CMU-1-preprocessed.h5path")
```

f. Create PyTorch DataLoader

The `DataLoader` provides an interface with any machine learning model built on the PyTorch ecosystem.

`[9]:`

```
from pathml.ml import TileDataset
from torch.utils.data import DataLoader

dataset = TileDataset("./data/CMU-1-preprocessed.h5path")
dataloader = DataLoader(dataset, batch_size = 16, num_workers = 4)
```

Pathology imaging experiments commonly produce data where each channel corresponds to a molecular feature, such as the expression level of a protein or nucleic acid. PathML implements MultiparametricSlide, a subclass of SlideData for which we implement special transforms (for more information about transforms, see “Creating Preprocessing Pipelines” in our documentation). MultiparametricSlide is the appropriate type for analyzing lower-dimensional techniques including immunofluorescence (protein and in situ hybridization/RNAscope).

Recently, multiple approaches to higher-dimensional imaging of ‘spatial omics’, the simultaneous measurement of a large number of molecular features, have emerged (see https://www.nature.com/articles/s41592-020-01033-y and https://pubmed.ncbi.nlm.nih.gov/30078711/, among many others). In this notebook we run a pipeline to analyze CODEX data to demonstrate PathML’s support for multiparametric imaging data. We use the MultiparametricSlide subclass, CODEXSlide, which supports special preprocessing transformations for the CODEX technique. See “Convenience SlideData Classes” (https://pathml.readthedocs.io/en/latest/api_core_reference.html#convenience-slidedata-classes) to see other subclasses that we have implemented.

`[ ]:`

```
# load libraries and data
from pathml.core.slide_data import CODEXSlide
from pathml.preprocessing.pipeline import Pipeline
from pathml.preprocessing.transforms import SegmentMIF, QuantifyMIF, CollapseRunsCODEX

import numpy as np
import matplotlib.pyplot as plt
from dask.distributed import Client
from deepcell.utils.plot_utils import make_outline_overlay
from deepcell.utils.plot_utils import create_rgb_image
import scanpy as sc
import squidpy as sq

import warnings
warnings.filterwarnings('ignore')
%matplotlib inline

slidedata = CODEXSlide('/home/ryc4001/Documents/pathmlproj/nolan_codex/reg031_X01_Y01.tif');
```

Here we analyze a TMA from Schürch et al., Coordinated Cellular Neighborhoods Orchestrate Antitumoral Immunity at the Colorectal Cancer Invasive Front (Cell, 2020). Below are the proteins measured in this TMA and the cell types they label. CODEX images proteins in cycles of 3, so here we list proteins by cycle.
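The protein-by-cycle table itself is not reproduced here, but channel names can be looked up programmatically from the channelNames.txt file that accompanies the dataset; a minimal sketch (the file path and the channel-to-marker indices in the comments follow their usage later in this document):

```
import pandas as pd

# map channel index -> marker name
channelnames = pd.read_csv("data/channelNames.txt", header = None, dtype = str)
print(channelnames.loc[0, 0])   # HOECHST1, a nuclear stain
print(channelnames.loc[31, 0])  # channel 31, used below as the membrane channel (Na-K-ATPase)
```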
```
# These TIFFs are of the form (x, y, z, c, t), where t is used to denote cycles
# 17 z-slices, 4 channels per cycle, 23 cycles, 70 regions
slidedata.slide.shape
```

```
(1920, 1440, 17, 4, 23)
```

## Defining a Multiparametric Pipeline

We define a pipeline that chooses a z-slice from our CODEX image, segments cells in the image, then quantifies the expression of each protein in each cell.

`[ ]:`

```
# 31 -> Na-K-ATPase
pipe = Pipeline([
    CollapseRunsCODEX(z=6),
    SegmentMIF(model='mesmer', nuclear_channel=0, cytoplasm_channel=31, image_resolution=0.377442),
    QuantifyMIF(segmentation_mask='cell_segmentation')
])

client = Client()
slidedata.run(pipe, distributed = False, client = client, tile_size=1000, tile_pad=False);
```

```
img = slidedata.tiles[0].image
```

```
def plot(slidedata, tile, channel1, channel2):
    image = np.expand_dims(slidedata.tiles[tile].image, axis=0)
    nuc_segmentation_predictions = np.expand_dims(slidedata.tiles[tile].masks['nuclear_segmentation'], axis=0)
    cell_segmentation_predictions = np.expand_dims(slidedata.tiles[tile].masks['cell_segmentation'], axis=0)
    #nuc_cytoplasm = np.expand_dims(np.concatenate((image[:,:,:,channel1,0], image[:,:,:,channel2,0]), axis=2), axis=0)
    nuc_cytoplasm = np.stack((image[:,:,:,channel1], image[:,:,:,channel2]), axis=-1)
    rgb_images = create_rgb_image(nuc_cytoplasm, channel_colors=['blue', 'green'])
    overlay_nuc = make_outline_overlay(rgb_data=rgb_images, predictions=nuc_segmentation_predictions)
    overlay_cell = make_outline_overlay(rgb_data=rgb_images, predictions=cell_segmentation_predictions)
    fig, ax = plt.subplots(1, 2, figsize=(15, 15))
    ax[0].imshow(rgb_images[0, ...])
    ax[1].imshow(overlay_cell[0, ...])
    ax[0].set_title('Raw data')
    ax[1].set_title('Cell Predictions')
    plt.show()
```

Let’s check the quality of our segmentations in a 1000x1000 pixel tile by looking at DAPI, Syp, and CD44.

`[6]:`

```
# DAPI + Syp
plot(slidedata, tile=0, channel1=0, channel2=60)

# DAPI + CD44
plot(slidedata, tile=0, channel1=0, channel2=24)
```

## AnnData Integration and Spatial Single Cell Analysis

Now let’s explore the single-cell quantification of our imaging data. Our pipeline produced a single-cell matrix of shape (cell x protein), where each cell has attached additional information including location on the slide and the size of the cell in the image. This information is stored in slidedata.counts as an anndata object (https://anndata.readthedocs.io/en/latest/anndata.AnnData.html).

`[7]:`

```
adata = slidedata.counts.to_memory()
```

`[8]:`

`adata`

`[8]:`

```
AnnData object with n_obs × n_vars = 1102 × 92
    obs: 'x', 'y', 'coords', 'filled_area', 'slice', 'euler_number', 'tile'
    obsm: 'spatial'
    layers: 'min_intensity', 'max_intensity'
```

`[9]:`

`adata.X`

`[9]:`

```
array([[ 85.2459  , 119.18852 , 137.40984 , ...,  18.467213 ,   1.8032787, 133.16394 ],
       [ 99.308334, 111.98333 , 122.60833 , ...,  14.7      ,   1.7583333, 126.225   ],
       [ 41.63498 , 139.92775 , 132.1711  , ...,  14.707224 ,   4.5095057, 131.92015 ],
       ...,
       [ 28.657894, 139.27193 , 133.86842 , ...,  21.307018 ,   4.2017546, 125.20175 ],
       [114.90351 , 123.552635, 119.76316 , ...,  13.078947 ,   0.98245615, 117.02631 ],
       [ 72.8951  , 126.034966, 130.91608 , ...,  20.86014  ,   0.5244755, 126.97203 ]], dtype=float32)
```

`[10]:`

`adata.obs`

`[10]:`
| | x | y | coords | filled_area | slice | euler_number | tile |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 2.770492 | 452.377049 | [array([[ 0, 444], ... | 122 | [(slice(0, 7, None), slice(444, 464, None)) ... | 1 | (0, 0) |
| 1 | 4.050000 | 212.316667 | [array([[ 0, 444], ... | 120 | [(slice(0, 7, None), slice(444, 464, None)) ... | 1 | (0, 0) |
| 2 | 5.813688 | 273.250951 | [array([[ 0, 444], ... | 263 | [(slice(0, 7, None), slice(444, 464, None)) ... | 1 | (0, 0) |
| 3 | 3.914141 | 351.075758 | [array([[ 0, 444], ... | 198 | [(slice(0, 7, None), slice(444, 464, None)) ... | 1 | (0, 0) |
| 4 | 4.190045 | 959.393665 | [array([[ 0, 444], ... | 221 | [(slice(0, 7, None), slice(444, 464, None)) ... | 1 | (0, 0) |
| ... | ... | ... | ... | ... | ... | ... | ... |
| 1097 | 995.254902 | 597.872549 | [array([[ 0, 444], ... | 102 | [(slice(0, 7, None), slice(444, 464, None)) ... | 1 | (0, 0) |
| 1098 | 996.529412 | 108.694118 | [array([[ 0, 444], ... | 85 | [(slice(0, 7, None), slice(444, 464, None)) ... | 1 | (0, 0) |
| 1099 | 995.561404 | 250.298246 | [array([[ 0, 444], ... | 114 | [(slice(0, 7, None), slice(444, 464, None)) ... | 1 | (0, 0) |
| 1100 | 995.789474 | 610.833333 | [array([[ 0, 444], ... | 114 | [(slice(0, 7, None), slice(444, 464, None)) ... | 1 | (0, 0) |
| 1101 | 995.188811 | 730.727273 | [array([[ 0, 444], ... | 143 | [(slice(0, 7, None), slice(444, 464, None)) ... | 1 | (0, 0) |

1102 rows × 7 columns

`[11]:`

`adata.var`

`[11]:`

The `adata.var` DataFrame is empty (92 rows × 0 columns); its index is simply the channel number (0, 1, 2, ..., 91).

This anndata object gives us access to the entire Python (or Seurat) single-cell analysis ecosystem of tools. We follow a single cell analysis workflow described in https://scanpy-tutorials.readthedocs.io/en/latest/pbmc3k.html and https://www.embopress.org/doi/full/10.15252/msb.20188746.

`[12]:`

```
import scanpy as sc

sc.pl.violin(adata, keys = ['0','24','60'])
sc.pp.log1p(adata)
sc.pp.scale(adata, max_value=10)
sc.tl.pca(adata, svd_solver='arpack')
sc.pp.neighbors(adata, n_neighbors=10, n_pcs=10)
sc.tl.umap(adata)
sc.pl.umap(adata, color=['0','24','60'])
```

```
sc.tl.leiden(adata, resolution = 0.15)
sc.pl.umap(adata, color='leiden')
sc.tl.rank_genes_groups(adata, 'leiden', method='t-test')
sc.pl.rank_genes_groups_dotplot(adata, groupby='leiden', vmax=5, n_genes=5)
```

```
WARNING: dendrogram data not found (using key=dendrogram_leiden). Running `sc.tl.dendrogram` with default parameters. For fine tuning it is recommended to run `sc.tl.dendrogram` independently.
```

We can also use spatial analysis tools such as https://github.com/theislab/squidpy.

`[14]:`

```
import scanpy as sc
import squidpy as sq

sc.pl.spatial(adata, color='leiden', spot_size=15)
sc.pl.spatial(
    adata,
    color="leiden",
    groups=[
        "2",
        "4"
    ],
    spot_size=15
)
```

```
sq.gr.co_occurrence(adata, cluster_key="leiden")
sq.pl.co_occurrence(
    adata,
    cluster_key="leiden"
)
```

<NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>. and <NAME>., 2020. Coordinated cellular neighborhoods orchestrate antitumoral immunity at the colorectal cancer invasive front. Cell, 182(5), pp.1341-1359.

In this tutorial, we will analyze the CODEX images provided by Schürch et al.: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7479520/. Here, we will use PathML to process a large number of CODEX slides (140 slides) simultaneously through the SlideDataset class. We also show how the generated count matrices can be used to investigate the complex spatial architecture of the iTME in colorectal cancer.
`[1]:`

```
from os import listdir, path, getcwd
import glob
import re
import pandas as pd

from pathml.core import SlideDataset
from pathml.core.slide_data import VectraSlide
from pathml.core.slide_data import CODEXSlide
from pathml.preprocessing.pipeline import Pipeline
from pathml.preprocessing.transforms import SegmentMIF, QuantifyMIF, CollapseRunsCODEX

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.pyplot import rc_context
from dask.distributed import Client, LocalCluster
from deepcell.utils.plot_utils import make_outline_overlay
from deepcell.utils.plot_utils import create_rgb_image
import scanpy as sc
import squidpy as sq
import anndata as ad
import bbknn
from joblib import parallel_backend
```

```
In /Users/mohamedomar/opt/anaconda3/envs/pathml2/lib/python3.8/site-packages/matplotlib/mpl-data/stylelib/_classic_test.mplstyle: The text.latex.preview rcparam was deprecated in Matplotlib 3.3 and will be removed two minor releases later.
In /Users/mohamedomar/opt/anaconda3/envs/pathml2/lib/python3.8/site-packages/matplotlib/mpl-data/stylelib/_classic_test.mplstyle: The mathtext.fallback_to_cm rcparam was deprecated in Matplotlib 3.3 and will be removed two minor releases later.
In /Users/mohamedomar/opt/anaconda3/envs/pathml2/lib/python3.8/site-packages/matplotlib/mpl-data/stylelib/_classic_test.mplstyle: Support for setting the 'mathtext.fallback_to_cm' rcParam is deprecated since 3.3 and will be removed two minor releases later; use 'mathtext.fallback : 'cm' instead.
In /Users/mohamedomar/opt/anaconda3/envs/pathml2/lib/python3.8/site-packages/matplotlib/mpl-data/stylelib/_classic_test.mplstyle: The validate_bool_maybe_none function was deprecated in Matplotlib 3.3 and will be removed two minor releases later.
In /Users/mohamedomar/opt/anaconda3/envs/pathml2/lib/python3.8/site-packages/matplotlib/mpl-data/stylelib/_classic_test.mplstyle: The savefig.jpeg_quality rcparam was deprecated in Matplotlib 3.3 and will be removed two minor releases later.
In /Users/mohamedomar/opt/anaconda3/envs/pathml2/lib/python3.8/site-packages/matplotlib/mpl-data/stylelib/_classic_test.mplstyle: The keymap.all_axes rcparam was deprecated in Matplotlib 3.3 and will be removed two minor releases later.
In /Users/mohamedomar/opt/anaconda3/envs/pathml2/lib/python3.8/site-packages/matplotlib/mpl-data/stylelib/_classic_test.mplstyle: The animation.avconv_path rcparam was deprecated in Matplotlib 3.3 and will be removed two minor releases later.
In /Users/mohamedomar/opt/anaconda3/envs/pathml2/lib/python3.8/site-packages/matplotlib/mpl-data/stylelib/_classic_test.mplstyle: The animation.avconv_args rcparam was deprecated in Matplotlib 3.3 and will be removed two minor releases later.
/Users/mohamedomar/opt/anaconda3/envs/pathml2/lib/python3.8/site-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:374: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead.
  warnings.warn(
```

```
# set working directory
%cd "~/Documents/Research/Projects/CRC_TMA"
```

```
/Users/mohamedomar/Documents/Research/Projects/CRC_TMA
```

```
## Read channel names
channelnames = pd.read_csv("data/channelNames.txt", header = None, dtype = str, low_memory=False)
channelnames
```
| | 0 |
| --- | --- |
| 0 | HOECHST1 |
| 1 | blank |
| 2 | blank |
| 3 | blank |
| 4 | HOECHST2 |
| ... | ... |
| 87 | empty-Cy5-22 |
| 88 | HOECHST23 |
| 89 | empty-A488-23 |
| 90 | empty-Cy3-23 |
| 91 | DRAQ5 |

92 rows × 1 columns

## Reading the slides

```
dirpath = r"/Volumes/Mohamed/CRC"

# assuming that all slides are in a single directory, all with .tif file extension
# (dirpath contains exactly two subdirectories, A and B, one per TMA)
for A, B in [listdir(dirpath)]:
    vectra_list_A = [CODEXSlide(p, stain='IF') for p in glob.glob(path.join(dirpath, A, "*.tif"))]
    vectra_list_B = [CODEXSlide(p, stain='IF') for p in glob.glob(path.join(dirpath, B, "*.tif"))]

# Fix the slide names and add origin labels (A, B)
for slide_A, slide_B in zip(vectra_list_A, vectra_list_B):
    slide_A.name = re.sub("X.*", "A", slide_A.name)
    slide_B.name = re.sub("X.*", "B", slide_B.name)

# Store all slides in a SlideDataset object
dataset = SlideDataset(vectra_list_A + vectra_list_B)
```

## Define and run the preprocessing pipeline

```
# Here, we use DAPI (channel 0) and vimentin (channel 29) for segmentation
# z=0 since we are processing images with a single slice (best focus)
pipe = Pipeline([
    CollapseRunsCODEX(z=0),
    SegmentMIF(model='mesmer', nuclear_channel=0, cytoplasm_channel=29, image_resolution=0.377442),
    QuantifyMIF(segmentation_mask='cell_segmentation')
])

# Initialize a dask cluster using 10 workers. PathML pipelines can be run in distributed mode on
# cloud compute or a cluster using dask.distributed.
cluster = LocalCluster(n_workers=10, threads_per_worker=1, processes=True)
client = Client(cluster)

# Run the pipeline
dataset.run(pipe, distributed = True, client = client, tile_size=(1920,1440), tile_pad=False)

# Write the processed datasets to disk
dataset.write('data/dataset_processed.h5')
```

## Extract and concatenate the resulting count matrices

Combine the count matrices into a single adata object:

`[ ]:`

```
## Combine the count matrices into a single adata object:
adata = ad.concat([x.counts for x in dataset.slides], join="outer", label="Region", index_unique='_')

# Fix and replace the regions names
origin = adata.obs['Region']
origin = origin.astype(str).str.replace("[^a-zA-Z0-9 \n\.]", "")
origin = origin.astype(str).str.replace("[\n]", "")
origin = origin.str.replace("SlideDataname", "")
adata.obs['Region'] = origin

# save the adata object
adata.write(filename='./data/adata_combined.h5ad')
```

```
# Rename the variable names (channels) in the adata object
adata.var_names = channelnames[0]
adata.var_names_make_unique()
```

Filter the cells using the DAPI and DRAQ5 intensity:

`[7]:`

```
# Plot the DAPI intensity distribution:
sc.pl.violin(adata, keys=['HOECHST1', 'DRAQ5'], multi_panel = True)
```

```
/Users/mohamedomar/opt/anaconda3/envs/pathml2/lib/python3.8/site-packages/anndata/_core/anndata.py:1220: FutureWarning: The `inplace` parameter in pandas.Categorical.reorder_categories is deprecated and will be removed in a future version. Removing unused categories will always return a new Categorical object.
  c.reorder_categories(natsorted(c.categories), inplace=True)
... storing 'coords' as categorical
/Users/mohamedomar/opt/anaconda3/envs/pathml2/lib/python3.8/site-packages/anndata/_core/anndata.py:1220: FutureWarning: The `inplace` parameter in pandas.Categorical.reorder_categories is deprecated and will be removed in a future version. Removing unused categories will always return a new Categorical object.
  c.reorder_categories(natsorted(c.categories), inplace=True)
... storing 'slice' as categorical
```
```
# Remove cells with low DAPI intensity (most likely artifacts)
adata = adata[adata[: , 'HOECHST1'].X > 60, :]
adata = adata[adata[: , 'DRAQ5'].X > 100, :]
```

```
# Remove the empty and nuclear channels
keep = ['CD44 - stroma', 'FOXP3 - regulatory T cells', 'CD8 - cytotoxic T cells',
        'p53 - tumor suppressor', 'GATA3 - Th2 helper T cells', 'CD45 - hematopoietic cells',
        'T-bet - Th1 cells', 'beta-catenin - Wnt signaling', 'HLA-DR - MHC-II',
        'PD-L1 - checkpoint', 'Ki67 - proliferation', 'CD45RA - naive T cells',
        'CD4 - T helper cells', 'MUC-1 - epithelia', 'CD30 - costimulator', 'CD2 - T cells',
        'Vimentin - cytoplasm', 'CD20 - B cells', 'LAG-3 - checkpoint', 'Na-K-ATPase - membranes',
        'CD5 - T cells', 'IDO-1 - metabolism', 'Cytokeratin - epithelia', 'CD11b - macrophages',
        'CD56 - NK cells', 'aSMA - smooth muscle', 'BCL-2 - apoptosis', 'CD25 - IL-2 Ra',
        'PD-1 - checkpoint', 'Granzyme B - cytotoxicity', 'EGFR - singling', 'VISTA - costimulator',
        'CD15 - granulocytes', 'ICOS - costimulator', 'Synaptophysin - neuroendocrine',
        'GFAP - nerves', 'CD7 - T cells', 'CD3 - T cells', 'Chromogranin A - neuroendocrine',
        'CD163 - macrophages', 'CD45RO - memory cells', 'CD68 - macrophages', 'CD31 - vasculature',
        'Podoplanin - lymphatics', 'CD34 - vasculature', 'CD38 - multifunctional',
        'CD138 - plasma cells']
adata = adata[:, keep]
```

`[10]:`

`adata`

`[10]:`

```
View of AnnData object with n_obs × n_vars = 272829 × 47
    obs: 'Region', 'coords', 'euler_number', 'filled_area', 'slice', 'tile', 'x', 'y', 'TMA'
    obsm: 'spatial'
    layers: 'max_intensity', 'min_intensity'
```

`[11]:`

```
# Rename the markers
adata.var_names = ['CD44', 'FOXP3', 'CD8', 'p53', 'GATA3', 'CD45', 'T-bet', 'beta-cat', 'HLA-DR',
                   'PD-L1', 'Ki67', 'CD45RA', 'CD4', 'MUC-1', 'CD30', 'CD2', 'Vimentin', 'CD20',
                   'LAG-3', 'Na-K-ATPase', 'CD5', 'IDO-1', 'Cytokeratin', 'CD11b', 'CD56', 'aSMA',
                   'BCL-2', 'CD25-IL-2Ra', 'PD-1', 'Granzyme B', 'EGFR', 'VISTA', 'CD15', 'ICOS',
                   'Synaptophysin', 'GFAP', 'CD7', 'CD3', 'ChromograninA', 'CD163', 'CD45RO',
                   'CD68', 'CD31', 'Podoplanin', 'CD34', 'CD38', 'CD138']
```

```
# store the raw data for further use (differential expression or other analysis that uses raw counts)
adata.raw = adata
```

Annotation dict for clinical groups:

- CLR: Crohn's-like reaction
- DII: Diffuse inflammatory infiltrate

`[14]:`

```
# map each slide to its source patient and clinical group (CLR vs DII)
# (regions_to_patients and regions_to_groups are mapping dicts defined in a cell not shown here)
adata.obs['patients'] = (
    adata.obs['Region']
    .map(regions_to_patients)
    .astype('category')
)
adata.obs['groups'] = (
    adata.obs['Region']
    .map(regions_to_groups)
    .astype('category')
)
```

`[18]:`

```
# log transform and scale the data
sc.pp.log1p(adata)
sc.pp.scale(adata, max_value=10)
```

`[19]:`

```
# PCA and batch correction using Harmony
sc.tl.pca(adata)
sc.external.pp.harmony_integrate(adata, key='Region')
```

```
2021-08-14 23:23:40,665 - harmonypy - INFO - Iteration 1 of 10
2021-08-14 23:26:19,392 - harmonypy - INFO - Iteration 2 of 10
2021-08-14 23:29:16,153 - harmonypy - INFO - Iteration 3 of 10
2021-08-14 23:32:22,647 - harmonypy - INFO - Converged after 3 iterations
```

`[20]:`

```
# save for future use
adata.write(filename='./data/adata_harmony.h5ad')
```

```
# Compute neighbors and UMAP embedding
sc.pp.neighbors(adata, n_neighbors=15, n_pcs=30, use_rep='X_pca_harmony')
sc.tl.umap(adata)
```

`[22]:`

```
# louvain clustering
with parallel_backend('threading', n_jobs=15):
    sc.tl.louvain(adata, resolution = 3)
```

```
# Plot UMAP
sc.pl.umap(adata, color=['patients', 'groups', 'louvain'], ncols = 1)
```
ncols = 1) ``` ## Annotate the clusters based on the markers intensity Here, we define a function for cell annotation which takes as input the processed adata object together with a set of thresholds for each marker/channel. The function annotates the cells based on the specified thresholds and return the annotated adata object with the cell coordinates. `[27]:` ``` def phenotype(adata, annot_dict): """ Given a dict of dicts including phenotypes and marker gene thresholds, phenotype cells. Args: adata : the anndata object. annot_dict (dict): annotation dictionary each key is a cell type of interest and its value is a dictionary indicating protein expression ranges for that cell type. Each value should be a tuple (min, max) containing the minimum and maximum thresholds. """ # Get the count matrix data = adata.copy() countMat = data.to_df() # Annotate the cell types for label in annot_dict.keys() : for key, value in annot_dict[label].items(): cond = np.logical_and.reduce([((countMat[k] >= countMat[k].quantile(list(v)[0])) & (countMat[k] <= countMat[k].quantile(list(v)[1]))) for k, v in annot_dict[label].items()]) data.obs.loc[cond, 'cell_types'] = label # replace nan with unknown data.obs.cell_types.fillna('unknown', inplace = True) return data ``` ``` annot_dict = { 'CD3+ T lymphocytes': {'CD3': (0.85, 1.0), 'CD4':(0.0, 0.50), 'CD8':(0.00, 0.50)}, 'CD4+ T lymphocytes': {'CD3': (0.50, 1.0), 'CD4':(0.50, 1.0), 'CD8':(0.0, 0.75), 'CD45RO':(0.0, 0.75)}, 'CD8+ T lymphocytes': {'CD3': (0.50, 1), 'CD8':(0.50, 1), 'CD4':(0.0, 0.75)}, 'CD4+CD45RO+ T cells': {'CD3': (0.50, 1), 'CD8':(0.0, 0.75), 'CD4':(0.50, 1), 'CD45RO':(0.50, 1)}, 'Tregs': {'CD3': (0.50, 1.0), 'CD25-IL-2Ra': (0.75, 1), 'FOXP3': (0.75, 1), 'CD8':(0.0, 0.50)}, 'B cells': {'CD20': (0.50, 1), 'CD3': (0.0, 0.75)}, 'plasma cells': {'CD38': (0.50, 1), 'CD20':(0.50, 1), 'CD3': (0.0, 0.75)}, 'granulocytes': {'CD15': (0.50, 1),'CD11b':(0.50, 1), 'CD3': (0.0, 0.85)}, 'CD68+ macrophages': {'CD68': (0.95, 1), 'CD3': (0.0, 0.50), 'CD163': (0.0, 0.95)}, 'CD163+ macrophages': {'CD163': (0.95, 1), 'CD3': (0.0, 0.50), 'CD68': (0.0, 0.50)}, 'CD68+CD163 macrophages': {'CD68': (0.50, 1),'CD163':(0.50, 1), 'CD3': (0.0, 0.95)}, 'CD11b+CD68+ macrophages': {'CD68': (0.95, 1),'CD11b':(0.50, 1), 'CD3': (0.0, 0.50)}, 'NK cells': {'CD56': (0.75, 1), 'CD3': (0.0, 0.50), 'Cytokeratin':(0.0, 0.50)}, 'vasculature': {'CD34': (0.50, 1),'CD31':(0.50, 1), 'Cytokeratin': (0.0, 0.50)}, 'tumor cells': {'Cytokeratin': (0.50, 1), 'p53':(0.50, 1), 'aSMA': (0.0, 0.75)}, 'immune cells': {'CD20': (0.50, 1),'CD38':(0.50, 1), 'CD3': (0.50, 1), 'GFAP': (0.50, 1), 'CD15': (0.50, 1), 'Cytokeratin': (0.0, 0.50), 'aSMA': (0.0, 0.75)}, 'tumor/immune': {'CD20': (0.50, 1), 'CD3': (0.75, 1),'CD38':(0.50, 1), 'GFAP': (0.80, 1), 'Cytokeratin': (0.85, 1), 'p53':(0.50, 1), 'aSMA': (0.0, 0.75)}, 'vascular/immune': {'CD20': (0.50, 1), 'CD3': (0.85, 1),'CD38':(0.50, 1), 'GFAP': (0.80, 1), 'CD34': (0.75, 1),'CD31':(0.75, 1), 'aSMA': (0.0, 0.75)}, 'stromal cells': {'Vimentin': (0.50, 1), 'Cytokeratin':(0.0, 0.50)}, 'Adipocytes': {'p53': (0.75, 1), 'Vimentin':(0.75, 1), 'Cytokeratin': (0.0, 0.50), 'aSMA': (0.0, 0.50), 'CD44': (0.0, 0.50)}, 'smooth muscles': {'aSMA': (0.70, 1),'Vimentin':(0.50, 1), 'CD3': (0.0, 0.50)}, 'nerves': {'Synaptophysin': (0.85, 1), 'Vimentin':(0.50, 1), 'GFAP': (0.85, 1), 'CD3': (0.0, 0.50)}, 'lymphatics': {'Podoplanin': (0.99, 1), 'CD3': (0.0, 0.75)}, 'artifact': {'CD20': (0.0, 0.50), 'CD3': (0.0, 0.50),'CD38':(0.0, 0.50), 'GFAP': (0.0, 0.50), 'Cytokeratin': (0.0, 0.50), 
                 'p53': (0.0, 0.50), 'aSMA': (0.0, 0.50), 'CD15': (0.0, 0.50),
                 'CD68': (0.0, 0.50), 'CD25-IL-2Ra': (0.0, 0.50), 'CD34': (0.0, 0.50),
                 'CD31': (0.0, 0.50), 'CD56': (0.0, 0.50), 'Vimentin': (0.0, 0.50)}
}
```

```
# Annotate the adata
adata_annot = phenotype(adata, annot_dict = annot_dict)
adata_annot.obs.cell_types.value_counts()
```

```
unknown                    42834
tumor cells                37035
stromal cells              31646
CD68+CD163 macrophages     28479
granulocytes               23164
vasculature                20016
smooth muscles             19990
B cells                    13433
CD8+ T lymphocytes         10962
plasma cells                8911
CD4+CD45RO+ T cells         6603
artifact                    4650
CD4+ T lymphocytes          4395
vascular/immune             4004
CD3+ T lymphocytes          3975
immune cells                3075
Adipocytes                  2825
tumor/immune                1925
CD68+ macrophages           1431
Tregs                       1067
CD11b+CD68+ macrophages      869
lymphatics                   865
nerves                       366
CD163+ macrophages           220
NK cells                      89
Name: cell_types, dtype: int64
```

`[33]:`

```
sc.pl.violin(adata_annot, groupby='cell_types', keys=['CD4', 'CD8', 'CD3'], rotation = 90)
```

`[35]:`

```
sc.pl.violin(adata_annot, groupby='cell_types', keys=['CD68', 'CD163', 'CD11b'], rotation = 90)
```

`[36]:`

```
sc.pl.spatial(adata_annot[adata_annot.obs.Region == 'reg020_A'], color='cell_types', spot_size=25, size=1)
```

`[37]:`

```
sc.pl.spatial(adata_annot[adata_annot.obs.Region == 'reg020_B'], color='cell_types', spot_size=25, size=1)
```

`[39]:`

```
# Put in a dataframe for further analysis
countData = adata_annot.to_df()
obs = adata_annot.obs
data = pd.concat([countData, obs], axis = 1)
data['CellID'] = data.index

# Remove the cells with unknown annotation and the artifacts
data = data.loc[~data['cell_types'].isin(['unknown'])]
data = data.loc[~data['cell_types'].isin(['artifact'])]
data['cell_types'] = data['cell_types'].cat.remove_unused_categories()
data['cell_types'] = data['cell_types'].astype('str')
```

`[44]:`

`data.shape`

`[44]:`

`(225345, 61)`

`[45]:`

```
# save
data.to_csv('./data/CRC_pathml.csv')
```

## Identification of cellular neighborhoods

After identifying cell types, the next step is to identify cellular neighborhoods, using the same approach described in Schürch et al. and utilizing the code available from https://github.com/nolanlab/NeighborhoodCoordination. In summary, for each cell we identify the 10 nearest spatial neighbors (windows), then cluster these windows into distinct neighborhoods based on their cell type composition.

`[46]:`

```
from sklearn.neighbors import NearestNeighbors
import time
import sys
from sklearn.cluster import MiniBatchKMeans
import seaborn as sns
```

`[47]:`

```
# Function for identifying the windows
def get_windows(job, n_neighbors):
    '''
    For each region and each individual cell in the dataset, return the indices
    of the nearest neighbors.

    job: metadata containing the start time, index of the region, region name,
        and indices of the region in the original dataframe
    n_neighbors: the number of neighbors to find for each cell
    '''
    start_time, idx, tissue_name, indices = job
    job_start = time.time()
    print("Starting:", str(idx + 1) + '/' + str(len(exps)), ': ' + exps[idx])

    # tissue_group: a grouped data frame with X and Y coordinates grouped by unique tissue regions
    tissue = tissue_group.get_group(tissue_name)
    to_fit = tissue.loc[indices][['x', 'y']].values

    # Unsupervised learner for implementing neighbor searches.
    fit = NearestNeighbors(n_neighbors=n_neighbors).fit(tissue[['x', 'y']].values)
    # Find the nearest neighbors
    m = fit.kneighbors(to_fit)
    m = m[0], m[1]

    ## sort_neighbors
    args = m[0].argsort(axis=1)
    add = np.arange(m[1].shape[0]) * m[1].shape[1]
    sorted_indices = m[1].flatten()[args + add[:, None]]
    neighbors = tissue.index.values[sorted_indices]

    end_time = time.time()
    print("Finishing:", str(idx + 1) + "/" + str(len(exps)), ": " + exps[idx],
          end_time - job_start, end_time - start_time)
    return neighbors.astype(np.int32)
```

`[48]:`

```
data = pd.read_csv('./data/CRC_pathml.csv')
```

`[49]:`

```
# make dummy variables
data = pd.concat([data, pd.get_dummies(data['cell_types'])], axis = 1)

# Extract the cell types with dummy variables
sum_cols = data['cell_types'].unique()
values = data[sum_cols].values
```

Find windows for each cell in each tissue region:

`[51]:`

```
# Keep the X and Y coordinates + the tissue regions, then group by tissue regions (140 unique regions)
tissue_group = data[['x','y','Region']].groupby('Region')

# Create a list of unique tissue regions
exps = list(data['Region'].unique())

# time.time(): current time in seconds
# indices: a list of indices (rownames) of each dataframe in tissue_group
# exps.index(t): the position of region name t in exps, e.g. exps.index("reg001_A") is 0
#     and exps.index("reg001_B") is 1, and so on
# t: the name of the tissue region, e.g. reg001_A
tissue_chunks = [(time.time(), exps.index(t), t, a)
                 for t, indices in tissue_group.groups.items()
                 for a in np.array_split(indices, 1)]

# Get the window (the 10 closest cells to each cell in each tissue region)
tissues = [get_windows(job, 10) for job in tissue_chunks]
```

```
Starting: 61/140 : reg001_A
Finishing: 61/140 : reg001_A 0.009814977645874023 0.022227048873901367
Starting: 77/140 : reg001_B
Finishing: 77/140 : reg001_B 0.004416942596435547 0.026772260665893555
Starting: 19/140 : reg002_A
Finishing: 19/140 : reg002_A 0.006669044494628906 0.0336461067199707
...
For each cell and its nearest neighbors, reshape and count the number of each cell type in those neighbors.

`[52]:`

```
ks = [10]
out_dict = {}
for k in ks:
    for neighbors, job in zip(tissues, tissue_chunks):
        chunk = np.arange(len(neighbors))  # indices
        tissue_name = job[2]
        indices = job[3]
        window = values[neighbors[chunk, :k].flatten()].reshape(len(chunk), k, len(sum_cols)).sum(axis=1)
        out_dict[(tissue_name, k)] = (window.astype(np.float16), indices)
```

Concatenate the summed windows and combine into one dataframe for each window size tested.

`[53]:`

```
keep_cols = ['x', 'y', 'Region', 'cell_types']
windows = {}
for k in ks:
    # pass axis as a keyword argument (positional axis is deprecated in pandas)
    window = pd.concat(
        [pd.DataFrame(out_dict[(exp, k)][0], index=out_dict[(exp, k)][1].astype(int), columns=sum_cols) for exp in exps],
        axis=0)
    window = window.loc[Data.index.values]
    window = pd.concat([Data[keep_cols], window], axis=1)
    windows[k] = window
```

`[54]:`

```
neighborhood_name = "neighborhood" + str(k)
k_centroids = {}
windows2 = windows[10]
```

Clustering the windows:

`[64]:`

```
km = MiniBatchKMeans(n_clusters=10, random_state=0)
labelskm = km.fit_predict(windows2[sum_cols].values)
k_centroids[10] = km.cluster_centers_
Data['neighborhood10'] = labelskm
Data[neighborhood_name] = Data[neighborhood_name].astype('category')
```

`[65]:`

```
cell_order = [
    'tumor cells', 'CD68+CD163 macrophages', 'CD11b+CD68+ macrophages', 'CD68+ macrophages',
    'CD163+ macrophages', 'granulocytes', 'NK cells', 'CD3+ T lymphocytes', 'CD4+ T lymphocytes',
    'CD4+CD45RO+ T cells', 'CD8+ T lymphocytes', 'Tregs', 'B cells', 'plasma cells',
    'tumor/immune', 'vascular/immune', 'immune cells', 'smooth muscles', 'stromal cells',
    'vasculature', 'lymphatics', 'nerves'
]
```

This plot shows the cell type abundances in the different niches:

`[66]:`

```
niche_clusters = k_centroids[10]
tissue_avgs = values.mean(axis=0)
fc = np.log2(((niche_clusters + tissue_avgs) / (niche_clusters + tissue_avgs).sum(axis=1, keepdims=True)) / tissue_avgs)
fc = pd.DataFrame(fc, columns=sum_cols)
s = sns.clustermap(fc.loc[[0, 1, 2, 3, 4, 5, 6, 7, 8, 9], cell_order],
                   vmin=-3, vmax=3, cmap='bwr', row_cluster=False)
```
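To read the same information numerically rather than from the heatmap, the fold-change matrix can be inspected directly. A small snippet using the `fc` dataframe defined above (the niche index is illustrative):

```
# top five most enriched cell types in niche 2
print(fc.loc[2].sort_values(ascending=False).head(5))
```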
Visualize the identified neighborhoods on the slides in each clinical group:

`[67]:`

```
# CLR
Data['neighborhood10'] = Data['neighborhood10'].astype('category')
sns.lmplot(data=Data[Data['groups'] == 'CLR'], x='x', y='y', hue='neighborhood10',
           palette='bright', height=8, col='Region', col_wrap=10, fit_reg=False)
```

`[67]:`

```
<seaborn.axisgrid.FacetGrid at 0x1a79e42e0>
```

```
# DII
Data['neighborhood10'] = Data['neighborhood10'].astype('category')
sns.lmplot(data=Data[Data['groups'] == 'DII'], x='x', y='y', hue='neighborhood10',
           palette='bright', height=8, col='Region', col_wrap=10, fit_reg=False)
```

```
<seaborn.axisgrid.FacetGrid at 0x1a78afc40>
```

Plot, for each group and each patient, the percent of total cells allocated to each neighborhood:

`[70]:`

```
fc = Data.groupby(['patients', 'groups']).apply(lambda x: x['neighborhood10'].value_counts(sort=False, normalize=True))
fc.columns = range(10)
melt = pd.melt(fc.reset_index(), id_vars=['patients', 'groups'])
melt = melt.rename(columns={'variable': 'neighborhood', 'value': 'frequency of neighborhood'})
melt['neighborhood'] = melt['neighborhood'].map(
    {
        0: 'smooth muscles',
        1: 'plasma cells-enriched',
        2: 'tumor',
        3: 'B cells-enriched',
        4: 'vasculature',
        5: 'stroma',
        6: 'TAMs-enriched',
        7: 'TILs-enriched',
        8: 'granulocytes-enriched',
        9: 'vasculature/immune'
    }
)

f, ax = plt.subplots(figsize=(10, 7))
sns.stripplot(data=melt, hue='groups', dodge=True, alpha=.2,
              x='neighborhood', y='frequency of neighborhood')
sns.pointplot(data=melt, scatter_kws={'marker': 'd'}, hue='groups', dodge=.5, join=False,
              x='neighborhood', y='frequency of neighborhood')
handles, labels = ax.get_legend_handles_labels()
plt.xticks(rotation=90, fontsize="10", ha="center")
ax.legend(handles[:2], labels[:2], title="Groups", handletextpad=0, columnspacing=1,
          loc="upper left", ncol=3, frameon=True)
plt.tight_layout()
```

In this notebook, we will train a HoVer-Net model to perform nucleus detection and classification, using data from the PanNuke dataset. This notebook should be a good reference for how to do a full machine learning workflow using `PathML` and `PyTorch`.

`[1]:`

```
import numpy as np
import pandas as pd  # used below when summarizing Dice scores
from tqdm import tqdm
import copy
import matplotlib.pyplot as plt
from matplotlib import cm

import torch
from torch.optim.lr_scheduler import StepLR
import albumentations as A
```

```
from pathml.datasets.pannuke import PanNukeDataModule
from pathml.ml.hovernet import HoVerNet, loss_hovernet, post_process_batch_hovernet
from pathml.ml.utils import wrap_transform_multichannel, dice_score
from pathml.utils import plot_segmentation
```

## Data augmentation

Data augmentation is the process of applying random transformations to the data before feeding it to the network. This introduces some noise and can help improve model performance by reducing overfitting. For example, each image can be randomly rotated by 90 degrees; the idea is that this forces the network to learn representations which are robust to rotation. Importantly, whatever transform is applied to the image also needs to be applied to the corresponding mask!

We'll use the Albumentations library to handle data augmentation. You can also write custom data augmentations, but albumentations and other similar libraries (e.g. torchvision.transforms) are convenient because they automatically handle masks in the augmentation pipeline.
However, because our masks have multiple channels, they are not natively supported by Albumentations. So we'll wrap each transform in the `wrap_transform_multichannel()` utility function, which makes it compatible.

`[3]:`

```
n_classes_pannuke = 6  # 5 nucleus types + background

# data augmentation transform
hover_transform = A.Compose(
    [A.VerticalFlip(p=0.5),
     A.HorizontalFlip(p=0.5),
     A.RandomRotate90(p=0.5),
     A.GaussianBlur(p=0.5),
     A.MedianBlur(p=0.5, blur_limit=5)],
    additional_targets={f"mask{i}": "mask" for i in range(n_classes_pannuke)}
)

transform = wrap_transform_multichannel(hover_transform)
```

## Load PanNuke dataset

```
pannuke = PanNukeDataModule(
    data_dir="../data/pannuke/",
    download=False,
    nucleus_type_labels=True,
    batch_size=8,
    hovernet_preprocess=True,
    split=1,
    transforms=transform
)

train_dataloader = pannuke.train_dataloader
valid_dataloader = pannuke.valid_dataloader
test_dataloader = pannuke.test_dataloader
```

Let's visualize what the inputs to the HoVer-Net model look like:

`[7]:`

```
images, masks, hvs, types = next(iter(train_dataloader))

n = 4
fig, ax = plt.subplots(nrows=n, ncols=4, figsize=(8, 8))
cm_mask = copy.copy(cm.get_cmap("tab10"))
cm_mask.set_bad(color='white')

for i in range(n):
    im = images[i, ...].numpy()
    ax[i, 0].imshow(np.moveaxis(im, 0, 2))
    m = masks.argmax(dim=1)[i, ...]
    m = np.ma.masked_where(m == 5, m)
    ax[i, 1].imshow(m, cmap=cm_mask)
    ax[i, 2].imshow(hvs[i, 0, ...], cmap='coolwarm')
    ax[i, 3].imshow(hvs[i, 1, ...], cmap='coolwarm')

for a in ax.ravel():
    a.axis("off")
for c, v in enumerate(["H&E Image", "Nucleus Types", "Horizontal Map", "Vertical Map"]):
    ax[0, c].set_title(v)
```

## Model Training

### Training with multi-GPU

To train on multiple GPUs, send the model to the GPU with `.to(device)` and wrap it in `torch.nn.DataParallel()`. PyTorch will then take care of all the tricky parts of distributing the computation across the GPUs.

`[5]:`

```
print(f"GPUs used:\t{torch.cuda.device_count()}")
device = torch.device("cuda:0")
print(f"Device:\t\t{device}")
```

```
GPUs used: 4
Device: cuda:0
```

```
# load the model
hovernet = HoVerNet(n_classes=n_classes_pannuke)

# wrap model to use multi-GPU
hovernet = torch.nn.DataParallel(hovernet)
```

```
# set up optimizer
opt = torch.optim.Adam(hovernet.parameters(), lr=1e-4)
# learning rate scheduler to reduce LR by factor of 10 each 25 epochs
scheduler = StepLR(opt, step_size=25, gamma=0.1)
```

```
# send model to GPU
hovernet.to(device);
```

### Main training loop

This contains all our logic for looping over batches, doing a forward pass through the network, computing the loss, and then stepping the model parameters to minimize the loss. We also add some code to evaluate the model on the validation set as we train, and to track the performance metrics throughout the training process.
`[ ]:`

```
n_epochs = 50

# print performance metrics every n epochs
print_every_n_epochs = None

# evaluating performance on a random subset of validation mini-batches
# this saves time instead of evaluating on the entire validation set
n_minibatch_valid = 50

epoch_train_losses = {}
epoch_valid_losses = {}
epoch_train_dice = {}
epoch_valid_dice = {}

best_epoch = 0

# main training loop
for i in tqdm(range(n_epochs)):
    minibatch_train_losses = []
    minibatch_train_dice = []

    # put model in training mode
    hovernet.train()

    for data in train_dataloader:
        # send the data to the GPU
        images = data[0].float().to(device)
        masks = data[1].to(device)
        hv = data[2].float().to(device)
        tissue_type = data[3]

        # zero out gradient
        opt.zero_grad()

        # forward pass and compute loss (following the documented loss_hovernet API)
        outputs = hovernet(images)
        loss = loss_hovernet(outputs=outputs, ground_truth=[masks, hv], n_classes=n_classes_pannuke)

        # track loss
        minibatch_train_losses.append(loss.item())

        # track dice score (assumed implementation, mirroring the evaluation section below)
        preds_detection, preds_classification = post_process_batch_hovernet(outputs, n_classes=n_classes_pannuke)
        truth_binary = (masks[:, -1, :, :] == 0).cpu().numpy()
        minibatch_train_dice.append(dice_score(preds_detection > 0, truth_binary))

        # compute gradients
        loss.backward()

        # step optimizer
        opt.step()

    # step LR scheduler once per epoch (StepLR above counts epochs, not batches)
    scheduler.step()

    # evaluate on random subset of validation data
    hovernet.eval()
    minibatch_valid_losses = []
    minibatch_valid_dice = []
    # randomly choose minibatches for evaluating
    minibatch_ix = np.random.choice(range(len(valid_dataloader)), replace=False, size=n_minibatch_valid)
    with torch.no_grad():
        for j, data in enumerate(valid_dataloader):
            if j in minibatch_ix:
                # send the data to the GPU
                images = data[0].float().to(device)
                masks = data[1].to(device)
                hv = data[2].float().to(device)
                tissue_type = data[3]

                # forward pass and compute loss, as above
                outputs = hovernet(images)
                loss = loss_hovernet(outputs=outputs, ground_truth=[masks, hv], n_classes=n_classes_pannuke)

                # track loss
                minibatch_valid_losses.append(loss.item())

                # track dice score (assumed implementation, as above)
                preds_detection, preds_classification = post_process_batch_hovernet(outputs, n_classes=n_classes_pannuke)
                truth_binary = (masks[:, -1, :, :] == 0).cpu().numpy()
                minibatch_valid_dice.append(dice_score(preds_detection > 0, truth_binary))

    # average performance metrics over minibatches
    mean_train_loss = np.mean(minibatch_train_losses)
    mean_valid_loss = np.mean(minibatch_valid_losses)
    mean_train_dice = np.mean(minibatch_train_dice)
    mean_valid_dice = np.mean(minibatch_valid_dice)

    # save the model with best performance
    if i != 0:
        if mean_valid_loss < min(epoch_valid_losses.values()):
            best_epoch = i
            torch.save(hovernet.state_dict(), f"hovernet_best_perf.pt")

    # track performance over training epochs
    epoch_train_losses.update({i: mean_train_loss})
    epoch_valid_losses.update({i: mean_valid_loss})
    epoch_train_dice.update({i: mean_train_dice})
    epoch_valid_dice.update({i: mean_valid_dice})

    if print_every_n_epochs is not None:
        if i % print_every_n_epochs == print_every_n_epochs - 1:
            print(f"Epoch {i+1}/{n_epochs}:")
            print(f"\ttraining loss: {np.round(mean_train_loss, 4)}\tvalidation loss: {np.round(mean_valid_loss, 4)}")
            print(f"\ttraining dice: {np.round(mean_train_dice, 4)}\tvalidation dice: {np.round(mean_valid_dice, 4)}")

# save fully trained model
torch.save(hovernet.state_dict(), f"hovernet_fully_trained.pt")
print(f"\nEpoch with best validation performance: {best_epoch}")
```

```
36%|███▌      | 18/50 [4:22:48<7:46:23, 874.50s/it]
```

`[23]:`

```
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(10, 4))

ax[0].plot(epoch_train_losses.keys(), epoch_train_losses.values(), label="Train")
ax[0].plot(epoch_valid_losses.keys(), epoch_valid_losses.values(), label="Validation")
ax[0].scatter(x=best_epoch, y=epoch_valid_losses[best_epoch], label="Best Model",
              color="green", marker="*")
ax[0].set_title("Training: Loss")
ax[0].set_xlabel("Epoch")
ax[0].set_ylabel("Loss")
ax[0].legend()

ax[1].plot(epoch_train_dice.keys(), epoch_train_dice.values(), label="Train")
ax[1].plot(epoch_valid_dice.keys(), epoch_valid_dice.values(), label="Validation")
ax[1].scatter(x=best_epoch, y=epoch_valid_dice[best_epoch], label="Best Model",
              color="green", marker="*")
ax[1].set_title("Training: Dice Score")
ax[1].set_xlabel("Epoch")
ax[1].set_ylabel("Dice Score")
ax[1].legend()
plt.show()
```
## Evaluate Model

Now that we have trained the model, we can evaluate performance on the held-out test set. First we load the weights for the best model:

`[30]:`

```
# load the best model
checkpoint = torch.load("hovernet_best_perf.pt")
hovernet.load_state_dict(checkpoint)
```

```
<All keys matched successfully>
```

Next, we loop through the test set and store the model predictions:

`[69]:`

```
hovernet.eval()

ims = None
mask_truth = None
mask_pred = None
tissue_types = []

with torch.no_grad():
    for i, data in tqdm(enumerate(test_dataloader)):
        # send the data to the GPU
        images = data[0].float().to(device)
        masks = data[1].to(device)
        hv = data[2].float().to(device)
        tissue_type = data[3]

        # pass thru network to get predictions
        outputs = hovernet(images)
        preds_detection, preds_classification = post_process_batch_hovernet(outputs, n_classes=n_classes_pannuke)

        if i == 0:
            ims = data[0].numpy()
            mask_truth = data[1].numpy()
            mask_pred = preds_classification
            tissue_types.extend(tissue_type)
        else:
            ims = np.concatenate([ims, data[0].numpy()], axis=0)
            mask_truth = np.concatenate([mask_truth, data[1].numpy()], axis=0)
            mask_pred = np.concatenate([mask_pred, preds_classification], axis=0)
            tissue_types.extend(tissue_type)
```

```
341it [16:56, 2.98s/it]
```

Now we can compute the Dice score for each image in the test set:

`[70]:`

```
# collapse multi-class preds into binary preds
preds_detection = np.sum(mask_pred, axis=1)

dice_scores = np.empty(shape=len(tissue_types))

for i in range(len(tissue_types)):
    truth_binary = mask_truth[i, -1, :, :] == 0
    preds_binary = preds_detection[i, ...] != 0
    dice = dice_score(preds_binary, truth_binary)
    dice_scores[i] = dice
```

`[124]:`

```
dice_by_tissue = pd.DataFrame({"Tissue Type": tissue_types, "dice": dice_scores})
dice_by_tissue.groupby("Tissue Type").mean().plot.bar()
plt.title("Dice Score by Tissue Type")
plt.ylabel("Average Dice Score")
plt.gca().get_legend().remove()
plt.show()
```

`[72]:`

```
print(f"Average Dice score in test set: {np.mean(dice_scores)}")
```

```
Average Dice score in test set: 0.7850396088887557
```

## Examples

Let's take a look at some example predictions from the network to see how it is performing.

`[100]:`

```
# change image tensor from (B, C, H, W) to (B, H, W, C)
# matplotlib likes channels in last dimension
ims = np.moveaxis(ims, 1, 3)
```

`[119]:`

```
n = 8
ix = np.random.choice(np.arange(len(tissue_types)), size=n)
fig, ax = plt.subplots(nrows=n, ncols=2, figsize=(8, 2.5 * n))

for i, index in enumerate(ix):
    ax[i, 0].imshow(ims[index, ...])
    ax[i, 1].imshow(ims[index, ...])
    plot_segmentation(ax=ax[i, 0], masks=mask_pred[index, ...])
    plot_segmentation(ax=ax[i, 1], masks=mask_truth[index, ...])
    ax[i, 0].set_ylabel(tissue_types[index])

for a in ax.ravel():
    a.get_xaxis().set_ticks([])
    a.get_yaxis().set_ticks([])

ax[0, 0].set_title("Prediction")
ax[0, 1].set_title("Truth")
plt.tight_layout()
plt.show()
```

We can see that the model is doing quite well at nucleus detection, although there are some discrepancies in nucleus classification.

## Conclusion

We trained HoVer-Net from scratch on the public PanNuke dataset to perform simultaneous nucleus segmentation and classification. We wrote model training and evaluation loops in PyTorch, including code to distribute training across 4 GPUs. The trained model performs well, with an average Dice coefficient of 0.785 on the held-out test set. We also evaluated performance across tissue types, finding that the model performs best in Stomach tissue and worst in Head & Neck tissue.
Load this pre-trained model and test it out yourself!

## Session info

```
import IPython
print(IPython.sys_info())
print(f"torch version: {torch.__version__}")
```

```
{'commit_hash': '223e783c4',
 'commit_source': 'installation',
 'default_encoding': 'utf-8',
 'ipython_path': '/opt/conda/envs/pathml/lib/python3.8/site-packages/IPython',
 'ipython_version': '7.19.0',
 'os_name': 'posix',
 'platform': 'Linux-4.19.0-12-cloud-amd64-x86_64-with-glibc2.10',
 'sys_executable': '/opt/conda/envs/pathml/bin/python',
 'sys_platform': 'linux',
 'sys_version': '3.8.6 | packaged by conda-forge | (default, Dec 26 2020, '
                '05:05:16) \n'
                '[GCC 9.3.0]'}
torch version: 1.7.1
```

`[29]:`

```
# hash for PathML commit:
!git rev-parse HEAD
```

```
3f68d77d0c7b324acce74214e713a0bf79e60d84
```

In PathML, preprocessing pipelines are created by composing modular `Transforms`. The following tutorial contains an overview of the PathML pre-processing Transforms, with examples. We will divide Transforms into three primary categories, depending on their function:

* Transforms that modify an image: Gaussian Blur, Median Blur, Box Blur, Stain Normalization, Superpixel Interpolation
* Transforms that create a mask: Nucleus Detection, Binary Threshold
* Transforms that modify a mask: Morphological Closing, Morphological Opening, Foreground Detection, Tissue Detection

`[2]:`

```
from pathml.core import HESlide, Tile, types
from pathml.utils import plot_mask, RGB_to_GREY
from pathml.preprocessing import (
    BoxBlur, GaussianBlur, MedianBlur,
    NucleusDetectionHE, StainNormalizationHE, SuperpixelInterpolation,
    ForegroundDetection, TissueDetectionHE, BinaryThreshold,
    MorphClose, MorphOpen
)

fontsize = 14
```

Note that a `Transform` operates on `Tile` objects. We must first load a whole-slide image, extract a smaller region, and create a `Tile`:

`[3]:`

```
wsi = HESlide("./../data/CMU-1-Small-Region.svs")
region = wsi.slide.extract_region(location=(900, 800), size=(500, 500))

def smalltile():
    # convenience function to create a new tile
    return Tile(region, coords=(0, 0), name="testregion", slide_type=types.HE)
```

## Transforms that modify an image

### Blurring Transforms

We'll start with the 3 blurring transforms: `GaussianBlur`, `MedianBlur`, and `BoxBlur`. Blurring strength can be controlled with the `kernel_size` parameter. A larger kernel width yields a more blurred result for all blurring transforms:

`[4]:`

```
blurs = ["Original Image", GaussianBlur, MedianBlur, BoxBlur]
blur_name = ["Original Image", "GaussianBlur", "MedianBlur", "BoxBlur"]
k_size = [5, 11, 21]

fig, axarr = plt.subplots(nrows=4, ncols=3, figsize=(7.5, 10))

for i, blur in enumerate(blurs):
    for j, kernel_size in enumerate(k_size):
        tile = smalltile()
        if blur != "Original Image":
            b = blur(kernel_size=kernel_size)
            b.apply(tile)
        ax = axarr[i, j]
        ax.imshow(tile.image)
        if i == 0:
            ax.set_title(f"Kernel_size = {kernel_size}", fontsize=fontsize)
        if j == 0:
            ax.set_ylabel(blur_name[i], fontsize=fontsize)

for a in axarr.ravel():
    a.set_xticks([])
    a.set_yticks([])
plt.tight_layout()
plt.show()
```

### Superpixel Interpolation

Superpixel interpolation is a method for grouping together nearby similar pixels to form larger "superpixels." The `SuperpixelInterpolation` Transform divides the input image into superpixels using the SLIC algorithm, then interpolates each superpixel with its average color.
The `region_size` parameter controls how big the superpixels are:

`[5]:`

```
region_sizes = ["original", 10, 20, 30]

fig, axarr = plt.subplots(nrows=1, ncols=4, figsize=(10, 10))

for i, region_size in enumerate(region_sizes):
    tile = smalltile()
    if region_size == "original":
        axarr[i].set_title("Original Image", fontsize=fontsize)
    else:
        t = SuperpixelInterpolation(region_size=region_size)
        t.apply(tile)
        axarr[i].set_title(f"Region Size = {region_size}", fontsize=fontsize)
    axarr[i].imshow(tile.image)

for ax in axarr.ravel():
    ax.set_yticks([])
    ax.set_xticks([])
plt.tight_layout()
plt.show()
```

### Stain Normalization

H&E images are a combination of two stains: hematoxylin and eosin. Stain deconvolution methods attempt to estimate the relative contribution of each stain for each pixel. Each stain can then be pulled out into a separate image, and the deconvolved images can then be recombined to normalize the appearance of the image. The `StainNormalizationHE` Transform implements two algorithms for stain deconvolution.

`[6]:`

```
fig, axarr = plt.subplots(nrows=2, ncols=3, figsize=(10, 7.5))
fontsize = 18

for i, method in enumerate(["macenko", "vahadane"]):
    for j, target in enumerate(["normalize", "hematoxylin", "eosin"]):
        tile = smalltile()
        normalizer = StainNormalizationHE(target=target, stain_estimation_method=method)
        normalizer.apply(tile)
        ax = axarr[i, j]
        ax.imshow(tile.image)
        if j == 0:
            ax.set_ylabel(f"{method} method", fontsize=fontsize)
        if i == 0:
            ax.set_title(target, fontsize=fontsize)

for a in axarr.ravel():
    a.set_xticks([])
    a.set_yticks([])
plt.tight_layout()
plt.show()
```

## Transforms that create a mask

### Binary Threshold

The `BinaryThreshold` transform creates a mask by classifying whether each pixel is above or below the given threshold. Note that you can supply a `threshold` parameter, or use Otsu's method to automatically determine a threshold:

`[7]:`

```
thresholds = ["original", 50, 180, "otsu"]

fig, axarr = plt.subplots(nrows=1, ncols=len(thresholds), figsize=(12, 6))

for i, thresh in enumerate(thresholds):
    tile = smalltile()
    if thresh == "original":
        axarr[i].set_title("Original Image", fontsize=fontsize)
        axarr[i].imshow(tile.image)
    elif thresh == "otsu":
        t = BinaryThreshold(mask_name="binary_threshold", inverse=True, use_otsu=True)
        t.apply(tile)
        axarr[i].set_title(f"Otsu Threshold", fontsize=fontsize)
        axarr[i].imshow(tile.masks["binary_threshold"])
    else:
        t = BinaryThreshold(mask_name="binary_threshold", threshold=thresh, inverse=True, use_otsu=False)
        t.apply(tile)
        axarr[i].set_title(f"Threshold = {thresh}", fontsize=fontsize)
        axarr[i].imshow(tile.masks["binary_threshold"])

for ax in axarr.ravel():
    ax.set_yticks([])
    ax.set_xticks([])
plt.tight_layout()
plt.show()
```

### Nucleus Detection

The `NucleusDetectionHE` transform employs a simple nucleus detection algorithm for H&E stained images. It works by first separating the hematoxylin channel, then doing interpolation using superpixels, and finally using Otsu's method for binary thresholding.
This is an example of a compound Transform created by combining several other Transforms:

`[8]:`

```
tile = smalltile()
nucleus_detection = NucleusDetectionHE(mask_name="detect_nuclei")
nucleus_detection.apply(tile)

fig, axarr = plt.subplots(nrows=1, ncols=2, figsize=(8, 8))
axarr[0].imshow(tile.image)
axarr[0].set_title("Original Image", fontsize=fontsize)
axarr[1].imshow(tile.masks["detect_nuclei"])
axarr[1].set_title("Nucleus Detection", fontsize=fontsize)
for ax in axarr.ravel():
    ax.set_yticks([])
    ax.set_xticks([])
plt.tight_layout()
plt.show()
```

We can also overlay the results on the original image to see which regions were identified as being nuclei:

`[9]:`

```
fig, ax = plt.subplots(figsize=(7, 7))
plot_mask(im=tile.image, mask_in=tile.masks["detect_nuclei"], ax=ax)
plt.title("Overlay", fontsize=fontsize)
plt.axis('off')
plt.show()
```

## Transforms that modify a mask

For the following transforms, we'll use a Tile containing a larger region extracted from the slide.

`[10]:`

```
bigregion = wsi.slide.extract_region(location=(800, 800), size=(1000, 1000))

def bigtile():
    # convenience function to create a new tile with a binary mask
    bigtile = Tile(bigregion, coords=(0, 0), name="testregion", slide_type=types.HE)
    BinaryThreshold(mask_name="binary_threshold", inverse=True, threshold=100, use_otsu=False).apply(bigtile)
    return bigtile

plt.imshow(bigregion)
plt.axis("off")
plt.show()
```

### Morphological Opening

Morphological opening reduces noise in a binary mask by first applying binary erosion n times, and then applying binary dilation n times. The effect is to remove small objects from the background. The strength of the effect can be controlled by setting `n_iterations`.

### Morphological Closing

Morphological closing is similar to opening, but in the opposite order: first, binary dilation is applied n times, then binary erosion is applied n times. The effect is to reduce noise in a binary mask by closing small holes in the foreground. The strength of the effect can be controlled by setting `n_iterations`; both transforms are demonstrated in the sketch below.
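A minimal sketch of applying both transforms, in the style of the other examples in this tutorial (assuming the `bigtile()` helper defined above; the kernel size and iteration count are illustrative choices, not prescribed values):

```
# Sketch: compare morphological opening and closing on the thresholded mask.
import matplotlib.pyplot as plt  # already imported earlier in the notebook

tile_open, tile_close = bigtile(), bigtile()
MorphOpen(mask_name="binary_threshold", kernel_size=5, n_iterations=3).apply(tile_open)
MorphClose(mask_name="binary_threshold", kernel_size=5, n_iterations=3).apply(tile_close)

fig, axarr = plt.subplots(nrows=1, ncols=3, figsize=(12, 6))
axarr[0].imshow(bigtile().masks["binary_threshold"])
axarr[0].set_title("Original Mask", fontsize=fontsize)
axarr[1].imshow(tile_open.masks["binary_threshold"])
axarr[1].set_title("MorphOpen", fontsize=fontsize)
axarr[2].imshow(tile_close.masks["binary_threshold"])
axarr[2].set_title("MorphClose", fontsize=fontsize)
for ax in axarr.ravel():
    ax.set_xticks([])
    ax.set_yticks([])
plt.tight_layout()
plt.show()
```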
### Foreground Detection

This transform operates on binary masks and identifies regions that have a total area greater than a specified threshold. It supports including holes within foreground regions, or excluding holes above a specified area threshold.

`[13]:`

```
tile = bigtile()
foreground_detector = ForegroundDetection(mask_name="binary_threshold")
original_mask = tile.masks["binary_threshold"].copy()
foreground_detector.apply(tile)

fig, axarr = plt.subplots(nrows=1, ncols=2, figsize=(8, 8))
axarr[0].imshow(original_mask)
axarr[0].set_title("Original Mask", fontsize=fontsize)
axarr[1].imshow(tile.masks["binary_threshold"])
axarr[1].set_title("Detected Foreground", fontsize=fontsize)
for ax in axarr.ravel():
    ax.set_yticks([])
    ax.set_xticks([])
plt.tight_layout()
plt.show()
```

### Tissue Detection

`TissueDetectionHE` is a Transform for detecting regions of tissue from an H&E image. It is composed by applying a sequence of other Transforms: first a median blur, then binary thresholding, then morphological opening and closing, and finally foreground detection.

`[14]:`

```
tile = bigtile()
tissue_detector = TissueDetectionHE(mask_name="tissue", outer_contours_only=True)
tissue_detector.apply(tile)

fig, axarr = plt.subplots(nrows=1, ncols=3, figsize=(8, 8))
axarr[0].imshow(tile.image)
axarr[0].set_title("Original Image", fontsize=fontsize)
axarr[1].imshow(tile.masks["tissue"])
axarr[1].set_title("Detected Tissue", fontsize=fontsize)
plot_mask(im=tile.image, mask_in=tile.masks["tissue"], ax=axarr[2])
axarr[2].set_title("Overlay", fontsize=fontsize)
for ax in axarr.ravel():
    ax.set_yticks([])
    ax.set_xticks([])
plt.tight_layout()
plt.show()
```

## SlideData

The central class in `PathML` for representing a whole-slide image.

* class pathml.core.SlideData(filepath, name=None, masks=None, tiles=None, labels=None, backend=None, slide_type=None, stain=None, platform=None, tma=None, rgb=None, volumetric=None, time_series=None, counts=None, dtype=None)
  * Main class representing a slide and its annotations.
    * filepath (str) – Path to file on disk.
    * name (str, optional) – Name of slide. If `None`, and a `filepath` is provided, name defaults to filepath.
    * masks (pathml.core.Masks, optional) – object containing {key, mask} pairs
    * tiles (pathml.core.Tiles, optional) – object containing {coordinates, tile} pairs
    * labels (collections.OrderedDict, optional) – dictionary containing {key, label} pairs
    * backend (str, optional) – backend to use for interfacing with slide on disk. Must be one of {"OpenSlide", "BioFormats", "DICOM", "h5path"} (case-insensitive). Note that for supported image formats, OpenSlide performance can be significantly better than BioFormats. Consider specifying `backend = "openslide"` when possible. If `None`, and a `filepath` is provided, tries to infer the correct backend from the file extension. Defaults to `None`.
    * platform (str, optional) – Flag indicating the imaging platform (e.g. CODEX, Vectra, etc.). Defaults to `None`. Ignored if `slide_type` is specified.
    * counts (anndata.AnnData) – object containing counts matrix associated with image quantification
* property counts(self)
* extract_region(self, location, size, *args, **kwargs)
  * Extract a region of the image. This is a convenience method which passes arguments through to the `extract_region()` method of whichever backend is in use. Refer to documentation for each backend.
    * *args – positional arguments passed through to `extract_region()` method of the backend.
    * **kwargs – keyword arguments passed through to `extract_region()` method of the backend.
* generate_tiles(self, ...)
  * Generator over Tile objects containing regions of the image. Calls the `generate_tiles()` method of the backend. Tries to add the corresponding slide-level masks to each tile, if possible. Adds slide-level labels to each tile, if possible.
* View a thumbnail of the image, using matplotlib. Not supported by all backends.
* run(self, pipeline, distributed=True, client=None, tile_size=256, tile_stride=None, level=0, tile_pad=False, overwrite_existing_tiles=False, write_dir=None, **kwargs)
  * Run a preprocessing pipeline on SlideData. Tiles are generated by calling self.generate_tiles() and the pipeline is applied to each tile.
    * tile_size (int, optional) – Size of each tile. Defaults to 256px.
    * tile_stride (int, optional) – Stride between tiles. If `None`, uses `tile_stride = tile_size` for non-overlapping tiles. Defaults to `None`.
    * level (int, optional) – Level to extract tiles from. Defaults to 0.
    * tile_pad (bool) – How to handle chunks on the edges.
If `True`, these edge chunks will be zero-padded symmetrically and yielded with the other chunks. If `False`, incomplete edge chunks will be ignored. Defaults to `False`.
    * overwrite_existing_tiles (bool) – Whether to overwrite existing tiles. If `False`, running a pipeline will fail if `tiles is not None`. Defaults to `False`.
    * write_dir (str) – Path to directory to write the processed slide to. The processed SlideData object will be written to the directory immediately after the pipeline has completed running. The filepath will default to "<write_dir>/<slide.name>.h5path". Defaults to `None`.
* property shape(self)
  * Convenience method for getting the image shape. Calling `wsi.shape` is equivalent to calling `wsi.slide.get_image_shape()` with default arguments.
* write(self, path)
  * Write contents to disk in h5path format.
    * path (Union[str, bytes, os.PathLike]) – path to file to be written

### Convenience SlideData Classes

* class pathml.core.HESlide(*args, **kwargs)
  * Convenience class to load a SlideData object with the `slide_type = types.HE` flag. Refer to `SlideData` for full documentation.
* class pathml.core.VectraSlide(*args, **kwargs)
  * Convenience class to load a SlideData object with the `slide_type = types.Vectra` flag.
* class pathml.core.MultiparametricSlide(*args, **kwargs)
  * Convenience class to load a SlideData object with the `slide_type = types.IF` flag.
* class pathml.core.CODEXSlide(*args, **kwargs)
  * Convenience class to load a SlideData object from Akoya Biosciences CODEX format. Passes through all arguments to `SlideData()`, along with `slide_type = types.CODEX`.
  * # TODO: hierarchical biaxial gating (flow-style analysis)

## Slide Types

* class pathml.core.SlideType(stain=None, platform=None, tma=None, rgb=None, volumetric=None, time_series=None)
  * SlideType objects define types based on a set of image parameters.
    * stain (str, optional) – One of ['HE', 'IHC', 'Fluor']. Flag indicating type of slide stain. Defaults to None.
    * platform (str, optional) – Flag indicating the imaging platform (e.g. CODEX, Vectra, etc.).
    * tma (bool, optional) – Flag indicating whether the slide is a tissue microarray (TMA). Defaults to False.
    * rgb (bool, optional) – Flag indicating whether image is in RGB color. Defaults to False.
    * volumetric (bool, optional) – Flag indicating whether image is volumetric. Defaults to False.
    * time_series (bool, optional) – Flag indicating whether image is time-series. Defaults to False.

Examples

```
>>> from pathml import SlideType, types
>>> he_type = SlideType(stain = "HE", rgb = True)  # define slide type manually
>>> types.HE == he_type  # can also use pre-made types for convenience
True
```

* asdict(self)
  * Convert to a dictionary. None values are represented as zeros and empty strings for compatibility with h5py attributes. If `a` is a SlideType object, then `a == SlideType(**a.asdict())` will be `True`.

We also provide instantiations of common slide types for convenience:

| Type | stain | platform |
| --- | --- | --- |
| `pathml.core.types.HE` | 'HE' | |
| `pathml.core.types.IHC` | 'IHC' | |
| `pathml.core.types.IF` | 'Fluor' | |
| `pathml.core.types.CODEX` | | 'CODEX' |
| `pathml.core.types.Vectra` | | 'Vectra' |
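As a quick orientation to this API, here is a minimal sketch of loading a slide, running a pipeline, and writing the result; the file paths are hypothetical and the transform choices are illustrative:

```
from pathml.core import HESlide
from pathml.preprocessing import Pipeline, BoxBlur, TissueDetectionHE

# load a whole-slide image (hypothetical path)
wsi = HESlide("example_slide.svs")

# compose a pipeline from modular transforms and apply it tile-by-tile
pipeline = Pipeline([
    BoxBlur(kernel_size=15),
    TissueDetectionHE(mask_name="tissue"),
])
wsi.run(pipeline, distributed=False, tile_size=500)

# write the processed slide to disk in h5path format
wsi.write("example_slide.h5path")
```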
## Tile

* class pathml.core.Tile(image, coords, name=None, masks=None, labels=None, counts=None, slide_type=None, stain=None, tma=None, rgb=None, volumetric=None, time_series=None)
  * Object representing a tile extracted from an image. Holds the array for the tile, as well as the (i,j) coordinates of the top-left corner of the tile in the original image. The (i,j) coordinate system is based on labelling the top-leftmost pixel as (0, 0).
    * image (np.ndarray) – Image array of tile
    * coords (tuple) – Coordinates of tile relative to the whole-slide image. The (i,j) coordinate system is based on labelling the top-leftmost pixel of the WSI as (0, 0).
    * name (str, optional) – Name of tile
    * masks (dict) – masks belonging to tile. If masks are supplied, all masks must be the same shape as the tile.
    * labels – labels belonging to tile
    * counts (AnnData) – counts matrix for the tile
* View the tile image, using matplotlib. Only supports RGB images currently.
* property shape(self)
  * Convenience method. Calling `tile.shape` is equivalent to calling `tile.image.shape`.

## SlideDataset

* class pathml.core.SlideDataset(slides)
  * Container for a dataset of WSIs
    * slides – list of SlideData objects
* run(self, pipeline, client=None, distributed=True, **kwargs)
  * Runs a preprocessing pipeline on all slides in the dataset
    * kwargs (dict) – keyword arguments passed to `run()` for each slide
* write(self, dir, filenames=None)
  * Write all SlideData objects to the specified directory. Calls the `.write()` method for each slide in the dataset. Optionally pass a list of filenames to use; otherwise filenames will be created from the `.name` attributes of each slide.
    * dir (Union[str, bytes, os.PathLike]) – Path to directory where slides are to be saved
    * filenames (List[str], optional) – list of filenames to be used

## Tiles and Masks helper classes

* class pathml.core.Tiles(h5manager, tiles=None)
  * Object wrapping a dict of tiles.
    * tiles (Union[dict[tuple[int], pathml.core.tiles.Tile], list[pathml.core.tiles.Tile]]) – tile objects
* remove(self, key)
  * Remove tile from tiles.
    * key (str) – key (coords) indicating tile to be removed
* property tile_shape(self)
* update(self, tile)
  * Update a tile.
    * tile (pathml.core.tile.Tile) – tile to be updated
* class pathml.core.Masks(h5manager, masks=None)
  * Object wrapping a dict of masks.
    * h5manager (pathml.core.h5pathManager)
    * masks (dict) – dictionary of np.ndarray objects representing e.g. labels, segmentations.
* add(self, key, mask)
  * Add mask indexed by key to self.h5manager.
    * key (str) – key
    * mask (np.ndarray) – array of mask. Must contain elements of type int8
* remove(self, key)
  * Remove mask.
* slice(self, slicer)
  * Slice all masks in self.h5manager, extending numpy array slicing.
    * slicer – list where each element is an object of type slice indicating how the dimension should be sliced

## Slide Backends

### OpenSlideBackend

* class pathml.core.OpenSlideBackend(filename)
  * Use OpenSlide to interface with image files. Depends on openslide-python, which wraps the openslide C library.
* extract_region(self, location, size, level=None)
  * Extract a region of the image
* generate_tiles(self, shape=3000, stride=None, pad=False, level=0)
  * Generator over tiles.
    * level (int, optional) – For slides with multiple levels, which level to extract tiles from. Defaults to 0 (highest resolution).
* get_image_shape(self, level=0)
  * Get the shape of the image at specified level.
    * level (int) – Which level to get shape from. Level 0 is highest resolution. Defaults to 0.
  * Returns: Shape of image at target level, in (i, j) coordinates.
* get_thumbnail(self, size)
  * Get a thumbnail of the slide.
    * size (Tuple[int, int]) – the maximum size of the thumbnail
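A small sketch of driving a backend directly, outside of `SlideData` (the file path is hypothetical; the calls follow the signatures documented above):

```
from pathml.core import OpenSlideBackend

backend = OpenSlideBackend("example_slide.svs")  # hypothetical path
print(backend.get_image_shape(level=0))          # (i, j) shape at full resolution

# extract a 500x500 region from the top-left corner of the slide
region = backend.extract_region(location=(0, 0), size=(500, 500))
```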
### BioFormatsBackend

* class pathml.core.BioFormatsBackend(filename, dtype=None)
  * Use BioFormats to interface with image files. Supports multi-level images. Depends on python-bioformats, which wraps the OME Bio-Formats Java library, parses pixel data and metadata of proprietary formats, and converts all formats to OME-TIFF. Please cite: https://pubmed.ncbi.nlm.nih.gov/20513764/
    * dtype (numpy.dtype) – data type of image. If `None`, will use BioFormats to infer the data type from the image's OME metadata. Defaults to `None`.

Note: While the Bio-Formats convention uses XYZCT channel order, we use YXZCT for compatibility with the rest of PathML, which is based on the (i, j) coordinate system.

* extract_region(self, location, size, level=0, series_as_channels=False, normalize=True)
  * Extract a region of the image. All bioformats images have 5 dimensions representing (i, j, z, channel, time). Even if an image does not have multiple z-series or time-series, those dimensions will still be kept. For example, a standard RGB image will be of shape (i, j, 1, 3, 1). If a tuple with len < 5 is passed, missing dimensions will be retrieved in full.
    * location (Tuple[int, int]) – (i, j) location of corner of extracted region closest to the origin.
    * size (Tuple[int, int, ...]) – (i, j) size of each region. If an integer is passed, it will be converted to a tuple of equal (i, j) sizes; missing dimensions will be retrieved in full.
    * series_as_channels (bool) – Whether to treat image series as channels. If `True`, multi-level images are not supported. Defaults to `False`.
    * normalize (bool, optional) – Whether to normalize the image to int8 before returning. Defaults to True. If False, the image will be returned as-is immediately after reading, typically in float64.
  * Returns: image at the specified region. 5-D array of (i, j, z, c, t)
* generate_tiles(self, ...)
  * Generator over tiles.
    * **kwargs – Other arguments passed through to `extract_region()` method.
* get_image_shape(self, level=None)
  * Get the shape of the image on a specific level.
    * level (int) – Which level to get shape from. If `level is None`, returns the shape of the biggest level. Defaults to `None`.
  * Returns: Shape of image (i, j) at target level
* get_thumbnail(self, size=None)
  * Get a thumbnail of the image. Since there is no default thumbnail for multiparametric, volumetric images, this function supports downsampling of all image dimensions.
    * size (Tuple[int, int]) – thumbnail size

Example: get a 1000x1000 thumbnail of a 7 channel fluorescent image.

```
shape = data.slide.get_image_shape()
thumb = data.slide.get_thumbnail(size=(1000, 1000, shape[2], shape[3], shape[4]))
```

### DICOMBackend

* class pathml.core.DICOMBackend(filename)
  * Interface with DICOM files on disk. Provides efficient access to individual Frame items contained in the Pixel Data element, without loading the entire element into memory. Assumes that frames are non-overlapping. DICOM does not support multi-level images.
    * filename (str) – Path to the DICOM Part10 file on disk
* extract_region(self, location, size=None, level=None)
  * Extract a single frame from the DICOM image.
    * location (Union[int, Tuple[int, int]]) – coordinate location of top-left corner of frame, or integer index of frame.
    * size (Union[int, Tuple[int, int]]) – Size of each tile. May be a tuple of (height, width) or a single integer, in which case square tiles of that size are generated. Must be the same as the frame size.
* generate_tiles(self, shape, stride, pad, level=0, **kwargs)
  * Generator over tiles. For DICOMBackend, each tile corresponds to a frame.
    * stride (int) – Ignored for DICOMBackend. Frames are yielded individually.
* static get_bot(fp)
  * Reads the value of the Basic Offset Table.
This table is used to access individual frames without loading the entire file into memory.
    * fp (pydicom.filebase.DicomFile) – pydicom DicomFile object
  * Returns: Offset of each Frame of the Pixel Data element following the Basic Offset Table
  * Return type: list
* get_image_shape(self)
  * Get the shape of the image.
* abstract get_thumbnail(self, size, **kwargs)

## h5pathManager

* class pathml.core.h5managers.h5pathManager(h5path=None, slidedata=None)
  * Interface between the slidedata object and data management on disk by h5py.
* add_mask(self, key, mask)
  * Add mask to h5. This manages slide-level masks.
    * key (str) – mask key
* add_tile(self, tile)
  * Add a tile to h5path.
    * tile (pathml.core.tile.Tile) – Tile object
* get_mask(self, item, slicer=None)
* get_slidetype(self)
* get_tile(self, item)
  * Retrieve tile from h5manager by key or index.
    * item (int, str, tuple) – key or index of tile to be retrieved
  * Returns: Tile (pathml.core.tile.Tile)
* remove_mask(self, key)
  * Remove mask by key.
* remove_tile(self, key)
  * Remove tile from self.h5 by key.
* slice_masks(self, slicer)
  * Generator slicing all masks, extending numpy array slicing.
    * slicer – List where each element is an object of type slice (https://docs.python.org/3/c-api/slice.html) indicating how the corresponding dimension should be sliced. The list length should correspond to the dimension of the tile. For 2D H&E images, pass a length-2 list of slice objects.
  * Yields: key (str) – mask key; val (np.ndarray) – mask
* update_mask(self, key, mask)
  * Update a mask.
    * key (str) – key indicating mask to be updated

## Pipeline

* class pathml.preprocessing.Pipeline(transform_sequence=None)
  * Compose a sequence of Transforms
    * transform_sequence (list) – sequence of transforms to be consecutively applied. List of pathml.core.Transform objects
* save(self, filename)
  * Save pipeline to disk
    * filename (str) – save path on disk

## Transforms

* class pathml.preprocessing.MedianBlur(kernel_size=5)
  * Median blur kernel.
* class pathml.preprocessing.GaussianBlur(kernel_size=5, sigma=5)
  * Gaussian blur kernel.
    * sigma (float) – Variance of Gaussian kernel. Variance is assumed to be equal in X and Y axes. Defaults to 5.
* class pathml.preprocessing.BoxBlur(kernel_size=5)
  * Box (average) blur kernel.
    * kernel_size (int) – Width of kernel. Defaults to 5.
* class pathml.preprocessing.BinaryThreshold(mask_name=None, use_otsu=True, threshold=0, inverse=False)
  * Binary thresholding transform to create a binary mask. If the input image is RGB it is first converted to greyscale; otherwise the input must have 1 channel.
    * use_otsu (bool) – Whether to use Otsu's method to automatically determine optimal threshold. Defaults to True.
    * threshold (int) – Specified threshold. Ignored if `use_otsu is True`. Defaults to 0.
    * inverse (bool) – Whether to use inverse threshold. If using inverse threshold, pixels below the threshold will be returned as 1. Otherwise pixels below the threshold will be returned as 0. Defaults to `False`.
  * References: Otsu, N., 1979. A threshold selection method from gray-level histograms. IEEE transactions on systems, man, and cybernetics, 9(1), pp.62-66.
* class pathml.preprocessing.MorphOpen(mask_name=None, kernel_size=5, n_iterations=1)
  * Morphological opening. First applies erosion operation, then dilation. Reduces noise by removing small objects from the background. Operates on a binary mask.
* class pathml.preprocessing.MorphClose(mask_name=None, kernel_size=5, n_iterations=1)
  * Morphological closing.
First applies dilation operation, then erosion. Reduces noise by closing small holes in the foreground. Operates on a binary mask.
* class pathml.preprocessing.ForegroundDetection(mask_name=None, min_region_size=5000, max_hole_size=1500, outer_contours_only=False)
  * Foreground detection for binary masks. Identifies regions that have a total area greater than a specified threshold. Supports including holes within foreground regions, or excluding holes above a specified area threshold; holes are ignored entirely if `outer_contours_only is True`.
  * References: Lu, M.Y., Williamson, D.F.K., Chen, T.Y., Chen, R.J., Barbieri, M. and Mahmood, F., 2020. Data Efficient and Weakly Supervised Computational Pathology on Whole Slide Images. arXiv preprint arXiv:2004.09666.
* class pathml.preprocessing.SuperpixelInterpolation(region_size=10, n_iter=30)
  * Divide input image into superpixels using the SLIC algorithm, then interpolate each superpixel with its average color. SLIC superpixel algorithm described in Achanta et al. 2012.
  * References: Achanta, R., Shaji, A., Smith, K., Lucchi, A., Fua, P. and Süsstrunk, S., 2012. SLIC superpixels compared to state-of-the-art superpixel methods. IEEE transactions on pattern analysis and machine intelligence, 34(11), pp.2274-2282.
* class pathml.preprocessing.StainNormalizationHE(target='normalize', stain_estimation_method='macenko', optical_density_threshold=0.15, sparsity_regularizer=1.0, angular_percentile=0.01, regularizer_lasso=0.01, background_intensity=245, stain_matrix_target_od=np.array([[0.5626, 0.2159], [0.7201, 0.8012], [0.4062, 0.5581]]), max_c_target=np.array([1.9705, 1.0308]))
  * Normalize H&E stained images to a reference slide. Can also be used to separate the hematoxylin and eosin channels.

  H&E images are assumed to be composed of two stains, each one having a vector of its characteristic RGB values. The stain matrix is a 3x2 matrix where the first column corresponds to the hematoxylin stain vector and the second corresponds to the eosin stain vector. The stain matrix can be estimated from a reference image in a number of ways; here we provide implementations of two such algorithms, from Macenko et al. and Vahadane et al.

  After estimating the stain matrix for an image, the next step is to assign stain concentrations to each pixel. Each pixel is assumed to be a linear combination of the two stain vectors, where the coefficients are the intensities of each stain vector at that pixel. To solve for the intensities, we use least squares in the Macenko method and lasso in the Vahadane method.

  The image can then be reconstructed by applying those pixel intensities to a stain matrix. This allows you to standardize the appearance of an image by reconstructing it using a reference stain matrix. Using this method of normalization may help account for differences in slide appearance arising from variations in staining procedure, differences between scanners, etc. Images can also be reconstructed using only a single stain vector, e.g. to separate the hematoxylin and eosin channels of an H&E image.

  This code is based in part on StainTools: https://github.com/Peter554/StainTools
    * target (str) – one of 'normalize', 'hematoxylin', or 'eosin'. Defaults to 'normalize'
    * stain_estimation_method (str) – method for estimating stain matrix. Must be one of 'macenko' or 'vahadane'. Defaults to 'macenko'.
    * optical_density_threshold (float) – Threshold for removing low-optical density pixels when estimating stain vectors.
Defaults to 0.15.
    * sparsity_regularizer (float) – Sparsity regularization parameter. Defaults to 1.0. Ignored if `concentration_estimation_method != 'vahadane'`.
    * angular_percentile (float) – Defaults to 0.01. Ignored if `concentration_estimation_method != 'macenko'`.
    * regularizer_lasso (float) – regularization parameter for lasso solver. Defaults to 0.01. Ignored if `method != 'lasso'`.
    * background_intensity (int) – Intensity of background light. Must be an integer between 0 and 255. Defaults to 245.
    * stain_matrix_target_od (np.ndarray) – Stain matrix for reference slide. Matrix of H and E stain vectors in optical density (OD) space. Stain matrix is (3, 2) and the first column corresponds to hematoxylin. The default stain matrix can be used, or you can fit to a reference slide of your choosing by calling `fit_to_reference()`.
    * max_c_target (np.ndarray) – Maximum concentrations of each stain in the reference slide. The default can be used, or you can fit to a reference slide of your choosing by calling `fit_to_reference()`.

Note: If using `stain_estimation_method = "Vahadane"`, spams must be installed, along with all of its dependencies (i.e. libblas & liblapack).

* fit_to_reference(self, image_ref)
  * Fit `stain_matrix` and `max_c` to a reference slide. This allows you to use a specific slide as the reference for stain normalization. Works by first estimating the stain matrix from the input reference image, then estimating pixel concentrations. The newly computed stain matrix and maximum concentrations are then used for any future color normalization.
    * image_ref (np.ndarray) – RGB reference image
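A minimal sketch of fitting the normalizer to a reference slide of your own; the reference image here is a placeholder array, and in practice it would be a region read from a real reference slide:

```
import numpy as np
from pathml.preprocessing import StainNormalizationHE

normalizer = StainNormalizationHE(target="normalize", stain_estimation_method="macenko")

# placeholder reference image; in practice, an RGB region from a reference slide
image_ref = np.random.randint(0, 255, size=(500, 500, 3), dtype=np.uint8)
normalizer.fit_to_reference(image_ref)

# subsequent calls to normalizer.apply(tile) will use the fitted stain matrix
```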
* class pathml.preprocessing.NucleusDetectionHE(mask_name=None, stain_estimation_method='vahadane', superpixel_region_size=10, n_iter=30, **stain_kwargs)
  * Simple nucleus detection algorithm for H&E stained images. Works by first separating the hematoxylin channel, then doing interpolation using superpixels, and finally using Otsu's method for binary thresholding.
    * stain_estimation_method (str) – Method for estimating stain matrix. Defaults to "vahadane"
    * stain_kwargs (dict) – other arguments passed to `StainNormalizationHE()`
  * References: Hu, B., Tang, Y., Chang, E.I., Fan, Y., Lai, M. and Xu, Y., 2018. Unsupervised learning for cell-level visual representation in histopathology images with generative adversarial networks. IEEE journal of biomedical and health informatics, 23(3), pp.1316-1328.
* class pathml.preprocessing.TissueDetectionHE(mask_name=None, use_saturation=True, blur_ksize=17, threshold=None, morph_n_iter=3, morph_k_size=7, min_region_size=5000, max_hole_size=1500, outer_contours_only=False)
  * Detect tissue regions from an H&E stained slide. First applies a median blur, then binary thresholding, then morphological opening and closing, and finally foreground detection.
    * use_saturation (bool) – Whether to convert to HSV and use the saturation channel for tissue detection. If False, convert from RGB to greyscale and use the greyscale image for tissue detection. Defaults to True.
    * blur_ksize (int) – kernel size used to apply median blurring. Defaults to 17.
    * threshold (int) – threshold for binary thresholding. If None, uses Otsu's method. Defaults to None.
    * morph_n_iter (int) – number of iterations of morphological opening and closing to apply. Defaults to 3.
    * morph_k_size (int) – kernel size for morphological opening and closing. Defaults to 7.
    * mask_name (str) – name for new mask
* class pathml.preprocessing.LabelArtifactTileHE(label_name=None)
  * Applies a rule-based method to identify whether or not an image contains artifacts (e.g. pen marks). Based on criteria from Kothari et al. 2012.
  * References: Kothari, S., Phan, J.H., Osunkoya, A.O. and Wang, M.D., 2012, October. Biological interpretation of morphological patterns in histopathological whole-slide images. In Proceedings of the ACM Conference on Bioinformatics, Computational Biology and Biomedicine (pp. 218-225).
* class pathml.preprocessing.LabelWhiteSpaceHE(label_name=None, greyscale_threshold=230, proportion_threshold=0.5)
  * Simple threshold method to label an image as majority whitespace. Converts the image to greyscale. If the proportion of pixels exceeding the greyscale threshold is greater than the proportion threshold, then the image is labelled as whitespace.
* class pathml.preprocessing.SegmentMIF(model='mesmer', nuclear_channel=None, cytoplasm_channel=None, image_resolution=0.5, preprocess_kwargs=None, postprocess_kwargs_nuclear=None, postprocess_kwargs_whole_cell=None)
  * Transform applying segmentation to MIF images. The input image must be formatted (c, x, y) or (batch, c, x, y); z and t dimensions must be selected before calling SegmentMIF.

  Supported models:
  * Mesmer: uses a human-in-the-loop pipeline to train a ResNet50 backbone with a Feature Pyramid Network segmentation model on 1.3 million cell annotations and 1.2 million nuclear annotations (TissueNet dataset). The model outputs predictions for the centroid and boundary of every nucleus and cell; these centroid and boundary predictions are then used as inputs to a watershed algorithm that creates the segmentation masks.
  * Cellpose: [coming soon]

  The Mesmer model requires installation of the deepcell dependency: `pip install deepcell`
    * model (str) – string indicating which segmentation model to use. Currently only 'mesmer' is supported.
    * nuclear_channel (int) – channel that defines cell nucleus
    * cytoplasm_channel (int) – channel that defines cell membrane or cytoplasm
    * image_resolution (float) – pixel resolution of image in microns
    * preprocess_kwargs (dict) – keyword arguments to pass to pre-processing function
    * postprocess_kwargs_nuclear (dict) – keyword arguments to pass to post-processing function
    * postprocess_kwargs_whole_cell (dict) – keyword arguments to pass to post-processing function
  * References: Greenwald, N.F., Miller, G., Moen, E. et al. Whole-cell segmentation of tissue images with human-level performance using large-scale data annotation and deep learning. Nat Biotechnol (2021). https://doi.org/10.1038/s41587-021-01094-0

    Stringer, C., Wang, T., Michaelos, M. and Pachitariu, M., 2021. Cellpose: a generalist algorithm for cellular segmentation. Nature Methods, 18(1), pp.100-106.
* class pathml.preprocessing.QuantifyMIF(segmentation_mask)
  * Convert a segmented image into an `anndata.AnnData` counts object. Counts objects are used to interface with the Python single cell analysis ecosystem Scanpy. The counts object contains a summary of channel statistics in each cell along with its coordinates.
    * segmentation_mask (str) – key indicating which mask to use as the label image
* F(self, img, segmentation, coords_offset=(0, 0))
  * Functional implementation
    * img (np.ndarray) – Input image of shape (i, j, n_channels)
    * segmentation (np.ndarray) – Segmentation map of shape (i, j) or (i, j, 1). Zeros are background. Regions should be labelled with unique integers.
    * coords_offset (tuple, optional) – Coordinates (i, j) used to convert tile-level coordinates to slide-level. Defaults to (0, 0) for no offset.
  * Returns: Counts matrix
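Putting the multiparametric transforms together, here is a minimal sketch of a CODEX quantification pipeline. The channel indices, image resolution, and the segmentation mask key are illustrative assumptions, not values taken from the documentation above:

```
from pathml.preprocessing import Pipeline, CollapseRunsCODEX, SegmentMIF, QuantifyMIF

pipeline = Pipeline([
    CollapseRunsCODEX(z=0),  # pick the in-focus z-plane, collapsing to (x, y, c)
    SegmentMIF(model="mesmer",
               nuclear_channel=0,        # assumed channel index
               cytoplasm_channel=29,     # assumed channel index
               image_resolution=0.377),  # assumed pixel resolution in microns
    QuantifyMIF(segmentation_mask="cell_segmentation"),  # assumed mask key produced by SegmentMIF
])
```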
* class pathml.preprocessing.CollapseRunsVectra
  * Coerce Vectra output to standard format. For compatibility with transforms, tiles need to have their shape collapsed to (x, y, c).
* class pathml.preprocessing.CollapseRunsCODEX(z)
  * Coerce CODEX output to standard format. CODEX format is (x, y, z, c, t), where c=4 (4 runs per cycle) and t is the number of cycles. Output format is (x, y, c), where all cycles are collapsed into c (c = 4 * # of cycles).
    * z (int) – in-focus z-plane
* class pathml.preprocessing.RescaleIntensity(in_range='image', out_range='dtype')
  * Return image after stretching or shrinking its intensity levels. The desired intensity ranges of the input and output, in_range and out_range respectively, are used to stretch or shrink the intensity range of the input image. This function is a wrapper for the `rescale_intensity` function from scikit-image: https://scikit-image.org/docs/dev/api/skimage.exposure.html#skimage.exposure.rescale_intensity
* class pathml.preprocessing.HistogramEqualization(nbins=256, mask=None)
  * Return image after histogram equalization. This function is a wrapper for the `equalize_hist` function from scikit-image: https://scikit-image.org/docs/dev/api/skimage.exposure.html#skimage.exposure.equalize_hist
    * nbins (int, optional) – Number of gray bins for histogram. Note: this argument is ignored for integer images, for which each integer is its own bin.
    * mask (ndarray of bools or 0s and 1s, optional) – Array of same shape as image. Only points at which mask == True are used for the equalization, which is applied to the whole image.
* class pathml.preprocessing.AdaptiveHistogramEqualization(kernel_size=None, clip_limit=0.3, nbins=256)
  * Contrast Limited Adaptive Histogram Equalization (CLAHE). An algorithm for local contrast enhancement that uses histograms computed over different tile regions of the image. Local details can therefore be enhanced even in regions that are darker or lighter than most of the image. This function is a wrapper for the `equalize_adapthist` function from scikit-image: https://scikit-image.org/docs/dev/api/skimage.exposure.html#skimage.exposure.equalize_adapthist
    * kernel_size (int or array_like, optional) – Defines the shape of contextual regions used in the algorithm. If an iterable is passed, it must have the same number of elements as image.ndim (without color channel). If an integer, it is broadcast to each image dimension. By default, kernel_size is 1/8 of image height by 1/8 of its width.
    * clip_limit (float) – Clipping limit, normalized between 0 and 1 (higher values give more contrast).
    * nbins (int) – Number of gray bins for histogram ("data range").

## PanNuke

* class pathml.datasets.PanNukeDataModule(data_dir, download=False, shuffle=True, transforms=None, nucleus_type_labels=False, split=None, batch_size=8, hovernet_preprocess=False)
  * DataModule for the PanNuke dataset. Contains 256px image patches from 19 tissue types with annotations for 5 nucleus types. For more information, see: https://warwick.ac.uk/fac/sci/dcs/research/tia/data/pannuke
    * data_dir (str) – Path to directory where PanNuke data is
    * transforms (optional) – Data augmentation transforms to apply to images. Transform must accept two arguments (mask and image) and return a dict with "image" and "mask" keys. See an example here: https://albumentations.ai/docs/getting_started/mask_augmentation/
    * nucleus_type_labels (bool, optional) – Whether to provide nucleus type labels, or binary nucleus labels.
## PanNuke

* class pathml.datasets.PanNukeDataModule(data_dir, download=False, shuffle=True, transforms=None, nucleus_type_labels=False, split=None, batch_size=8, hovernet_preprocess=False) * DataModule for the PanNuke Dataset. Contains 256px image patches from 19 tissue types with annotations for 5 nucleus types. For more information, see: https://warwick.ac.uk/fac/sci/dcs/research/tia/data/pannuke
data_dir (str) – Path to directory where PanNuke data is * transforms (optional) – Data augmentation transforms to apply to images. Transform must accept two arguments (mask and image) and return a dict with “image” and “mask” keys. See an example here: https://albumentations.ai/docs/getting_started/mask_augmentation/ * nucleus_type_labels (bool, optional) – Whether to provide nucleus type labels, or binary nucleus labels. If `True`, then masks will be returned with six channels, corresponding to:
  * Neoplastic cells
  * Inflammatory
  * Connective/Soft tissue cells
  * Dead cells
  * Epithelial
  * Background
If `False`, then the returned mask will have a single channel, with zeros for background pixels and ones for nucleus pixels (i.e. the inverse of the Background mask). Defaults to `False`. * split (int, optional) – How to divide the three folds into train, test, and validation splits. Must be one of {1, 2, 3, None}, corresponding to the following splits: Training: Fold 3; Validation: Fold 2; Testing: Fold 1. If `None`, then the entire PanNuke dataset will be used. Defaults to `None`. * hovernet_preprocess (bool) – Whether to perform preprocessing specific to the HoVer-Net architecture. If `True`, the center of mass of each nucleus will be computed, and an additional mask will be returned with the distance of each nuclear pixel to its center of mass in the horizontal and vertical dimensions. This corresponds to Gamma(I) from the HoVer-Net paper. Defaults to `False`.

## DeepFocus

* class pathml.datasets.DeepFocusDataModule(data_dir, download=False, shuffle=True, transforms=None, batch_size=8) * DataModule for the DeepFocus dataset. The DeepFocus dataset comprises four slides from different patients, each with four different stains (H&E, Ki67, CD21, and CD10), for a total of 16 whole-slide images. For each slide, a region of interest (ROI) of approx. 6mm^2 was scanned at 40x magnification with an Aperio ScanScope on nine different focal planes, generating 216,000 samples with varying amounts of blurriness. Tiles with offset values between [-0.5μm, 0.5μm] are labeled as in-focus and the rest of the images are labeled as blurry. See: https://github.com/cialab/DeepFocus
data_dir (str) – file path to directory containing data. * transforms (optional) – Data augmentation transforms to apply to images.
Reference: <NAME>., <NAME>., <NAME>. and <NAME>., 2018. DeepFocus: detection of out-of-focus regions in whole slide digital images using deep learning. PloS one, 13(10), p.e0205387.

## h5path Dataset

* class pathml.ml.TileDataset(file_path) * PyTorch Dataset class for h5path files. Each item is a tuple of (`tile_image`, `tile_masks`, `tile_labels`, `slide_labels`) where:
  * `tile_image` is a torch.Tensor of shape (C, H, W) or (T, Z, C, H, W)
  * `tile_masks` is a torch.Tensor of shape (n_masks, tile_height, tile_width)
  * `tile_labels` is a dict
  * `slide_labels` is a dict
This is designed to be wrapped in a PyTorch DataLoader for feeding tiles into ML models. Note that label dictionaries are not standardized, as users are free to store whatever labels they want. For that reason, PyTorch cannot automatically stack labels into batches. When creating a DataLoader from a TileDataset, it may therefore be necessary to create a custom `collate_fn` to specify how to create batches of labels; a sketch follows below. See: https://discuss.pytorch.org/t/how-to-use-collate-fn/27181
file_path (str) – Path to .h5path file on disk
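The note above about label dictionaries suggests a custom `collate_fn`; here is a minimal sketch of one, which stacks images and masks into batch tensors and simply collects the label dicts into lists. The file name is hypothetical, and the sketch assumes every tile carries a mask.

```python
import torch
from torch.utils.data import DataLoader
from pathml.ml import TileDataset

def collate(batch):
    # each item is (tile_image, tile_masks, tile_labels, slide_labels)
    images = torch.stack([item[0] for item in batch])
    masks = torch.stack([item[1] for item in batch])
    tile_labels = [item[2] for item in batch]    # dicts: keep as a list
    slide_labels = [item[3] for item in batch]   # dicts: keep as a list
    return images, masks, tile_labels, slide_labels

dataset = TileDataset("processed_slide.h5path")
loader = DataLoader(dataset, batch_size=8, shuffle=True, collate_fn=collate)
```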
## HoVer-Net

* class pathml.ml.HoVerNet(n_classes=None) * Model for simultaneous segmentation and classification based on HoVer-Net. Can also be used for segmentation only, if class labels are not supplied. Each branch returns logits.
  * forward(self, inputs)

### Helper functions

* pathml.ml.hovernet.compute_hv_map(mask) * Preprocessing step for the HoVer-Net architecture. Compute the center of mass for each nucleus, then compute the distance of each nuclear pixel to its corresponding center of mass. Nuclear pixel distances are normalized to (-1, 1). Background pixels are left as 0. Operates on a single mask. Can be used in a Dataset object to make a Dataloader compatible with HoVer-Net. Based on https://github.com/vqdang/hover_net/blob/195ed9b6cc67b12f908285492796fb5c6c15a000/src/loader/augs.py#L192
mask (np.ndarray) – Mask indicating individual nuclei. Array of shape (H, W), where each pixel is in {0, …, n} with 0 indicating background pixels and {1, …, n} indicating n unique nuclei. * Returns * array of hv maps of shape (2, H, W). First channel corresponds to horizontal and second to vertical.

* pathml.ml.hovernet.loss_hovernet(outputs, ground_truth, n_classes=None) * Compute loss for HoVer-Net. Equation (1) in Graham et al.
outputs – Output of HoVer-Net. Should be a list of [np, hv] if n_classes is None, or a list of [np, hv, nc] if n_classes is not None. Shapes of each should be:
  * np: (B, 2, H, W)
  * hv: (B, 2, H, W)
  * nc: (B, n_classes, H, W)
ground_truth – True labels. Should be a list of [mask, hv], where mask is a Tensor of shape (B, 1, H, W) if n_classes is `None` or (B, n_classes, H, W) if n_classes is not `None`. hv is a tensor of precomputed horizontal and vertical distances of nuclear pixels to their corresponding centers of mass, and is of shape (B, 2, H, W).
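A minimal sketch of a HoVer-Net training step wired together from the pieces above; the batch size, tile size, and number of classes are arbitrary, and the random tensors stand in for real images, ground-truth masks, and the hv maps that `compute_hv_map` would normally produce in the Dataset.

```python
import torch
from pathml.ml import HoVerNet
from pathml.ml.hovernet import loss_hovernet

n_classes = 6
model = HoVerNet(n_classes=n_classes)

images = torch.rand(2, 3, 256, 256)                             # toy RGB tiles
masks = torch.randint(0, 2, (2, n_classes, 256, 256)).float()   # toy class masks
hv = torch.rand(2, 2, 256, 256) * 2 - 1                         # toy hv maps in (-1, 1)

outputs = model(images)   # [np, hv, nc] logits, since n_classes is not None
loss = loss_hovernet(outputs=outputs, ground_truth=[masks, hv], n_classes=n_classes)
loss.backward()
```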
* pathml.ml.hovernet.remove_small_objs(array_in, min_size) * Removes small foreground regions from a binary array, leaving only the contiguous regions which are above the size threshold. Pixels in regions below the size threshold are zeroed out.
array_in (np.ndarray) – Input array. Must be a binary array with dtype=np.uint8. * min_size (int) – Minimum size of each region. * Returns * Array of labels for regions above the threshold. Each separate contiguous region is labelled with a different integer from 1 to n, where n is the number of total distinct contiguous regions.

* pathml.ml.hovernet.post_process_batch_hovernet(outputs, n_classes, small_obj_size_thresh=10, kernel_size=21, h=0.5, k=0.5) * Post-process HoVer-Net outputs to get a final predicted mask. See: Section B of the HoVer-Net article and https://github.com/vqdang/hover_net/blob/14c5996fa61ede4691e87905775e8f4243da6a62/models/hovernet/post_proc.py#L27
outputs (list) – Outputs of HoVer-Net model. List of [np_out, hv_out], or [np_out, hv_out, nc_out], depending on whether the model is predicting classification or not.
  * np_out is a Tensor of shape (B, 2, H, W) of logit predictions for binary classification
  * hv_out is a Tensor of shape (B, 2, H, W) of predictions for horizontal/vertical maps
  * nc_out is a Tensor of shape (B, n_classes, H, W) of logits for classification
n_classes (int) – Number of classes for classification task. If `None` then only segmentation is performed. * small_obj_size_thresh (int) – Minimum number of pixels in regions. Defaults to 10. * kernel_size (int) – Width of Sobel kernel used to compute horizontal and vertical gradients. * h (float) – hyperparameter for thresholding nucleus probabilities. Defaults to 0.5. * k (float) – hyperparameter for thresholding energy landscape to create markers for watershed segmentation. Defaults to 0.5. * Returns * If n_classes is None, returns det_out. In the classification setting, returns (det_out, class_out).
  * det_out is an np.ndarray of shape (B, H, W)
  * class_out is an np.ndarray of shape (B, n_classes, H, W)
Each pixel is labelled from 0 to n, where n is the number of individual nuclei detected; 0 pixels indicate background, and pixel value i indicates that the pixel belongs to the ith nucleus.

Documentation for various utilities from all modules.

## Logging Utils

* class pathml.PathMLLogger * Convenience methods for turning logging on or off and configuring it for PathML. Note that this can also be achieved by interfacing with loguru directly. Example:

```
from pathml import PathMLLogger as pml

# turn on logging for PathML
pml.enable()

# turn off logging for PathML
pml.disable()

# turn on logging and output logs to a file named 'logs.txt', with colorization enabled
pml.enable(sink="logs.txt", colorize=True)
```

* static disable() * Turn off logging for PathML
* static enable(sink=sys.stderr, level='DEBUG', fmt='PathML:{level}:{time:HH:mm:ss} | {module}:{function}:{line} | {message}', **kwargs) * Turn on and configure logging for PathML. sink (str or io._io.TextIOWrapper, optional) – Destination sink for log messages. Defaults to `sys.stderr`. * level (str) – level of logs to capture. Defaults to ‘DEBUG’. * fmt (str) – Formatting for the log message. Defaults to: ‘PathML:{level}:{time:HH:mm:ss} | {module}:{function}:{line} | {message}’ * **kwargs (dict, optional) – additional options passed to configure logger. See: loguru documentation

## Core Utils

* pathml.core.utils.readtupleh5(h5, key) * Read tuple from h5. h5 (h5py.Dataset or h5py.Group) – h5 object that will be read from * key (str) – key where data to read is stored
* pathml.core.utils.writedataframeh5(h5, name, df) * Write dataframe as h5 dataset. df (pd.DataFrame) – dataframe to be written
* pathml.core.utils.writedicth5(h5, name, dic) * Write dict as attributes of h5py.Group. dic (dict) – dict to be written
* pathml.core.utils.writestringh5(h5, name, st) * Write string as h5 attribute. st (str) – string to be written
* pathml.core.utils.writetupleh5(h5, name, tup) * Write tuple as h5 attribute. tup (tuple) – tuple to be written
* pathml.core.utils.readcounts(h5) * Read counts using anndata h5py. h5 (h5py.Dataset) – h5 object that will be read
* pathml.core.utils.writecounts(h5, counts) * Write counts using anndata h5py. counts (anndata.AnnData) – anndata object to be written
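A short sketch of the h5 helpers above in action; the file and key names are hypothetical.

```python
import h5py
import pandas as pd
from pathml.core.utils import writedataframeh5, writetupleh5, readtupleh5

df = pd.DataFrame({"channel": [0, 1], "mean_intensity": [0.4, 0.7]})

with h5py.File("example.h5", "w") as f:
    writetupleh5(f, "tile_shape", (256, 256))   # tuple stored as an h5 attribute
    writedataframeh5(f, "channel_stats", df)    # dataframe stored as an h5 dataset

with h5py.File("example.h5", "r") as f:
    tile_shape = readtupleh5(f, "tile_shape")
```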
## Datasets Utils

* pathml.datasets.utils.pannuke_multiclass_mask_to_nucleus_mask(multiclass_mask) * Convert a multiclass mask from PanNuke to a single-channel nucleus mask. Assumes each pixel is assigned to one and only one class. Sums across channels, except the last mask channel, which indicates background pixels in PanNuke. Operates on a single mask. multiclass_mask (torch.Tensor) – Mask from PanNuke, in the classification setting (i.e. `nucleus_type_labels=True`). Tensor of shape (6, 256, 256). * Returns * Tensor of shape (256, 256).

## ML Utils

* pathml.ml.utils.center_crop_im_batch(batch, dims, batch_order='BCHW') * Center crop images in a batch. batch – The batch of images to be cropped * dims – Amount to be cropped (tuple for H, W)
* pathml.ml.utils.dice_loss(true, logits, eps=0.001) * Computes the Sørensen–Dice loss. Note that PyTorch optimizers minimize a loss; since we would like to maximize the Dice coefficient, the function returns 1 - Dice. From: https://github.com/kevinzakka/pytorch-goodies/blob/c039691f349be9f21527bb38b907a940bfc5e8f3/losses.py#L54 true – a tensor of shape [B, 1, H, W]. * logits – a tensor of shape [B, C, H, W]. Corresponds to the raw output or logits of the model. * eps – added to the denominator for numerical stability. * Returns * the Sørensen–Dice loss.
* pathml.ml.utils.dice_score(pred, truth, eps=0.001) * Calculate the Dice score for two tensors of the same shape. If the tensors are not already binary, they are converted to bool by zero/non-zero. pred (np.ndarray) – Predictions * truth (np.ndarray) – ground truth * eps (float, optional) – Constant used for numerical stability to avoid divide-by-zero errors. Defaults to 1e-3. * Returns * Dice score * Return type * float
* pathml.ml.utils.get_sobel_kernels(size, dt=torch.float32) * Create horizontal and vertical Sobel kernels for approximating gradients. Returned kernels will be of shape (size, size).
* pathml.ml.utils.wrap_transform_multichannel(transform) * Wrapper to make an albumentations transform compatible with a multichannel mask. Channel should be in the first dimension, i.e. (n_mask_channels, H, W). transform – Albumentations transform. Must have the ‘additional_targets’ parameter specified with a total of n_channels key-value pairs. All values must be ‘mask’ but the keys don’t matter, e.g. for a mask with 3 channels you could use: additional_targets = {‘mask1’: ‘mask’, ‘mask2’: ‘mask’, ‘pathml’: ‘mask’} * Returns * function that can be called with a multichannel mask argument
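A sketch of how `wrap_transform_multichannel` might be used with albumentations, following the `additional_targets` pattern described above; the exact calling convention of the wrapped function (`image=`/`mask=` keywords) is an assumption.

```python
import albumentations as A
import numpy as np
from pathml.ml.utils import wrap_transform_multichannel

n_channels = 3
aug = A.Compose(
    [A.HorizontalFlip(p=0.5), A.RandomRotate90(p=0.5)],
    # one entry per mask channel; all values must be 'mask', keys are arbitrary
    additional_targets={f"mask{i}": "mask" for i in range(n_channels)},
)
transform = wrap_transform_multichannel(aug)

image = np.random.randint(0, 255, (256, 256, 3), dtype=np.uint8)
mask = np.random.randint(0, 2, (n_channels, 256, 256), dtype=np.uint8)
out = transform(image=image, mask=mask)   # channels stay aligned with the image
```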
## Miscellaneous Utils

* pathml.utils.upsample_array(arr, factor) * Upsample array by a factor. Each element in the input array will become a CxC block in the upsampled array, where C is the constant upsampling factor. From https://stackoverflow.com/a/32848377 arr (np.ndarray) – input array to be upsampled * factor (int) – Upsampling factor
* pathml.utils.pil_to_rgb(image_array_pil) * Convert PIL RGBA Image to numpy RGB array
* pathml.utils.segmentation_lines(mask_in) * Generate coords of points bordering segmentations from a given mask. Useful for plotting results of tissue detection or other segmentation.
* pathml.utils.plot_mask(im, mask_in, ax=None, color='red', downsample_factor=None) * Plot results of segmentation, overlaying on the original image_ref. im (np.ndarray) – Original RGB image_ref * mask_in (np.ndarray) – Boolean array of segmentation mask, with True values for masked pixels. Must be same shape as im. * ax – Matplotlib axes object to plot on. If None, creates a new plot. Defaults to None. * color – Color to plot outlines of mask. Defaults to “red”. Must be recognized by matplotlib. * downsample_factor – Downsample factor for image_ref and mask to speed up plotting for big images
* pathml.utils.contour_centroid(contour) * Return the centroid of a contour, calculated using moments. From the OpenCV implementation. contour (np.array) – Contour array as returned by cv2.findContours * Returns * (x, y) coordinates of centroid. * Return type * tuple
* pathml.utils.sort_points_clockwise(points) * Sort a list of points into clockwise order around the centroid, ordering by angle with the centroid and x-axis. After sorting, the points can be passed to cv2 as a contour. The centroid is defined as the center of the bounding box around the points. points (np.ndarray) – Array of points (N x 2) * Returns * Array of points, sorted in order by angle with the centroid (N x 2)
* pathml.utils.pad_or_crop(array, target_shape) * Make dimensions of input array match the target shape by either zero-padding or cropping each axis. array (np.ndarray) – Input array * target_shape (tuple) – Target shape of output * Returns * Input array cropped/padded to match target_shape
* pathml.utils.RGB_to_HSI(imarr) * Convert imarr from RGB to HSI colorspace. imarr (np.ndarray) – numpy array of RGB image_ref (m, n, 3) * Returns * numpy array of HSI image_ref (m, n, 3)
* pathml.utils.RGB_to_OD(imarr) * Convert input image from RGB space to optical density (OD) space. OD = -log(I), where I is the input image in RGB space. imarr (numpy.ndarray) – Image array, RGB format * Returns * Image array, OD format * Return type * numpy.ndarray
* pathml.utils.RGB_to_HSV(imarr) * convert image from RGB to HSV
* pathml.utils.RGB_to_LAB(imarr) * convert image from RGB to LAB color space
* pathml.utils.RGB_to_GREY(imarr) * convert image_ref from RGB to greyscale
* pathml.utils.normalize_matrix_rows(A) * Normalize the rows of an array. A (np.ndarray) – Input array. * Returns * Array with rows normalized.
* pathml.utils.normalize_matrix_cols(A) * Normalize the columns of an array. A (np.ndarray) – An array * Returns * Array with columns normalized
* pathml.utils.plot_segmentation(ax, masks, palette=None, markersize=5) * Plot segmentation contours. Supports multi-class masks. ax – matplotlib axis * masks (np.ndarray) – Mask array of shape (n_masks, H, W). Zeroes are background pixels. * palette – color palette to use. If None, defaults to matplotlib.colors.TABLEAU_COLORS * markersize (int) – Size of markers used on plot. Defaults to 5

`PathML` is an open source project. Consider contributing to benefit the entire community! There are many ways to contribute to PathML, including:
* Submitting bug reports
* Submitting feature requests
* Writing documentation
* Fixing bugs
* Writing code for new features
* Sharing trained model parameters [coming soon]
* Sharing `PathML` with colleagues, students, etc.

## Submitting a bug report

Report bugs or errors by filing an issue on GitHub. Make sure to include the following information:
* Short description of the bug
* Minimum working example to reproduce the bug
* Expected result vs. actual result
If a bug cannot be reproduced by someone else on a different machine, it will usually be hard to identify what is causing it.

## Requesting a new feature

Request a new feature by filing an issue on GitHub. Make sure to include the following information:
* Description of the feature
* Pseudocode of how the feature might work (if applicable)

## For developers

### Coordinate system conventions

With multiple tools for interacting with matrices/images, conflicting coordinate systems have been a common source of bugs. This is typically caused by mixing up (X, Y) coordinate systems and (i, j) coordinate systems. To avoid these issues, we have adopted the (i, j) coordinate convention throughout PathML, as illustrated in the sketch below. This follows the convention used by NumPy and many others, where `A[i, j]` refers to the element of matrix A in the ith row, jth column. Developers should be careful about coordinate systems and make the necessary adjustments when using third-party tools, so that users of PathML can rely on a consistent coordinate system when using our tools.
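A tiny illustration of the (i, j) convention described above:

```python
import numpy as np

A = np.zeros((100, 200), dtype=np.uint8)   # 100 rows (i), 200 columns (j)
A[10, 50] = 1                              # element in row i=10, column j=50

# In an (X, Y) convention the same pixel is often addressed as x=50, y=10;
# the swapped argument order is a classic source of coordinate bugs.
```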
### Setting up a local development environment

* Create a new fork of the `PathML` repository
* Clone your fork to your local machine
* Set up the PathML environment: ``` conda env create -f environment.yml; conda activate pathml ```
* Install PathML: `pip install -e .`
* Install pre-commit hooks: `pre-commit install`

### Running tests

To run the full testing suite: `python -m pytest`
Some tests are known to be very slow. To skip them, run instead: ``` python -m pytest -m "not slow" ```

### Building documentation locally

```
cd docs                                   # enter docs directory
pip install -r readthedocs-requirements   # install packages to build docs
make html                                 # build docs in html format
```

Then use your favorite web browser to open `pathml/docs/build/html/index.html`

### Checking code coverage

Code coverage can be checked by running the test suite under a coverage tool such as coverage.py.

### How to contribute code, documentation, etc.

* Create a new GitHub issue for what you will be working on, if one does not already exist
* Create a local development environment (see above)
* Create a new branch from the dev branch and implement your changes
* Write new tests as needed to maintain code coverage
* Ensure that all tests pass
* Push your changes and open a pull request on GitHub referencing the corresponding issue
* Respond to discussion/feedback about the pull request, make changes as necessary

### Versioning and Distributing

We use semantic versioning. The version is tracked in `pathml/_version.py` and should be updated there as required. When new code is merged to the master branch on GitHub, the version should be incremented and a new release should be pushed. Releases can be created using the GitHub website interface, and should be tagged in version format (e.g., “v1.0.0” for version 1.0.0) and include release notes indicating what has changed. Once a new release is created, GitHub Actions workflows will automatically build and publish the updated package on PyPI and TestPyPI, as well as build and publish the Docker image to Docker Hub.

### Code Quality

We want PathML to be built on high-quality code. However, the idea of “code quality” is somewhat subjective. If the code works perfectly but cannot be read and understood by someone else, then it can’t be maintained, and this accumulated tech debt is something we want to avoid. Writing code that “works”, i.e. does what you want it to do, is therefore necessary but not sufficient. Good code also demands efficiency, consistency, good design, clarity, and many other factors. Here are some general tips and ideas:
* Strive to make code concise, but not at the expense of clarity.
* Seek efficient and general designs, but avoid premature optimization.
* Prefer informative variable names.
* Encapsulate code in functions or objects.
* Comment, comment, comment your code.
All code should be reviewed by someone else before merging. We use Black to enforce consistency of code style.

### Documentation Standards

All code should be documented, including docstrings for users AND inline comments for other developers whenever possible! Both are crucial for ensuring long-term usability and maintainability. Documentation is automatically generated using the Sphinx autodoc and napoleon extensions from properly formatted Google-style docstrings. All documentation (including docstrings) is written in reStructuredText format. See this docstring example to get started.

### Testing Standards

All code should be accompanied by tests, whenever possible, to ensure that everything is working as intended. The type of testing required may vary depending on the type of contribution:
* New features should use tests to ensure that the code is working as intended, e.g. comparing the output of a function with the expected output.
* Bug fixes should first add a failing test, then make it pass by fixing the bug.
No pull request can be merged unless all tests pass. We aim to maintain good code coverage for the testing suite (target >90%). We use the pytest testing framework; a toy example of a test in this style is shown below.
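As a concrete (toy) example of the expected style, a minimal pytest-style test of the `pad_or_crop` utility documented above:

```python
import numpy as np

from pathml.utils import pad_or_crop

def test_pad_or_crop_returns_target_shape():
    arr = np.ones((100, 200))
    target = (128, 128)          # pad the first axis, crop the second
    out = pad_or_crop(arr, target)
    assert out.shape == target
```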
We suggest using test-driven development when applicable: if you’re fixing a bug or adding new features, write the tests first (they should all fail). Then, write the actual code. When all tests pass, you know that your implementation is working. This helps ensure that all code is tested and that the tests are testing what we want them to.

## Thank You!

Thank you for helping make `PathML` better!
Package ‘TSdist’ October 12, 2022
Type Package
Title Distance Measures for Time Series Data
Version 3.7.1
Encoding UTF-8
Date 2022-08-30
Depends R (>= 3.0.3), proxy
Imports cluster, dtw, graphics, KernSmooth, locpol, longitudinalData, methods, pdc, stats, TSclust, xts, zoo
Suggests testthat
Description A set of commonly used distance measures and some additional functions which, although initially not designed for this purpose, can be used to measure the dissimilarity between time series. These measures can be used to perform clustering, classification or other data mining tasks which require the definition of a distance measure between time series. <NAME>, <NAME> and <NAME> (2016), <doi:10.32614/RJ-2016-058>.
License GPL (>= 2)
NeedsCompilation yes
Repository CRAN
Author <NAME> [aut, cre], <NAME> [aut], <NAME> [aut], <NAME> [ctb]
Maintainer <NAME> <<EMAIL>>
Date/Publication 2022-08-31 09:40:02 UTC

R topics documented: TSdist-package, ACFDistance, ARLPCCepsDistance, ARMahDistance, ARPicDistance, CCorDistance, CDMDistance, CIDDistance, CorDistance, CortDistance, DissimDistance, DTWDistance, EDRDistance, ERPDistance, EuclideanDistance, example.database, example.database2, example.database3, example.series, FourierDistance, FrechetDistance, InfNormDistance, IntPerDistance, KMedoids, LBKeoghDistance, LCSSDistance, LPDistance, ManhattanDistance, MindistSaxDistance, MinkowskiDistance, NCDDistance, OneNN, PACFDistance, PDCDistance, PerDistance, PredDistance, SpecGLKDistance, SpecISDDistance, SpecLLRDistance, STSDistance, TAMDistance, TquestDistance, TSDatabaseDistances, TSDistances.

TSdist-package Distance Measures for Time Series in R.
Description A complete set of distance measures specifically designed to deal with time series.
Details
Package: TSdist
Type: Package
Version: 3.1
Date: 2015-07-14
License: GPL (>=2)
This package provides a comprehensive set of time series distance measures published in the literature and some additional functions which, although initially not designed for this purpose, can be used to measure the dissimilarity between time series. These measures can be used to perform clustering, classification or other data mining tasks which require the definition of a distance measure between time series. Some of the measures are specifically implemented for this package while others are originally hosted in other R packages. The measures included are:
• Lp distances LPDistance
• Distance based on the cross-correlation CCorDistance
• Short Time Series distance (STS) STSDistance
• Dynamic Time Warping (DTW) DTWDistance
• LB_Keogh lower bound for the Dynamic Time Warping distance LBKeoghDistance
• Edit Distance for Real Sequences (EDR) EDRDistance
• Longest Common Subsequence distance for real sequences (LCSS) LCSSDistance
• Edit Distance based on Real Penalty (ERP) ERPDistance
• Distance based on the Fourier Discrete Transform FourierDistance
• TQuest distance TquestDistance
• Dissim distance DissimDistance
• Autocorrelation-based dissimilarity ACFDistance.
• Partial autocorrelation-based dissimilarity PACFDistance.
• Dissimilarity based on LPC cepstral coefficients ARLPCCepsDistance.
• Model-based dissimilarity proposed by Maharaj (1996, 2000) ARMahDistance.
• Model-based dissimilarity proposed by Piccolo (1990) ARPicDistance.
• Compression-based dissimilarity measure CDMDistance.
• Complexity-invariant distance measure CIDDistance.
• Dissimilarities based on Pearson’s correlation CorDistance.
• Dissimilarity index which combines temporal correlation and raw value behaviors CortDistance.
• Integrated periodogram based dissimilarity IntPerDistance.
• Periodogram based dissimilarity PerDistance.
• Symbolic Aggregate Approximation based dissimilarity MindistSaxDistance.
• Normalized compression based distance NCDDistance.
• Dissimilarity measure based on nonparametric forecasts PredDistance.
• Dissimilarity based on the integrated squared difference between the log-spectra SpecISDDistance.
• General spectral dissimilarity measure using local-linear estimation of the log-spectra SpecLLRDistance.
• Permutation Distribution Distance PDCDistance.
• Frechet distance FrechetDistance.
All the measures are implemented in separate functions but can also be invoked by means of the wrapper function TSDistances. Moreover, this function enables the use of time series objects of type ts, zoo and xts. As an additional functionality of the package, pairwise distances between all the time series in a database can be easily computed by using the dist function from the proxy package or the TSDatabaseDistances function included in the TSdist package.
Author(s) <NAME>, <NAME>, <NAME>. Maintainer: <<EMAIL>>
References
Esling, P., & Agon, C. (2012). Time-series data mining. ACM Computing Surveys, 45(1), 1-34.
<NAME>. (2005). Clustering of time series data-a survey. Pattern Recognition, 38(11), 1857-1874.
<NAME>., <NAME>., <NAME>., <NAME>., <NAME>., & <NAME>. (2012). Experimental comparison of representation methods and distance measures for time series data. Data Mining and Knowledge Discovery, 26(2), 275-309.
<NAME> and <NAME> (2013). proxy: Distance and Similarity Measures. R package version 0.4-10. http://CRAN.R-project.org/package=proxy
Examples
library(TSdist);

ACFDistance Autocorrelation-based Dissimilarity
Description Computes the dissimilarity between a pair of numeric time series based on their estimated autocorrelation coefficients.
Usage ACFDistance(x, y, ...)
Arguments
x Numeric vector containing the first time series.
y Numeric vector containing the second time series.
... Additional parameters for the function. See diss.ACF for more information.
Details This is simply a wrapper for the diss.ACF function of package TSclust. As such, all the functionalities of the diss.ACF function are also available when using this function.
Value d The computed distance between the pair of series.
Author(s) <NAME>, <NAME>, <NAME>.
References
<NAME>, <NAME> (2014). TSclust: An R Package for Time Series Clustering. Journal of Statistical Software, 62(1), 1-43. URL http://www.jstatsoft.org/v62/i01/.
<NAME>., & <NAME>. (2000). Multivariate Analysis in Vector Time Series. Journal of the Institute of Mathematics and Statistics of the University of Sao Paulo, 4, 383–403.
<NAME>., & <NAME>. (2007). A Study on the Dynamic Time Warping in Kernel Machines. In 2007 Third International IEEE Conference on Signal-Image Technologies and Internet-Based System (pp. 839–845).
See Also To calculate this distance measure using ts, zoo or xts objects see TSDistances. To calculate distance matrices of time series databases using this measure see TSDatabaseDistances.
Examples
# The objects example.series3 and example.series4 are two
# numeric series of length 100 and 120 contained in the
# TSdist package.
data(example.series3)
data(example.series4)
# For information on their generation and shape see
# help page of example.series.
help(example.series)
# Calculate the autocorrelation based distance between the two series using
# the default parameters:
ACFDistance(example.series3, example.series4)

ARLPCCepsDistance Dissimilarity Based on LPC Cepstral Coefficients
Description Computes the dissimilarity between two numeric time series in terms of their Linear Predictive Coding (LPC) ARIMA processes.
Usage ARLPCCepsDistance(x, y, ...)
Arguments
x Numeric vector containing the first time series.
y Numeric vector containing the second time series.
... Additional parameters for the function. See diss.AR.LPC.CEPS for more information.
Details This is simply a wrapper for the diss.AR.LPC.CEPS function of package TSclust. As such, all the functionalities of the diss.AR.LPC.CEPS function are also available when using this function.
Value d The computed distance between the pair of series.
Author(s) <NAME>, <NAME>, <NAME>.
References <NAME>, <NAME> (2014). TSclust: An R Package for Time Series Clustering. Journal of Statistical Software, 62(1), 1-43. URL http://www.jstatsoft.org/v62/i01/.
See Also To calculate this distance measure using ts, zoo or xts objects see TSDistances. To calculate distance matrices of time series databases using this measure see TSDatabaseDistances.
Examples
# The objects example.series3 and example.series4 are two
# numeric series of length 100 and 120 contained in the
# TSdist package obtained from an ARIMA(3,0,2) process.
data(example.series3)
data(example.series4)
# For information on their generation and shape see
# help page of example.series.
help(example.series)
# Calculate the ar.lpc.ceps distance between the two series using
# the default parameters. In this case an AR model is automatically
# selected for each of the series:
ARLPCCepsDistance(example.series3, example.series4)
# Calculate the ar.lpc.ceps distance between the two series
# imposing the order of the ARIMA model of each series:
ARLPCCepsDistance(example.series3, example.series4, order.x=c(3,0,2), order.y=c(3,0,2))

ARMahDistance Model-based Dissimilarity Proposed by Maharaj (1996, 2000)
Description Computes the model based dissimilarity proposed by Maharaj.
Usage ARMahDistance(x, y, ...)
Arguments
x Numeric vector containing the first time series.
y Numeric vector containing the second time series.
... Additional parameters for the function. See diss.AR.MAH for more information.
Details This is simply a wrapper for the diss.AR.MAH function of package TSclust. As such, all the functionalities of the diss.AR.MAH function are also available when using this function.
Value
statistic The statistic of the homogeneity test.
p-value The p-value issued by the homogeneity test.
Author(s) <NAME>, <NAME>, <NAME>.
References <NAME>, <NAME> (2014). TSclust: An R Package for Time Series Clustering. Journal of Statistical Software, 62(1), 1-43. URL http://www.jstatsoft.org/v62/i01/.
See Also To calculate this distance measure using ts, zoo or xts objects see TSDistances. To calculate distance matrices of time series databases using this measure see TSDatabaseDistances.
Examples
# The objects example.series3 and example.series4 are two
# numeric series of length 100 and 120 contained in the
# TSdist package obtained from an ARIMA(3,0,2) process.
data(example.series3)
data(example.series4)
# For information on their generation and shape see
# help page of example.series.
help(example.series)
# Calculate the ar.mah distance between the two series using
# the default parameters.
ARMahDistance(example.series3, example.series4)
# The p-value is almost 1, which indicates that the two series come from the same
# ARMA process.

ARPicDistance Model-based Dissimilarity Measure Proposed by Piccolo (1990)
Description Computes the model based dissimilarity proposed by Piccolo.
Usage ARPicDistance(x, y, ...)
Arguments
x Numeric vector containing the first time series.
y Numeric vector containing the second time series.
... Additional parameters for the function. See diss.AR.PIC for more information.
Details This is simply a wrapper for the diss.AR.PIC function of package TSclust. As such, all the functionalities of the diss.AR.PIC function are also available when using this function.
Value d The computed distance between the pair of series.
Author(s) <NAME>, <NAME>, <NAME>.
References <NAME>, <NAME> (2014). TSclust: An R Package for Time Series Clustering. Journal of Statistical Software, 62(1), 1-43. URL http://www.jstatsoft.org/v62/i01/.
See Also To calculate this distance measure using ts, zoo or xts objects see TSDistances. To calculate distance matrices of time series databases using this measure see TSDatabaseDistances.
Examples
# The objects example.series3 and example.series4 are two
# numeric series of length 100 and 120 contained in the
# TSdist package obtained from an ARIMA(3,0,2) process.
data(example.series3)
data(example.series4)
# For information on their generation and shape see
# help page of example.series.
help(example.series)
# Calculate the Piccolo distance between the two series using
# the default parameters. In this case an AR model is automatically
# selected for each of the series:
ARPicDistance(example.series3, example.series4)
# Calculate the Piccolo distance between the two series
# imposing the order of the ARMA model of each series:
ARPicDistance(example.series3, example.series4, order.x=c(3,0,2), order.y=c(3,0,2))

CCorDistance Cross-correlation based distance.
Description Computes the distance measure based on the cross-correlation between a pair of numeric time series.
Usage CCorDistance(x, y, lag.max=(min(length(x), length(y))-1))
Arguments
x Numeric vector containing the first time series.
y Numeric vector containing the second time series.
lag.max Positive integer that defines the maximum lag considered in the cross-correlation calculations (default=min(length(x), length(y))-1).
Details The cross-correlation based distance between two numeric time series is calculated as follows:

$$D = \sqrt{\frac{1 - CC(x, y, 0)^2}{\sum_{k=1}^{lag.max} \left(1 - CC(x, y, k)^2\right)}}$$

where CC(x, y, k) is the cross-correlation between x and y at lag k. The summation in the denominator goes from 1 to lag.max. In view of this, the parameter must be a positive integer no larger than the length of the series.
Value d The computed distance between the pair of series.
Author(s) <NAME>, <NAME>, <NAME>.
References
<NAME>. (2005). Clustering of time series data-a survey. Pattern Recognition, 38(11), 1857-1874.
<NAME>., <NAME>., <NAME>., <NAME>., <NAME>., & <NAME>. (2014). On general purpose time series similarity measures and their use as kernel functions in support vector machines. Information Sciences, 281, 478–495.
See Also To calculate this distance measure using ts, zoo or xts objects see TSDistances. To calculate distance matrices of time series databases using this measure see TSDatabaseDistances.
Examples
# The objects example.series3 and example.series4 are two
# numeric series of length 100 and 120 contained in the
# TSdist package.
data(example.series3)
data(example.series4)
# For information on their generation and shape see
# help page of example.series.
help(example.series)
# Calculate the cross-correlation based distance
# using the default lag.max.
CCorDistance(example.series3, example.series4)
# Calculate the cross-correlation based distance
# with lag.max=50.
CCorDistance(example.series3, example.series4, lag.max=50)

CDMDistance Compression-based Dissimilarity measure
Description Computes the dissimilarity between two numeric series based on their size after compression.
Usage CDMDistance(x, y, ...)
Arguments
x Numeric vector containing the first time series.
y Numeric vector containing the second time series.
... Additional parameters for the function. See diss.CDM for more information.
Details This is simply a wrapper for the diss.CDM function of package TSclust. As such, all the functionalities of the diss.CDM function are also available when using this function.
Value d The computed distance between the pair of series.
Author(s) <NAME>, <NAME>, <NAME>.
References <NAME>, <NAME> (2014). TSclust: An R Package for Time Series Clustering. Journal of Statistical Software, 62(1), 1-43. URL http://www.jstatsoft.org/v62/i01/.
See Also To calculate this distance measure using ts, zoo or xts objects see TSDistances. To calculate distance matrices of time series databases using this measure see TSDatabaseDistances.
Examples
# The objects example.series3 and example.series4 are two
# numeric series of length 100 and 120.
data(example.series3)
data(example.series4)
# For information on their generation and shape see
# help page of example.series.
help(example.series)
# Calculate the compression based distance between the two series using
# the default parameters.
CDMDistance(example.series3, example.series4)

CIDDistance Complexity-Invariant Distance Measure For Time Series
Description Computes the dissimilarity between two numeric series of the same length by calculating a correction of the Euclidean distance based on the complexity estimation of the series.
Usage CIDDistance(x, y)
Arguments
x Numeric vector containing the first time series.
y Numeric vector containing the second time series.
Details This is simply a wrapper for the diss.CID function of package TSclust. As such, all the functionalities of the diss.CID function are also available when using this function.
Value d The computed distance between the pair of series.
Author(s) <NAME>, <NAME>, <NAME>.
References <NAME>, <NAME> (2014). TSclust: An R Package for Time Series Clustering. Journal of Statistical Software, 62(1), 1-43. URL http://www.jstatsoft.org/v62/i01/.
See Also To calculate this distance measure using ts, zoo or xts objects see TSDistances. To calculate distance matrices of time series databases using this measure see TSDatabaseDistances.
Examples
# The objects example.series1 and example.series2 are two
# numeric series of length 100.
data(example.series1)
data(example.series2)
# For information on their generation and shape see
# help page of example.series.
help(example.series)
# Calculate the complexity-invariant distance between the two series using
# the default parameters.
CIDDistance(example.series1, example.series2)

CorDistance Dissimilarities based on Pearson’s correlation
Description Computes two different distance measures based on Pearson’s correlation between a pair of numeric time series of the same length.
Usage CorDistance(x, y, ...)
Arguments
x Numeric vector containing the first time series.
y Numeric vector containing the second time series.
... Additional parameters for the function. See diss.COR for more information.
Details This is simply a wrapper for the diss.COR function of package TSclust. As such, all the functionalities of the diss.COR function are also available when using this function.
Value d The computed distance between the pair of series.
Author(s) <NAME>, <NAME>, <NAME>.
References
<NAME>, <NAME> (2014). TSclust: An R Package for Time Series Clustering. Journal of Statistical Software, 62(1), 1-43. URL http://www.jstatsoft.org/v62/i01/.
<NAME>., <NAME>., <NAME>., <NAME>., <NAME>., & <NAME>. (1998). A new correlation-based fuzzy logic clustering algorithm for FMRI. Magnetic Resonance in Medicine, 40(2), 249–260.
See Also To calculate this distance measure using ts, zoo or xts objects see TSDistances. To calculate distance matrices of time series databases using this measure see TSDatabaseDistances.
Examples
# The objects example.series1 and example.series2 are two
# numeric series of length 100.
data(example.series1)
data(example.series2)
# For information on their generation and shape see
# help page of example.series.
help(example.series)
# Calculate the first correlation based distance between the series.
CorDistance(example.series1, example.series2)
# Calculate the second correlation based distance between the series
# by specifying beta.
CorDistance(example.series1, example.series2, beta=2)

CortDistance Dissimilarity Index Combining Temporal Correlation and Raw Value Behaviors
Description Computes the dissimilarity between two numeric series of the same length by combining the dissimilarity between the raw values and the dissimilarity between the temporal correlation behavior of the series.
Usage CortDistance(x, y, ...)
Arguments
x Numeric vector containing the first time series.
y Numeric vector containing the second time series.
... Additional parameters for the function. See diss.CORT for more information.
Details This is simply a wrapper for the diss.CORT function of package TSclust. As such, all the functionalities of the diss.CORT function are also available when using this function.
Value d The computed distance between the pair of series.
Author(s) <NAME>, <NAME>, <NAME>.
References
<NAME>, <NAME> (2014). TSclust: An R Package for Time Series Clustering. Journal of Statistical Software, 62(1), 1-43. URL http://www.jstatsoft.org/v62/i01/.
Chouakria, <NAME>., & <NAME>. (2007). Adaptive dissimilarity index for measuring time series proximity. Advances in Data Analysis and Classification, 1(1), 5–21. http://doi.org/10.1007/s11634-006-0004-6
See Also To calculate this distance measure using ts, zoo or xts objects see TSDistances. To calculate distance matrices of time series databases using this measure see TSDatabaseDistances.
Examples
# The objects example.series1 and example.series2 are two
# numeric series of length 100.
data(example.series1)
data(example.series2)
# For information on their generation and shape see
# help page of example.series.
help(example.series)
# Calculate the first correlation based distance between the series using the default
# parameters.
CortDistance(example.series1, example.series2)

DissimDistance The Dissim distance is calculated.
Description Computes the Dissim distance between a pair of numeric series.
Usage DissimDistance(x, y, tx, ty)
Arguments
x Numeric vector containing the first time series.
y Numeric vector containing the second time series.
tx If not constant, a numeric vector that specifies the sampling index of series x.
ty If not constant, a numeric vector that specifies the sampling index of series y.
Details The Dissim distance is obtained by calculating the integral of the Euclidean distance between the two series. The series are assumed to be linear between sampling points. The two series must start and end in the same interval but they may have different and non-constant sampling rates. These sampling indexes must be positive and strictly increasing. For more information see the reference below.
Value d The computed distance between the pair of series.
Author(s) <NAME>, <NAME>, <NAME>.
References
<NAME>., <NAME>., & <NAME>. (2007). Index-based Most Similar Trajectory Search. In Proceedings of the IEEE 23rd International Conference on Data Engineering (pp. 816-825).
<NAME>., & <NAME>. (2012). Time-series data mining. ACM Computing Surveys (CSUR), 45(1), 1–34.
See Also To calculate this distance measure using ts, zoo or xts objects see TSDistances. To calculate distance matrices of time series databases using this measure see TSDatabaseDistances.
Examples
#The objects example.series1 and example.series2 are two
#numeric series of length 100 contained in the TSdist package.
data(example.series1)
data(example.series2)
#For information on their generation and shape see help
#page of example.series.
help(example.series)
#Calculate the Dissim distance assuming even sampling:
DissimDistance(example.series1, example.series2)
#Calculate the Dissim distance assuming uneven sampling:
tx <- unique(c(seq(2, 175, 2), seq(7, 175, 7)))
tx <- tx[order(tx)]
ty <- tx
DissimDistance(example.series1, example.series2, tx, ty)

DTWDistance Dynamic Time Warping distance.
Description Computes the Dynamic Time Warping distance between a pair of numeric time series.
Usage DTWDistance(x, y, ...)
Arguments
x Numeric vector containing the first time series.
y Numeric vector containing the second time series.
... Additional parameters for the function. See dtw for more information.
Details This is simply a wrapper for the dtw function of package dtw. As such, all the functionalities of the dtw function are also available when using this function.
Value d The computed distance between the pair of series.
Author(s) <NAME>, <NAME>, <NAME>.
References
<NAME> (2009). Computing and Visualizing Dynamic Time Warping Alignments in R: The dtw Package. Journal of Statistical Software, 31(7), pp. 1-24. URL: http://www.jstatsoft.org/v31/i07/
<NAME>. (2011). Fast Global Alignment Kernels. In Proceedings of the 28th International Conference on Machine Learning (pp. 929–936).
<NAME>., <NAME>., & <NAME>. (2011). A time series kernel for action recognition. In BMVC 2011 - British Machine Vision Conference (pp. 63.1–63.11).
<NAME>., & <NAME>. (2014). On Recursive Edit Distance Kernels With Applications To Time Series Classification. IEEE Transactions on Neural Networks and Learning Systems, PP(6), 1–13.
<NAME>., & <NAME>. (2007). A Study on the Dynamic Time Warping in Kernel Machines. In 2007 Third International IEEE Conference on Signal-Image Technologies and Internet-Based System (pp. 839–845).
<NAME>., <NAME>., <NAME>., <NAME>., <NAME>., & <NAME>. (2014).
On general purpose time series similarity measures and their use as kernel functions in support vector machines. Information Sciences, 281, 478–495.
See Also To calculate a lower bound of the DTW distance see LBKeoghDistance. To calculate this distance measure using ts, zoo or xts objects see TSDistances. To calculate distance matrices of time series databases using this measure see TSDatabaseDistances.
Examples
# The objects example.series3 and example.series4 are two
# numeric series of length 100 and 120 contained in the TSdist
# package
data(example.series3)
data(example.series4)
# For information on their generation and shape see
# help page of example.series.
help(example.series)
# Calculate the basic DTW distance for two series of different length.
DTWDistance(example.series3, example.series4)
# Calculate the DTW distance for two series of different length
# with a sakoechiba window of size 30:
DTWDistance(example.series3, example.series4, window.type="sakoechiba", window.size=30)
# Calculate the DTW distance for two series of different length
# with an asymmetric step pattern:
DTWDistance(example.series3, example.series4, step.pattern=asymmetric)

EDRDistance Edit Distance for Real Sequences (EDR).
Description Computes the Edit Distance for Real Sequences between a pair of numeric time series.
Usage EDRDistance(x, y, epsilon, sigma)
Arguments
x Numeric vector containing the first time series.
y Numeric vector containing the second time series.
epsilon A positive threshold value that defines the distance.
sigma If desired, a Sakoe-Chiba windowing constraint can be added by specifying a positive integer representing the window size.
Details The basic Edit Distance for Real Sequences between two numeric series is calculated. The idea is to count the number of edit operations (insert, delete, replace) that are necessary to transform one series into the other. For that, if the Euclidean distance between two points x_i and y_i is smaller than epsilon, they will be considered equal (d = 0), and if they are farther apart, they will be considered different (d = 1). As a last detail, this distance permits gaps, or sequences of points that are not matched with any other point. The length of series x and y may be different. Furthermore, if desired, a temporal constraint may be added to the EDR distance. In this package, only the most basic windowing function, introduced by H. Sakoe and S. Chiba (1978), is implemented. This function sets a band around the main diagonal of the distance matrix and avoids the matching of points that are farther apart in time than a specified σ. The size of the window must be a positive integer value. Furthermore, the following condition must be fulfilled: |length(x) − length(y)| < sigma
Value d The computed distance between the pair of series.
Author(s) <NAME>, <NAME>, <NAME>.
References
<NAME>., <NAME>., & <NAME>. (2005). Robust and Fast Similarity Search for Moving Object Trajectories. In Proceedings of the 2005 ACM SIGMOD International Conference on Management of Data (pp. 491-502).
<NAME>., & <NAME>. (2004). On The Marriage of Lp-norms and Edit Distance. In Proceedings of the Thirtieth International Conference on Very Large Data Bases (pp. 792–803).
<NAME>. (2011). Fast Global Alignment Kernels. In Proceedings of the 28th International Conference on Machine Learning (pp. 929–936).
<NAME>., <NAME>., & <NAME>. (2011). A time series kernel for action recognition. In BMVC 2011 - British Machine Vision Conference (pp. 63.1–63.11).
<NAME>., & <NAME>. (2014).
On Recursive Edit Distance Kernels With Applications To Time Series Classification. IEEE Transactions on Neural Networks and Learning Systems, PP(6), 1–13.
<NAME>., & <NAME>. (2007). A Study on the Dynamic Time Warping in Kernel Machines. In 2007 Third International IEEE Conference on Signal-Image Technologies and Internet-Based System (pp. 839–845).
<NAME>., <NAME>., <NAME>., <NAME>., <NAME>., & <NAME>. (2014). On general purpose time series similarity measures and their use as kernel functions in support vector machines. Information Sciences, 281, 478–495.
See Also To calculate this distance measure using ts, zoo or xts objects see TSDistances. To calculate distance matrices of time series databases using this measure see TSDatabaseDistances.
Examples
# The objects example.series3 and example.series4 are two
# numeric series of length 100 and 120 contained in the TSdist
# package.
data(example.series3)
data(example.series4)
# For information on their generation and shape see
# help page of example.series.
help(example.series)
# Calculate the EDR distance for two series of different length
# with no windowing constraint:
EDRDistance(example.series3, example.series4, epsilon=0.1)
# Calculate the EDR distance for two series of different length
# with a window of size 30:
EDRDistance(example.series3, example.series4, epsilon=0.1, sigma=30)

ERPDistance Edit Distance with Real Penalty (ERP).
Description Computes the Edit Distance with Real Penalty between a pair of numeric time series.
Usage ERPDistance(x, y, g, sigma)
Arguments
x Numeric vector containing the first time series.
y Numeric vector containing the second time series.
g The reference value used to penalize gaps.
sigma If desired, a Sakoe-Chiba windowing constraint can be added by specifying a positive integer representing the window size.
Details The basic Edit Distance with Real Penalty between two numeric series is calculated. Unlike other edit based distances included in this package, this distance is a metric and fulfills the triangle inequality. The idea is to search for the minimal path in a distance matrix that describes the mapping between the two series. This distance matrix is built by using the Euclidean distance. However, unlike DTW, this distance permits gaps, or sequences of points that are not matched with any other point. These gaps will be penalized based on the distance of the unmatched points from a reference value g. As with other edit based distances, the length of x and y may be different. Furthermore, if desired, a temporal constraint may be added to the ERP distance. In this package, only the most basic windowing function, introduced by H. Sakoe and S. Chiba (1978), is implemented. This function sets a band around the main diagonal of the distance matrix and avoids the matching of points that are farther apart in time than a specified σ. The size of the window must be a positive integer value. Furthermore, the following condition must be fulfilled: |length(x) − length(y)| < sigma
Value d The computed distance between the pair of series.
Author(s) <NAME>, <NAME>, <NAME>.
References <NAME>., & <NAME>. (2004). On The Marriage of Lp-norms and Edit Distance. In Proceedings of the Thirtieth International Conference on Very Large Data Bases (pp. 792-803).
See Also To calculate this distance measure using ts, zoo or xts objects see TSDistances. To calculate distance matrices of time series databases using this measure see TSDatabaseDistances.
Examples
#The objects example.series3 and example.series4 are two
#numeric series of length 100 and 120 contained in the TSdist
#package.
data(example.series3)
data(example.series4)
#For information on their generation and shape see
#help page of example.series.
help(example.series)
#Calculate the ERP distance for two series of different length
#with no windowing constraint:
ERPDistance(example.series3, example.series4, g=0)
#Calculate the ERP distance for two series of different length
#with a window of size 30:
ERPDistance(example.series3, example.series4, g=0, sigma=30)

EuclideanDistance Euclidean distance.
Description Computes the Euclidean distance between a pair of numeric vectors.
Usage EuclideanDistance(x, y)
Arguments
x Numeric vector containing the first time series.
y Numeric vector containing the second time series.
Details The Euclidean distance is computed between the two numeric series using the following formula:

$$D = \sqrt{\sum_{i=1}^{n} (x_i - y_i)^2}$$

The two series must have the same length. This distance is calculated with the help of the dist function of the proxy package.
Value d The computed distance between the pair of series.
Author(s) <NAME>, <NAME>, <NAME>.
References <NAME> and <NAME> (2015). proxy: Distance and Similarity Measures. R package version 0.4-14. http://CRAN.R-project.org/package=proxy
See Also This function can also be invoked by the wrapper function LPDistance. Furthermore, to calculate this distance measure using ts, zoo or xts objects see TSDistances. To calculate distance matrices of time series databases using this measure see TSDatabaseDistances.
Examples
# The objects example.series1 and example.series2 are two
# numeric series of length 100 contained in the TSdist package.
data(example.series1)
data(example.series2)
# For information on their generation and shape see help
# page of example.series.
help(example.series)
# Compute the Euclidean distance between them:
EuclideanDistance(example.series1, example.series2)

example.database Example databases.
Description Example database saved both as a numeric matrix and as a zoo object.
Usage
data(example.database);
data(zoo.database);
Format example.database is saved in a numerical matrix. zoo.database is saved as a zoo object with a given temporal index.
Details example.database is a numerical matrix conformed by six ARMA(3,2) series of coefficients AR=(1, -0.24, 0.1) and MA=(1, 1.2) and length 100 that are situated in a row-wise format. They are generated from innovation vectors obtained randomly from a normal distribution of mean 0 and standard deviation 1, but by setting different random seeds. zoo.database is a copy of example.database but saved in a zoo object with a specific time index. The series are set in a column-wise format.
Examples
data(example.database);
data(zoo.database);
## In example.database the series are set in a row-wise format.
plot(example.database[1, ], type="l")
## In zoo.database the series are set in a column-wise format.
plot(zoo.database[, 1])

example.database2 Example synthetic database with series belonging to different classes.
Description Example synthetic database with series belonging to 6 different classes.
Usage data(example.database2);
Format example.database2 is a list conformed of the following two elements:
• data The 100 time series are stored in a numeric matrix, row-wise.
• classes A numerical vector of length 100 that takes values in {1,2,3,4,5,6}. Each element in the vector represents the class of one of the series.
Details example.database2 is a database conformed of 100 series of length 100 obtained from 6 different classes. Each class is represented by the following function; the class to which each series belongs is given in the classes vector.
• Class 1: random function f1(t) = 80 + r(t) + n(t)
• Class 2: periodic function f2(t) = 80 + 15 sin(2πt/T) + n(t)
• Class 3: increasing linear trend f3(t) = 80 + 0.4t + n(t) + sh
• Class 4: decreasing linear trend f4(t) = 80 − 0.4t + n(t) + sh
• Class 5: piecewise linear function which takes a value of 80 + n(t) for the first L/2+sh points of the series and a value of 90 + n(t) for the rest of the points.
• Class 6: piecewise linear function which takes a value of 90 + n(t) for the first L/2+sh points of the series and a value of 80 + n(t) for the rest of the points.
r(t) is a random value issued from a N(0, 3) distribution, L is the length of the series, 100 in this case, and T is the period, defined as a third of the length of the series. n(t) is a random noise obtained from a N(0, 2.8) distribution. Finally, sh is an integer value that takes a random value between (−7, 7) and shifts the series sh positions to the right or left, depending on the sign.
Examples
data(example.database2);
## The "data" element of the list contains the time series, set in a row-wise format.
plot(example.database2$data[1, ], type="l")
## The "classes" element in example.database2 contains the classes of the series:
example.database2$classes

example.database3 Example synthetic database with series belonging to different classes.
Description Example synthetic database with ARMA series belonging to 5 different classes.
Usage data(example.database3);
Format example.database3 is a list conformed of the following two elements:
• data The 50 time series are stored in a numeric matrix, row-wise.
• classes A numerical vector of length 50 that takes values in {1,2,3,4,5}. Each element in the vector represents the class of one of the series.
Details example.database3 is a database conformed of 50 series of length 100 obtained from 5 different classes. Each class is obtained from a different initialization of an ARMA(3,2) process of coefficients AR=(1,-0.24,0.1) and MA=(1,1.2). Random noise is added to all the series by sampling values from a N(0, 1.7) distribution. Finally, all the series in the database are shifted sh positions to the right or left, sh being a random integer value extracted from −15, ..., 15 in each case.
Examples
data(example.database3);
## The "data" element of the list contains the time series, set in a row-wise format.
plot(example.database3$data[1, ], type="l")
## The "classes" element in example.database3 contains the classes of the series:
example.database3$classes

example.series Example series.
Description Example series saved as numeric vectors and as zoo objects.
Usage
data(example.series1);
data(example.series2);
data(example.series3);
data(example.series4);
data(zoo.series1);
data(zoo.series2);
Format example.series1, example.series2, example.series3 and example.series4 are saved in numerical vectors. zoo.series1 and zoo.series2 are saved as zoo objects with a given temporal index.
Details example.series1 and example.series2 are generated based on the Two Patterns synthetic database introduced by Geurts (2002). example.series3 and example.series4 are two ARMA(3,2) series of coefficients AR=(1, -0.24, 0.1) and MA=(1, 1.2) and length 100 and 120 respectively.
example.series    Example series.

Description
Example series saved as numeric vectors and as zoo objects.
Usage
data(example.series1); data(example.series2); data(example.series3); data(example.series4); data(zoo.series1); data(zoo.series2);
Format
example.series1, example.series2, example.series3 and example.series4 are saved in numerical vectors. zoo.series1 and zoo.series2 are saved as zoo objects with a given temporal index.
Details
example.series1 and example.series2 are generated based on the Two Patterns synthetic database introduced by Geurts (2002). example.series3 and example.series4 are two ARMA(3,2) series of coefficients AR=(1, -0.24, 0.1) and MA=(1, 1.2) and length 100 and 120 respectively. They are generated from a pair of innovation vectors obtained randomly from a normal distribution of mean 0 and standard deviation 1, but by setting different random seeds. zoo.series1 and zoo.series2 are copies of example.series1 and example.series2 but with a specific time index.
References
<NAME>. (2002). Contributions to decision tree induction: bias/variance tradeoff and time series classification. University of Liege, Belgium.
Examples
data(example.series1); data(example.series2); data(example.series3); data(example.series4); data(zoo.series1); data(zoo.series2);
## Plot series
plot(example.series1, type="l")
plot(example.series2, type="l")
plot(example.series3, type="l")
plot(example.series4, type="l")
plot(zoo.series1)
plot(zoo.series2)

FourierDistance    Fourier Coefficient based distance.

Description
Computes the distance between a pair of numerical series based on their Discrete Fourier Transforms.
Usage
FourierDistance(x, y, n = (floor(length(x) / 2) + 1))
Arguments
x    Numeric vector containing the first time series.
y    Numeric vector containing the second time series.
n    Positive integer that represents the number of Fourier coefficients to consider (default = floor(length(x) / 2) + 1).
Details
The Euclidean distance between the first n Fourier coefficients of series x and y is computed. The series must have the same length. Furthermore, n should not be larger than the length of the series.
Value
d    The computed distance between the pair of series.
Author(s)
<NAME>, <NAME>, <NAME>.
References
<NAME>., <NAME>., & <NAME>. (1993). Efficient similarity search in sequence databases. In Proceedings of the 4th International Conference of Foundations of Data Organization and Algorithms (Vol. 5, pp. 69-84).
See Also
To calculate this distance measure using ts, zoo or xts objects see TSDistances. To calculate distance matrices of time series databases using this measure see TSDatabaseDistances.
Examples
# The objects example.series1 and example.series2 are two
# numeric series of length 100 contained in the TSdist package.
data(example.series1)
data(example.series2)
# For information on their generation and shape see the help
# page of example.series.
help(example.series)
# Calculate the Fourier coefficient based distance using
# the default number of coefficients:
FourierDistance(example.series1, example.series2)
# Calculate the Fourier coefficient based distance using
# only the first 20 Fourier coefficients:
FourierDistance(example.series1, example.series2, n=20)
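Per the Details, the distance is simply the Euclidean distance between the first n DFT coefficients. A minimal sketch of that computation with stats::fft; that FourierDistance aggregates the complex coefficients via the modulus of their differences, as done below, is an assumption, and fourier_dist_sketch is a hypothetical helper name.

fourier_dist_sketch <- function(x, y, n = floor(length(x) / 2) + 1) {
  fx <- fft(x)[1:n]          # first n Fourier coefficients of x
  fy <- fft(y)[1:n]          # first n Fourier coefficients of y
  sqrt(sum(Mod(fx - fy)^2))  # Euclidean distance between the coefficient vectors
}
fourier_dist_sketch(example.series1, example.series2, n = 20)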
FrechetDistance    Frechet distance.

Description
Computes the Frechet distance between two numerical trajectories.
Usage
FrechetDistance(x, y, tx, ty, ...)
Arguments
x    Numeric vector containing the first time series.
y    Numeric vector containing the second time series.
tx   If not constant, a numeric vector that specifies the sampling index of series x.
ty   If not constant, a numeric vector that specifies the sampling index of series y.
...  Additional parameters for the function. See distFrechet for more information.
Details
This is essentially a wrapper for the distFrechet function of package longitudinalData. As such, all the functionalities of the distFrechet function are also available when using this function.
Value
d    The computed distance between the pair of series.
Author(s)
<NAME>, <NAME>, <NAME>.
References
<NAME> (2014). longitudinalData: Longitudinal Data. R package version 2.2. http://CRAN.R-project.org/package=longitudinalData
<NAME>, <NAME> (2014). TSclust: An R Package for Time Series Clustering. Journal of Statistical Software, 62(1), 1-43. URL http://www.jstatsoft.org/v62/i01/.
Eiter, T., & <NAME>. (1994). Computing Discrete Frechet Distance. Technical Report. Retrieved from http://www.kr.tuwien.ac.at/staff/eiter/et-archive/cdtr9464.pdf
See Also
To calculate this distance measure using ts, zoo or xts objects see TSDistances. To calculate distance matrices of time series databases using this measure see TSDatabaseDistances.
Examples
# The objects example.series3 and example.series4 are two
# numeric series of length 100 and 120, respectively.
data(example.series3)
data(example.series4)
# For information on their generation and shape see the
# help page of example.series.
help(example.series)
# Calculate the Frechet distance between the series.
## Not run: FrechetDistance(example.series3, example.series4)

InfNormDistance    The infinite norm distance.

Description
Computes the infinite norm distance between a pair of numeric vectors.
Usage
InfNormDistance(x, y)
Arguments
x    Numeric vector containing the first time series.
y    Numeric vector containing the second time series.
Details
The infinite norm distance is computed between the two numeric series using the following formula:
D = max_i |x_i - y_i|
The two series must have the same length. This distance is calculated with the help of the dist function of the proxy package.
Value
d    The computed distance between the pair of series.
Author(s)
<NAME>, <NAME>, <NAME>.
References
<NAME> and <NAME> (2015). proxy: Distance and Similarity Measures. R package version 0.4-14. http://CRAN.R-project.org/package=proxy
See Also
This function can also be invoked by the wrapper function LPDistance. To calculate this distance measure using ts, zoo or xts objects see TSDistances. To calculate distance matrices of time series databases using this measure see TSDatabaseDistances.
Examples
# The objects example.series1 and example.series2 are two
# numeric series of length 100 contained in the TSdist package.
data(example.series1)
data(example.series2)
# For information on their generation and shape see the help
# page of example.series.
help(example.series)
# Compute the infinite norm distance between them:
InfNormDistance(example.series1, example.series2)

IntPerDistance    Integrated Periodogram based dissimilarity.

Description
Calculates the dissimilarity between two numerical series of the same length based on the distance between their integrated periodograms.
Usage
IntPerDistance(x, y, ...)
Arguments
x    Numeric vector containing the first time series.
y    Numeric vector containing the second time series.
...  Additional parameters for the function. See diss.INT.PER for more information.
Details
This is simply a wrapper for the diss.INT.PER function of package TSclust. As such, all the functionalities of the diss.INT.PER function are also available when using this function.
Value
d    The computed distance between the pair of series.
Author(s)
<NAME>, <NAME>, <NAME>.
References
<NAME>, <NAME> (2014). TSclust: An R Package for Time Series Clustering. Journal of Statistical Software, 62(1), 1-43. URL http://www.jstatsoft.org/v62/i01/.
See Also
To calculate this distance measure using ts, zoo or xts objects see TSDistances. To calculate distance matrices of time series databases using this measure see TSDatabaseDistances.
Examples
# The objects example.series1 and example.series2 are two
# numeric series of length 100.
data(example.series1)
data(example.series2)
# For information on their generation and shape see the
# help page of example.series.
help(example.series)
# Calculate the integrated periodogram based distance between the two
# series using the default parameters.
IntPerDistance(example.series1, example.series2)
KMedoids    K medoids clustering for a time series database using the selected distance measure.

Description
Given a specific distance measure and a time series database, this function provides the K-medoids clustering result. Furthermore, if the ground truth clustering is provided, the associated F-value is also returned.
Usage
KMedoids(data, k, ground.truth, distance, ...)
Arguments
data    Time series database saved in a numeric matrix, a list, an mts object, a zoo object or xts object.
k       Integer value which represents the number of clusters.
ground.truth    Numerical vector which indicates the ground truth clustering of the database.
distance    Distance measure to be used. It must be one of: "euclidean", "manhattan", "minkowski", "infnorm", "ccor", "sts", "dtw", "keogh_lb", "edr", "erp", "lcss", "fourier", "tquest", "dissimfull", "dissimapprox", "acf", "pacf", "ar.lpc.ceps", "ar.mah", "ar.mah.statistic", "ar.mah.pvalue", "ar.pic", "cdm", "cid", "cor", "cort", "wav", "int.per", "per", "mindist.sax", "ncd", "pred", "spec.glk", "spec.isd", "spec.llr", "pdc", "frechet".
...     Additional parameters required by the chosen distance measure.
Details
This function is useful to evaluate the performance of different distance measures in the task of clustering time series.
Value
clustering    Numerical vector providing the clustering result for the database.
F             F-value corresponding to the clustering result.
Author(s)
<NAME>, <NAME>, <NAME>.
See Also
To calculate the distance matrices of time series databases the TSDatabaseDistances function is used.
Examples
# The example.database3 synthetic database is loaded
data(example.database3)
tsdata <- example.database3[[1]]
groundt <- example.database3[[2]]
# Apply K-medoids clustering for different distance measures
KMedoids(data=tsdata, ground.truth=groundt, k=5, "euclidean")
KMedoids(data=tsdata, ground.truth=groundt, k=5, "cid")
KMedoids(data=tsdata, ground.truth=groundt, k=5, "pdc")

LBKeoghDistance    LB_Keogh for DTW.

Description
Computes the Keogh lower bound for the Dynamic Time Warping distance between a pair of numeric time series.
Usage
LBKeoghDistance(x, y, window.size)
Arguments
x    Numeric vector containing the first time series (query time series).
y    Numeric vector containing the second time series (reference time series).
window.size    Window size that defines the upper and lower envelopes.
Details
The lower bound introduced by Keogh and Ratanamahatana (2005) is calculated for the Dynamic Time Warping distance. Given window.size, the width of a Sakoe-Chiba band, an upper and a lower envelope of the query time series are calculated in the following manner:
U[i] = max(x[i - window.size], ..., x[i + window.size])
L[i] = min(x[i - window.size], ..., x[i + window.size])
Based on this, the LB_Keogh distance is calculated as the Euclidean distance between the points of the reference time series (y) that fall outside the lower and upper envelopes and the nearest point of the corresponding envelope. The series must have the same length. Furthermore, the width of the window should be even in order to assure a symmetric band around the diagonal and should not exceed the length of the series.
Value
d    The Keogh lower bound of the Dynamic Time Warping distance between the pair of series.
Author(s)
<NAME>, <NAME>, <NAME>.
References
<NAME>., & <NAME>. (2005). Exact indexing of dynamic time warping. Knowledge and Information Systems, 7(3), 358-386.
<NAME>., & <NAME>. (1978). Dynamic programming algorithm optimization for spoken word recognition. IEEE Transactions on Acoustics, Speech, and Signal Processing, 26(1), 43-49.
<NAME>., & <NAME>. (2012). Time-series data mining. ACM Computing Surveys (CSUR), 45(1), 1–34.
See Also
To calculate the full DTW distance see DTWDistance. To calculate this distance measure using ts, zoo or xts objects see TSDistances. To calculate distance matrices of time series databases using this measure see TSDatabaseDistances.
Examples
# The objects example.series1 and example.series2 are two
# numeric series of length 100 contained in the TSdist package.
data(example.series1)
data(example.series2)
# For information on their generation and shape see the help
# page of example.series.
help(example.series)
# Calculate the LB_Keogh distance measure for these two series
# with a band of width 11:
LBKeoghDistance(example.series1, example.series2, window.size=11)
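The envelope formulas above are easy to write out. A minimal sketch that builds U and L and accumulates the squared excursions of the reference series outside the envelope; treating w as the half-width of the band and truncating the window at the series ends are assumptions of this sketch, and lb_keogh_sketch is a hypothetical helper name.

lb_keogh_sketch <- function(x, y, w) {
  n <- length(x)
  # Upper and lower envelopes of the query series over a window of half-width w.
  U <- sapply(seq_len(n), function(i) max(x[max(1, i - w):min(n, i + w)]))
  L <- sapply(seq_len(n), function(i) min(x[max(1, i - w):min(n, i + w)]))
  above <- y > U  # reference points above the upper envelope
  below <- y < L  # reference points below the lower envelope
  sqrt(sum((y[above] - U[above])^2) + sum((L[below] - y[below])^2))
}
lb_keogh_sketch(example.series1, example.series2, w = 5)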
LCSSDistance    Longest Common Subsequence distance for Real Sequences.

Description
Computes the Longest Common Subsequence distance between a pair of numeric time series.
Usage
LCSSDistance(x, y, epsilon, sigma)
Arguments
x        Numeric vector containing the first time series.
y        Numeric vector containing the second time series.
epsilon  A positive threshold value that defines the distance.
sigma    If desired, a Sakoe-Chiba windowing constraint can be added by specifying a positive integer representing the window size.
Details
The Longest Common Subsequence for two real sequences is computed. For this purpose, the distances between the points of x and y are reduced to 0 or 1. If the Euclidean distance between two points x_i and y_j is smaller than epsilon, they are considered equal and their distance is reduced to 0. In the opposite case, the distance between them is represented with a value of 1. Once the distance matrix is defined in this manner, the longest common subsequence is sought. As in other edit-based distances, gaps or unmatched regions are permitted, and they are penalized with a value proportional to their length. Based on its definition, the length of series x and y may be different.
If desired, a temporal constraint may be added to the LCSS distance. In this package, only the most basic windowing function, introduced by H. Sakoe and S. Chiba (1978), is implemented. This function sets a band around the main diagonal of the distance matrix and avoids the matching of points that are farther apart in time than a specified sigma. The size of the window must be a positive integer value. Furthermore, the following condition must be fulfilled:
|length(x) - length(y)| < sigma
Value
d    The computed distance between the pair of series.
Author(s)
<NAME>, <NAME>, <NAME>.
References
<NAME>., <NAME>., & <NAME>. (2002). Discovering similar multidimensional trajectories. In Proceedings 18th International Conference on Data Engineering (pp. 673-684). IEEE Comput. Soc. doi:10.1109/ICDE.2002.994784
<NAME>., & <NAME>. (2004). On The Marriage of Lp-norms and Edit Distance. In Proceedings of the Thirtieth International Conference on Very Large Data Bases (pp. 792–803).
<NAME>. (2011). Fast Global Alignment Kernels. In Proceedings of the 28th International Conference on Machine Learning (pp. 929–936).
<NAME>., <NAME>., & <NAME>. (2011). A time series kernel for action recognition. In BMVC 2011 - British Machine Vision Conference (pp. 63.1–63.11).
<NAME>.-F., & <NAME>. (2014). On Recursive Edit Distance Kernels With Applications To Time Series Classification. IEEE Transactions on Neural Networks and Learning Systems, PP(6), 1–13.
<NAME>., & <NAME>. (2007). A Study on the Dynamic Time Warping in Kernel Machines. In 2007 Third International IEEE Conference on Signal-Image Technologies and Internet-Based System (pp. 839–845).
<NAME>., <NAME>., <NAME>., <NAME>., <NAME>., & <NAME>. (2014). On general purpose time series similarity measures and their use as kernel functions in support vector machines. Information Sciences, 281, 478–495.
See Also
To calculate this distance measure using ts, zoo or xts objects see TSDistances. To calculate distance matrices of time series databases using this measure see TSDatabaseDistances.
Examples
# The objects example.series3 and example.series4 are two
# numeric series of length 100 and 120 contained in the TSdist
# package.
data(example.series3)
data(example.series4)
# For information on their generation and shape see the
# help page of example.series.
help(example.series)
# Calculate the LCSS distance for two series of different length
# with no windowing constraint:
LCSSDistance(example.series3, example.series4, epsilon=0.1)
# Calculate the LCSS distance for two series of different length
# with a window of size 30:
LCSSDistance(example.series3, example.series4, epsilon=0.1, sigma=30)
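The first step of the Details, thresholding point-pair distances with epsilon, can be sketched in one expression; the dynamic program that then extracts the longest chain of matches is omitted here.

# 0/1 match matrix: 0 where |x_i - y_j| < epsilon (points considered equal), 1 otherwise
epsilon <- 0.1
M <- outer(example.series3, example.series4,
           function(a, b) as.numeric(abs(a - b) >= epsilon))
dim(M)  # length(x) x length(y); LCSS then seeks the longest common subsequence of matches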
LPDistance    Lp distances.

Description
Computes the distance based on the chosen Lp norm between a pair of numeric vectors.
Usage
LPDistance(x, y, method="euclidean", ...)
Arguments
x       Numeric vector containing the first time series.
y       Numeric vector containing the second time series.
method  A value in "euclidean", "manhattan", "infnorm", "minkowski".
...     If method="minkowski", a positive integer value must be specified for p.
Details
The distances based on Lp norms are computed between two numeric vectors using the following formulas:
Euclidean distance: D = sqrt( sum_i (x_i - y_i)^2 )
Manhattan distance: D = sum_i |x_i - y_i|
Infinite norm distance: D = max_i |x_i - y_i|
Minkowski distance: D = ( sum_i |x_i - y_i|^p )^(1/p)
The two series must have the same length. Furthermore, in the case of the Minkowski distance, p must be specified as a positive integer value.
Value
d    The computed distance between the pair of series.
Author(s)
<NAME>, <NAME>, <NAME>.
See Also
These distances are also implemented in separate functions. For more information see EuclideanDistance, ManhattanDistance, MinkowskiDistance and InfNormDistance. To calculate this distance measure using ts, zoo or xts objects see TSDistances. To calculate distance matrices of time series databases using this measure see TSDatabaseDistances.
Examples
# The objects example.series1 and example.series2 are two
# numeric series of length 100 contained in the TSdist package.
data(example.series1)
data(example.series2)
# For information on their generation and shape see the help
# page of example.series.
help(example.series)
# Compute the different Lp distances
# Euclidean distance
LPDistance(example.series1, example.series2, method="euclidean")
# Manhattan distance
LPDistance(example.series1, example.series2, method="manhattan")
# Infinite norm distance
LPDistance(example.series1, example.series2, method="infnorm")
# Minkowski distance with p=3.
LPDistance(example.series1, example.series2, method="minkowski", p=3)
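The four formulas above reduce to base-R one-liners; this quick sketch should agree with the corresponding LPDistance calls up to floating-point error.

x <- example.series1
y <- example.series2
sqrt(sum((x - y)^2))         # Euclidean
sum(abs(x - y))              # Manhattan
max(abs(x - y))              # infinite norm
p <- 3
(sum(abs(x - y)^p))^(1 / p)  # Minkowski with p = 3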
ManhattanDistance    Manhattan distance.

Description
Computes the Manhattan distance between a pair of numeric vectors.
Usage
ManhattanDistance(x, y)
Arguments
x    Numeric vector containing the first time series.
y    Numeric vector containing the second time series.
Details
The Manhattan distance is computed between the two numeric series using the following formula:
D = sum_i |x_i - y_i|
The two series must have the same length. This distance is calculated with the help of the dist function of the proxy package.
Value
d    The computed distance between the pair of series.
Author(s)
<NAME>, <NAME>, <NAME>.
References
<NAME> and <NAME> (2015). proxy: Distance and Similarity Measures. R package version 0.4-14. http://CRAN.R-project.org/package=proxy
See Also
This function can also be invoked by the wrapper function LPDistance. Furthermore, to calculate this distance measure using ts, zoo or xts objects see TSDistances. To calculate distance matrices of time series databases using this measure see TSDatabaseDistances.
Examples
# The objects example.series1 and example.series2 are two
# numeric series of length 100 contained in the TSdist package.
data(example.series1)
data(example.series2)
# For information on their generation and shape see the help
# page of example.series.
help(example.series)
# Compute the Manhattan distance between them:
ManhattanDistance(example.series1, example.series2)

MindistSaxDistance    Symbolic Aggregate Approximation based dissimilarity.

Description
Calculates the dissimilarity between two numerical series based on the distance between their SAX representations.
Usage
MindistSaxDistance(x, y, w, ...)
Arguments
x    Numeric vector containing the first time series.
y    Numeric vector containing the second time series.
w    The number of equal-sized windows that the series will be reduced to.
...  Additional parameters for the function. See diss.MINDIST.SAX for more information.
Details
This is simply a wrapper for the diss.MINDIST.SAX function of package TSclust. As such, all the functionalities of the diss.MINDIST.SAX function are also available when using this function.
Value
d    The computed distance between the pair of series.
Author(s)
<NAME>, <NAME>, <NAME>.
References
<NAME>, <NAME> (2014). TSclust: An R Package for Time Series Clustering. Journal of Statistical Software, 62(1), 1-43. URL http://www.jstatsoft.org/v62/i01/.
See Also
To calculate this distance measure using ts, zoo or xts objects see TSDistances. To calculate distance matrices of time series databases using this measure see TSDatabaseDistances.
Examples
# The objects example.series3 and example.series4 are two
# numeric series of length 100 and 120 respectively.
data(example.series3)
data(example.series4)
# For information on their generation and shape see the
# help page of example.series.
help(example.series)
# Calculate the mindist.sax distance between the two series using
# 20 equal sized windows for each series. The rest of the parameters
# are left in their default mode.
MindistSaxDistance(example.series3, example.series4, w=20)

MinkowskiDistance    Minkowski distance.

Description
Computes the Minkowski distance between two numeric vectors for a given p.
Usage
MinkowskiDistance(x, y, p)
Arguments
x    Numeric vector containing the first time series.
y    Numeric vector containing the second time series.
p    A strictly positive integer value that defines the chosen Lp norm.
Details
The Minkowski distance is computed between the two numeric series using the following formula:
D = ( sum_i |x_i - y_i|^p )^(1/p)
The two series must have the same length and p must be a positive integer value. This distance is calculated with the help of the dist function of the proxy package.
Value
d    The computed distance between the pair of series.
Author(s)
<NAME>, <NAME>, <NAME>.
References
<NAME> and <NAME> (2015). proxy: Distance and Similarity Measures. R package version 0.4-14. http://CRAN.R-project.org/package=proxy
See Also
This function can also be invoked by the wrapper function LPDistance. Furthermore, to calculate this distance measure using ts, zoo or xts objects see TSDistances. To calculate distance matrices of time series databases using this measure see TSDatabaseDistances.
Examples
# The objects example.series1 and example.series2 are two
# numeric series of length 100 contained in the TSdist package.
data(example.series1)
data(example.series2)
# For information on their generation and shape see the help
# page of example.series.
help(example.series)
# Compute the Minkowski distance between them:
MinkowskiDistance(example.series1, example.series2, p=3)
NCDDistance    Normalized Compression based distance.

Description
Calculates a normalized distance between two numerical series based on their compressed sizes.
Usage
NCDDistance(x, y, ...)
Arguments
x    Numeric vector containing the first time series.
y    Numeric vector containing the second time series.
...  Additional parameters for the function. See diss.NCD for more information.
Details
This is simply a wrapper for the diss.NCD function of package TSclust. As such, all the functionalities of the diss.NCD function are also available when using this function.
Value
d    The computed distance between the pair of series.
Author(s)
<NAME>, <NAME>, <NAME>.
References
<NAME>, <NAME> (2014). TSclust: An R Package for Time Series Clustering. Journal of Statistical Software, 62(1), 1-43. URL http://www.jstatsoft.org/v62/i01/.
See Also
To calculate this distance measure using ts, zoo or xts objects see TSDistances. To calculate distance matrices of time series databases using this measure see TSDatabaseDistances.
Examples
# The objects example.series3 and example.series4 are two
# numeric series of length 100 and 120 respectively.
data(example.series3)
data(example.series4)
# For information on their generation and shape see the
# help page of example.series.
help(example.series)
# Calculate the normalized compression based distance between the two series
# using default parameters.
NCDDistance(example.series3, example.series4)

OneNN    1NN classification for a pair of train/test time series datasets.

Description
Given a specific distance measure, this function provides the 1NN classification values and the associated error for a specific train/test pair of time series databases.
Usage
OneNN(train, trainc, test, testc, distance, ...)
Arguments
train   Time series database saved in a numeric matrix, a list, an mts object, a zoo object or xts object.
trainc  Numerical vector which indicates the class of each of the series in the training set.
test    Time series database saved in a numeric matrix, a list, an mts object, a zoo object or xts object.
testc   Numerical vector which indicates the class of each of the series in the testing set.
distance    Distance measure to be used. It must be one of: "euclidean", "manhattan", "minkowski", "infnorm", "ccor", "sts", "dtw", "keogh_lb", "edr", "erp", "lcss", "fourier", "tquest", "dissimfull", "dissimapprox", "acf", "pacf", "ar.lpc.ceps", "ar.mah", "ar.mah.statistic", "ar.mah.pvalue", "ar.pic", "cdm", "cid", "cor", "cort", "wav", "int.per", "per", "mindist.sax", "ncd", "pred", "spec.glk", "spec.isd", "spec.llr", "pdc", "frechet".
...     Additional parameters required by the chosen distance measure.
Details
This function is useful to evaluate the performance of different distance measures in the task of classification of time series.
Value
classes  Numerical vector providing the predicted class values for the series in the test set.
error    Error obtained in the 1NN classification process.
Author(s)
<NAME>, <NAME>, <NAME>.
See Also
To calculate the distance matrices of time series databases the TSDatabaseDistances function is used.
Examples
# The example.database2 synthetic database is loaded
data(example.database2)
# Create train/test by dividing the dataset 70%-30%
set.seed(100)
trainindex <- sample(1:100, 70, replace=FALSE)
train <- example.database2[[1]][trainindex, ]
test <- example.database2[[1]][-trainindex, ]
trainclass <- example.database2[[2]][trainindex]
testclass <- example.database2[[2]][-trainindex]
# Apply the 1NN classifier for different distance measures
OneNN(train, trainclass, test, testclass, "euclidean")
OneNN(train, trainclass, test, testclass, "pdc")
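The 1NN rule that OneNN applies can be sketched on top of a cross-distance matrix; that TSDatabaseDistances(X, Y) returns distances with rows indexing X and columns indexing Y is an assumption of this sketch, which reuses the train/test objects built in the Examples above.

# Nearest training series for each test series, then the 1NN error rate.
D <- as.matrix(TSDatabaseDistances(train, test, distance = "euclidean"))
pred <- trainclass[apply(D, 2, which.min)]  # assumes columns index the test series
mean(pred != testclass)                     # misclassification rate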
PACFDistance    Partial Autocorrelation-based Dissimilarity.

Description
Computes the dissimilarity between a pair of numeric time series based on their estimated partial autocorrelation coefficients.
Usage
PACFDistance(x, y, ...)
Arguments
x    Numeric vector containing the first time series.
y    Numeric vector containing the second time series.
...  Additional parameters for the function. See diss.PACF for more information.
Details
This is simply a wrapper for the diss.PACF function of package TSclust. As such, all the functionalities of the diss.PACF function are also available when using this function.
Value
d    The computed distance between the pair of series.
Author(s)
<NAME>, <NAME>, <NAME>.
References
<NAME>, <NAME> (2014). TSclust: An R Package for Time Series Clustering. Journal of Statistical Software, 62(1), 1-43. URL http://www.jstatsoft.org/v62/i01/.
See Also
To calculate this distance measure using ts, zoo or xts objects see TSDistances. To calculate distance matrices of time series databases using this measure see TSDatabaseDistances.
Examples
# The objects example.series3 and example.series4 are two
# numeric series of length 100 and 120 contained in the
# TSdist package.
data(example.series3)
data(example.series4)
# For information on their generation and shape see the
# help page of example.series.
help(example.series)
# Calculate the partial autocorrelation based distance between the two
# series using the default parameters:
PACFDistance(example.series3, example.series4)
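For intuition about the PACF-based measure, an unweighted sketch using stats::pacf; diss.PACF supports weighting the coefficients, so this plain Euclidean version is only an approximation of what the wrapper computes.

px <- as.numeric(pacf(example.series3, plot = FALSE)$acf)  # partial autocorrelations of x
py <- as.numeric(pacf(example.series4, plot = FALSE)$acf)  # partial autocorrelations of y
m <- min(length(px), length(py))   # the series differ in length, so the lag counts may too
sqrt(sum((px[1:m] - py[1:m])^2))   # unweighted Euclidean distance between PACF vectors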
PDCDistance    Permutation Distribution Distance.

Description
Calculates the permutation distribution distance between two numerical series of the same length.
Usage
PDCDistance(x, y, ...)
Arguments
x    Numeric vector containing the first time series.
y    Numeric vector containing the second time series.
...  Additional parameters for the function. See pdcDist for more information.
Details
This is simply a wrapper for the pdcDist function of package pdc. As such, all the functionalities of the pdcDist function are also available when using this function.
Value
d    The computed distance between the pair of series.
Author(s)
<NAME>, <NAME>, <NAME>.
References
<NAME> (2015). pdc: An R package for Complexity-Based Clustering of Time Series. Journal of Statistical Software, Vol 67, Issue 5.
<NAME>, <NAME> (2014). TSclust: An R Package for Time Series Clustering. Journal of Statistical Software, 62(1), 1-43. URL http://www.jstatsoft.org/v62/i01/.
See Also
To calculate this distance measure using ts, zoo or xts objects see TSDistances. To calculate distance matrices of time series databases using this measure see TSDatabaseDistances.
Examples
# The objects example.series1 and example.series2 are two
# numeric series of length 100.
data(example.series1)
data(example.series2)
# For information on their generation and shape see the
# help page of example.series.
help(example.series)
# Calculate the permutation distribution distance between the two series
# using the default parameters.
PDCDistance(example.series1, example.series2)

PerDistance    Periodogram based dissimilarity.

Description
Calculates the dissimilarity between two numerical series of the same length based on the distance between their periodograms.
Usage
PerDistance(x, y, ...)
Arguments
x    Numeric vector containing the first time series.
y    Numeric vector containing the second time series.
...  Additional parameters for the function. See diss.PER for more information.
Details
This is simply a wrapper for the diss.PER function of package TSclust. As such, all the functionalities of the diss.PER function are also available when using this function.
Value
d    The computed distance between the pair of series.
Author(s)
<NAME>, <NAME>, <NAME>.
References
<NAME>, <NAME> (2014). TSclust: An R Package for Time Series Clustering. Journal of Statistical Software, 62(1), 1-43. URL http://www.jstatsoft.org/v62/i01/.
See Also
To calculate this distance measure using ts, zoo or xts objects see TSDistances. To calculate distance matrices of time series databases using this measure see TSDatabaseDistances.
Examples
# The objects example.series1 and example.series2 are two
# numeric series of length 100.
data(example.series1)
data(example.series2)
# For information on their generation and shape see the
# help page of example.series.
help(example.series)
# Calculate the periodogram based distance between the two series using
# the default parameters.
PerDistance(example.series1, example.series2)
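For intuition, a rough sketch of a periodogram comparison with stats::spec.pgram; diss.PER offers normalization and log options, so this unnormalized Euclidean version is only illustrative.

px <- spec.pgram(example.series1, plot = FALSE)$spec  # raw periodogram of x
py <- spec.pgram(example.series2, plot = FALSE)$spec  # raw periodogram of y
sqrt(sum((px - py)^2))  # Euclidean distance between the two periodograms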
PredDistance    Dissimilarity Measure Based on Nonparametric Forecasts.

Description
The dissimilarity of two numerical series of the same length is calculated based on the L1 distance between the kernel estimators of their forecast densities at a given time horizon.
Usage
PredDistance(x, y, h, ...)
Arguments
x    Numeric vector containing the first time series.
y    Numeric vector containing the second time series.
h    Integer value representing the prediction horizon.
...  Additional parameters for the function. See diss.PRED for more information.
Details
This is simply a wrapper for the diss.PRED function of package TSclust. As such, all the functionalities of the diss.PRED function are also available when using this function.
Value
d    The computed distance between the pair of series.
Author(s)
<NAME>, <NAME>, <NAME>.
References
<NAME>, <NAME> (2014). TSclust: An R Package for Time Series Clustering. Journal of Statistical Software, 62(1), 1-43. URL http://www.jstatsoft.org/v62/i01/.
See Also
To calculate this distance measure using ts, zoo or xts objects see TSDistances. To calculate distance matrices of time series databases using this measure see TSDatabaseDistances.
Examples
# The objects example.series1 and example.series2 are two
# numeric series of length 100.
data(example.series1)
data(example.series2)
# For information on their generation and shape see the
# help page of example.series.
help(example.series)
# Calculate the prediction based distance between the two series using
# the default parameters.
PredDistance(example.series1, example.series2)

SpecGLKDistance    Dissimilarity based on the Generalized Likelihood Ratio Test.

Description
The dissimilarity of two numerical series of the same length is calculated based on an adaptation of the generalized likelihood ratio test.
Usage
SpecGLKDistance(x, y, ...)
Arguments
x    Numeric vector containing the first time series.
y    Numeric vector containing the second time series.
...  Additional parameters for the function. See diss.SPEC.GLK for more information.
Details
This function intends to be a wrapper for the diss.SPEC.GLK function of package TSclust. However, version 1.2.3 of the TSclust package contains an error in the call to this function. As such, in this version, the more general diss function, designed for distance matrix calculations of time series databases, is used to calculate the spec.glk distance between two series. Once this bug is fixed in the original package, we will update our call procedure.
Value
d    The computed distance between the pair of series.
Author(s)
<NAME>, <NAME>, <NAME>.
References
<NAME>, <NAME> (2014). TSclust: An R Package for Time Series Clustering. Journal of Statistical Software, 62(1), 1-43. URL http://www.jstatsoft.org/v62/i01/.
See Also
To calculate this distance measure using ts, zoo or xts objects see TSDistances. To calculate distance matrices of time series databases using this measure see TSDatabaseDistances.
Examples
# The objects example.series1 and example.series2 are two
# numeric series of length 100.
data(example.series1)
data(example.series2)
# For information on their generation and shape see the
# help page of example.series.
help(example.series)
# Calculate the spec.glk distance between the two series using
# the default parameters.
SpecGLKDistance(example.series1, example.series2)

SpecISDDistance    Dissimilarity Based on the Integrated Squared Difference between the Log-Spectra.

Description
The dissimilarity of two numerical series of the same length is calculated based on the integrated squared difference between the non-parametric estimators of their log-spectra.
Usage
SpecISDDistance(x, y, ...)
Arguments
x    Numeric vector containing the first time series.
y    Numeric vector containing the second time series.
...  Additional parameters for the function. See diss.SPEC.ISD for more information.
Details
This is simply a wrapper for the diss.SPEC.ISD function of package TSclust. As such, all the functionalities of the diss.SPEC.ISD function are also available when using this function.
Value
d    The computed distance between the pair of series.
Author(s)
<NAME>, <NAME>, <NAME>.
References
<NAME>, <NAME> (2014). TSclust: An R Package for Time Series Clustering. Journal of Statistical Software, 62(1), 1-43. URL http://www.jstatsoft.org/v62/i01/.
See Also
To calculate this distance measure using ts, zoo or xts objects see TSDistances. To calculate distance matrices of time series databases using this measure see TSDatabaseDistances.
Examples
# The objects example.series1 and example.series2 are two
# numeric series of length 100.
data(example.series1)
data(example.series2)
# For information on their generation and shape see the
# help page of example.series.
help(example.series)
# Calculate the spec.isd distance between the two series using
# the default parameters.
SpecISDDistance(example.series1, example.series2)
SpecLLRDistance    General Spectral Dissimilarity Measure Using Local-Linear Estimation of the Log-Spectra.

Description
The dissimilarity of two numerical series of the same length is calculated based on the ratio between local linear estimations of the log-spectra.
Usage
SpecLLRDistance(x, y, ...)
Arguments
x    Numeric vector containing the first time series.
y    Numeric vector containing the second time series.
...  Additional parameters for the function. See diss.SPEC.LLR for more information.
Details
This is simply a wrapper for the diss.SPEC.LLR function of package TSclust. As such, all the functionalities of the diss.SPEC.LLR function are also available when using this function.
Value
d    The computed distance between the pair of series.
Author(s)
<NAME>, <NAME>, <NAME>.
References
<NAME>, <NAME> (2014). TSclust: An R Package for Time Series Clustering. Journal of Statistical Software, 62(1), 1-43. URL http://www.jstatsoft.org/v62/i01/.
See Also
To calculate this distance measure using ts, zoo or xts objects see TSDistances. To calculate distance matrices of time series databases using this measure see TSDatabaseDistances.
Examples
# The objects example.series1 and example.series2 are two
# numeric series of length 100.
data(example.series1)
data(example.series2)
# For information on their generation and shape see the
# help page of example.series.
help(example.series)
# Calculate the spec.llr distance between the two series using
# the default parameters.
SpecLLRDistance(example.series1, example.series2)

STSDistance    Short time series distance (STS).

Description
Computes the Short Time Series Distance between a pair of numeric time series.
Usage
STSDistance(x, y, tx, ty)
Arguments
x    Numeric vector containing the first time series.
y    Numeric vector containing the second time series.
tx   If not constant, a numeric vector that specifies the sampling index of series x.
ty   If not constant, a numeric vector that specifies the sampling index of series y.
Details
The short time series distance between two series is designed specifically for series that share an equal, but possibly uneven, sampling rate. However, it can also be used for time series with a constant sampling rate. It is calculated as follows:
STS = sqrt( sum_{k=1,...,N-1} ( (y_{k+1} - y_k) / (ty_{k+1} - ty_k) - (x_{k+1} - x_k) / (tx_{k+1} - tx_k) )^2 )
where N is the length of series x and y. tx and ty must be positive and strictly increasing. Furthermore, the sampling rate in both indexes must be equal:
tx[k+1] - tx[k] = ty[k+1] - ty[k], for k = 1, ..., N-1
Value
d    The computed distance between the pair of series.
Author(s)
<NAME>, <NAME>, <NAME>.
References
Möller-<NAME>., <NAME>., <NAME>., & <NAME>. (2003). Fuzzy Clustering of Short Time-Series and Unevenly Distributed Sampling Points. In Proceedings of the 5th International Symposium on Intelligent Data Analysis.
See Also
To calculate this distance measure using ts, zoo or xts objects see TSDistances. To calculate distance matrices of time series databases using this measure see TSDatabaseDistances.
Examples
# The objects example.series1 and example.series2 are two
# numeric series of length 100 contained in the TSdist package.
data(example.series1)
data(example.series2)
# For information on their generation and shape see the help
# page of example.series.
help(example.series)
# Calculate the STS distance assuming even sampling:
STSDistance(example.series1, example.series2)
# Calculate the STS distance providing an uneven sampling:
tx <- unique(c(seq(2, 175, 2), seq(7, 175, 7)))
tx <- tx[order(tx)]
ty <- tx
STSDistance(example.series1, example.series2, tx, ty)
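The STS formula is just the Euclidean norm of the difference between the two slope sequences, which diff() expresses compactly; a minimal sketch (sts_sketch is a hypothetical helper name):

sts_sketch <- function(x, y, tx = seq_along(x), ty = seq_along(y)) {
  slopes_x <- diff(x) / diff(tx)  # slopes of x between consecutive sampling points
  slopes_y <- diff(y) / diff(ty)  # slopes of y between consecutive sampling points
  sqrt(sum((slopes_y - slopes_x)^2))
}
sts_sketch(example.series1, example.series2)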
TAMDistance    Time Alignment Measurement (TAM) distance.

Description
Computes the Time Alignment Measurement between a pair of numeric time series.
Usage
TAMDistance(x, y)
Arguments
x    Numeric vector containing the first time series.
y    Numeric vector containing the second time series.
Details
The Time Alignment Measurement (TAM) between two numeric series is calculated. It quantifies the degree of temporal distortion between two time series. The main idea behind TAM is to measure the warping cost between one time series and another. TAM is calculated from the optimal alignment warping path between two time series provided by dtw, which allows characterizing the intervals in which the series are in phase, in advance, or in delay. This distance penalizes signals in which advance or delay is present and benefits series that are in phase with each other. As the distance increases, the dissimilarity between both signals also increases. The distance is bounded between 0 (both series are in phase) and 3 (both series are completely out of phase). The length of series x and y may be different.
Value
d    The computed distance between the pair of series.
Author(s)
<NAME>
References
<NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME> (2018). Time Alignment Measurement for Time Series. Pattern Recognition 81, pp. 268-279.
See Also
To calculate this distance measure using ts, zoo or xts objects see TSDistances. To calculate distance matrices of time series databases using this measure see TSDatabaseDistances.
Examples
# The objects example.series1 and example.series2 are two
# numeric series of length 100 contained in the TSdist package.
data(example.series1)
data(example.series2)
# The objects example.series3 and example.series4 are two
# numeric series of length 100 and 120 contained in the TSdist
# package.
data(example.series3)
data(example.series4)
# For information on their generation and shape see the
# help page of example.series.
help(example.series)
# Calculate the TAM distance for two series of the same length:
TAMDistance(example.series1, example.series2)
# Calculate the TAM distance for two series of different length:
TAMDistance(example.series3, example.series4)

TquestDistance    Tquest distance.

Description
Computes the Tquest distance between a pair of numeric vectors.
Usage
TquestDistance(x, y, tx, ty, tau)
Arguments
x    Numeric vector containing the first time series.
y    Numeric vector containing the second time series.
tx   If not constant, temporal index of series x.
ty   If not constant, temporal index of series y.
tau  Parameter (threshold) used to define the threshold passing intervals.
Details
The Tquest distance represents the series based on a set of intervals that fulfill the following conditions:
1. All the values that the time series takes during these time intervals must be strictly above a user-specified threshold tau.
2. They are the largest possible intervals that satisfy the previous condition.
The final distance between two series is defined in terms of the similarity between their threshold passing interval sets. For more information, see references.
Value
d    The computed distance between the pair of series.
Author(s)
<NAME>, <NAME>, <NAME>.
References
<NAME>., <NAME>., <NAME>., <NAME>., <NAME>., & <NAME>. (2006). Similarity Search on Time Series based on Threshold Queries. In Proceedings of the 10th international conference on Advances in Database Technology (pp. 276-294).
<NAME>., & <NAME>. (2012). Time-series data mining. ACM Computing Surveys (CSUR), 45(1), 1–34.
See Also
To calculate this distance measure using ts, zoo or xts objects see TSDistances. To calculate distance matrices of time series databases using this measure see TSDatabaseDistances.
Examples
# The objects example.series1 and example.series2 are two
# numeric series of length 100 contained in the TSdist package.
data(example.series1)
data(example.series2)
# For information on their generation and shape see the help
# page of example.series.
help(example.series)
# Calculate the Tquest distance assuming even sampling:
TquestDistance(example.series1, example.series2, tau=2.5)
# The objects example.series3 and example.series4 are two
# numeric series of length 100 and 120 contained in the TSdist
# package.
data(example.series3)
data(example.series4)
# Calculate the Tquest distance for two series of different length:
TquestDistance(example.series3, example.series4, tau=2.5)
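The threshold-passing intervals of the Details can be extracted with run-length encoding; a minimal sketch of that first step (the interval-set similarity that TquestDistance then computes is omitted):

tau <- 2.5
above <- example.series1 > tau  # TRUE where the series is strictly above tau
r <- rle(above)                 # runs of consecutive TRUE/FALSE values
ends <- cumsum(r$lengths)
starts <- ends - r$lengths + 1
# Maximal intervals during which the series stays above the threshold:
cbind(start = starts, end = ends)[r$values, , drop = FALSE]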
TSDatabaseDistances    TSdist distance matrix computation.

Description
TSdist distance matrix computation for time series databases.
Usage
TSDatabaseDistances(X, Y=NULL, distance, ...)
Arguments
X    Time series database saved in a numeric matrix, a list, an mts object, a zoo object or xts object.
Y    Time series database saved in a numeric matrix, a list, an mts object, a zoo object or xts object. Should only be defined for calculation of distance matrices between two different databases, so the default value is NULL.
distance    Distance measure to be used. It must be one of: "euclidean", "manhattan", "minkowski", "infnorm", "ccor", "sts", "dtw", "keogh.lb", "edr", "erp", "lcss", "fourier", "tquest", "dissim", "acf", "pacf", "ar.lpc.ceps", "ar.mah", "ar.mah.statistic", "ar.mah.pvalue", "ar.pic", "cdm", "cid", "cor", "cort", "wav", "int.per", "per", "mindist.sax", "ncd", "pred", "spec.glk", "spec.isd", "spec.llr", "pdc", "frechet", "tam".
...    Additional parameters required by the chosen distance measure.
Details
The distance matrix of a time series database is calculated by providing the pair-wise distances between the series that conform it. X can be saved in a numeric matrix, a list or an mts, zoo or xts object. The following distance methods are supported:
• "euclidean": Euclidean distance. EuclideanDistance
• "manhattan": Manhattan distance. ManhattanDistance
• "minkowski": Minkowski distance. MinkowskiDistance
• "infnorm": Infinite norm distance. InfNormDistance
• "ccor": Distance based on the cross-correlation. CCorDistance
• "sts": Short time series distance. STSDistance
• "dtw": Dynamic Time Warping distance. DTWDistance. Uses the dtw package (see dtw).
• "lb.keogh": LB_Keogh lower bound for the Dynamic Time Warping distance. LBKeoghDistance
• "edr": Edit distance for real sequences. EDRDistance
• "erp": Edit distance with real penalty. ERPDistance
• "lcss": Longest Common Subsequence Matching. LCSSDistance
• "fourier": Distance based on the Discrete Fourier Transform. FourierDistance
• "tquest": Tquest distance. TquestDistance
• "dissim": Dissim distance. DissimDistance
• "acf": Autocorrelation-based dissimilarity ACFDistance. Uses the TSclust package (see diss.ACF).
• "pacf": Partial autocorrelation-based dissimilarity PACFDistance. Uses the TSclust package (see diss.PACF).
• "ar.lpc.ceps": Dissimilarity based on LPC cepstral coefficients ARLPCCepsDistance. Uses the TSclust package (see diss.AR.LPC.CEPS).
• "ar.mah": Model-based dissimilarity proposed by Maharaj (1996, 2000) ARMahDistance. Uses the TSclust package (see diss.AR.MAH).
• "ar.pic": Model-based dissimilarity measure proposed by Piccolo (1990) ARPicDistance. Uses the TSclust package (see diss.AR.PIC).
• "cdm": Compression-based dissimilarity measure CDMDistance. Uses the TSclust package (see diss.CDM).
• "cid": Complexity-invariant distance measure CIDDistance. Uses the TSclust package (see diss.CID).
• "cor": Dissimilarities based on Pearson's correlation CorDistance. Uses the TSclust package (see diss.COR).
• "cort": Dissimilarity index which combines temporal correlation and raw value behaviors CortDistance. Uses the TSclust package (see diss.CORT).
• "int.per": Integrated periodogram based dissimilarity IntPerDistance. Uses the TSclust package (see diss.INT.PER).
• "per": Periodogram based dissimilarity PerDistance. Uses the TSclust package (see diss.PER).
• "mindist.sax": Symbolic Aggregate Approximation based dissimilarity MindistSaxDistance. Uses the TSclust package (see diss.MINDIST.SAX).
• "ncd": Normalized compression based distance NCDDistance. Uses the TSclust package (see diss.NCD).
• "pred": Dissimilarity measure based on nonparametric forecasts PredDistance. Uses the TSclust package (see diss.PRED).
• "spec.glk": Dissimilarity based on the generalized likelihood ratio test SpecGLKDistance. Uses the TSclust package (see diss.SPEC.GLK).
• "spec.isd": Dissimilarity based on the integrated squared difference between the log-spectra SpecISDDistance. Uses the TSclust package (see diss.SPEC.ISD).
• "spec.llr": General spectral dissimilarity measure using local-linear estimation of the log-spectra SpecLLRDistance. Uses the TSclust package (see diss.SPEC.LLR).
• "pdc": Permutation Distribution Distance PDCDistance. Uses the pdc package (see pdcDist).
• "frechet": Frechet distance FrechetDistance. Uses the longitudinalData package (see distFrechet).
• "tam": Time Alignment Measurement TAMDistance.
Some distance measures may require additional arguments. See the individual help pages (detailed above) for more information about each method. These parameters should be named in order to avoid mismatches. Finally, for options dissim, dissimapprox and sts, databases conformed of series with different sampling rates can be introduced as a list of zoo, xts or ts objects, where each element in the list is a time series with its own time index (see the sketch after the Examples below).
Value
D    The computed distance matrix of the time series database. In some cases, such as the ar.mah or pred distances, some additional information is also provided.
Author(s)
<NAME>, <NAME>, <NAME>.
Examples
# The object example.database is a numeric matrix that saves
# 6 ARMA time series in a row-wise format. For more information
# see the help page of example.database:
help(example.database)
data(example.database)
# To calculate the distance matrix of this database:
TSDatabaseDistances(example.database, distance="manhattan")
TSDatabaseDistances(example.database, distance="edr", epsilon=0.2)
TSDatabaseDistances(example.database, distance="fourier", n=20)
# The object zoo.database is a zoo object that saves
# the same 6 ARMA time series saved in example.database.
data(zoo.database)
# To calculate the distance matrix of this database:
TSDatabaseDistances(zoo.database, distance="manhattan")
TSDatabaseDistances(zoo.database, distance="edr", epsilon=0.2)
TSDatabaseDistances(zoo.database, distance="fourier", n=20)
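As noted in the Details, databases of series with different sampling rates can be passed as a list of zoo, xts or ts objects. A small sketch, assuming the zoo package is installed; since the STS distance requires all series to share the same sampling-rate pattern, one common uneven index is reused.

library(zoo)
idx <- sort(unique(c(seq(2, 50, 2), seq(7, 50, 7))))  # uneven but shared sampling index
series.list <- list(zoo(rnorm(length(idx)), order.by = idx),
                    zoo(rnorm(length(idx)), order.by = idx),
                    zoo(rnorm(length(idx)), order.by = idx))
TSDatabaseDistances(series.list, distance = "sts")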
TSDistances    TSdist distance computation.

Description
TSdist distance calculation between two time series.
Usage
TSDistances(x, y, tx, ty, distance, ...)
Arguments
x    Numeric vector or ts, zoo or xts object containing the first time series.
y    Numeric vector or ts, zoo or xts object containing the second time series.
tx   Optional temporal index of series x. Only necessary if x is a numeric vector and the sampling index is not constant.
ty   Optional temporal index of series y. Only necessary if y is a numeric vector and the sampling index is not constant.
distance    Distance measure to be used. It must be one of: "euclidean", "manhattan", "minkowski", "infnorm", "ccor", "sts", "dtw", "keogh.lb", "edr", "erp", "lcss", "fourier", "tquest", "dissim", "acf", "pacf", "ar.lpc.ceps", "ar.mah", "ar.mah.statistic", "ar.mah.pvalue", "ar.pic", "cdm", "cid", "cor", "cort", "int.per", "per", "mindist.sax", "ncd", "pred", "spec.glk", "spec.isd", "spec.llr", "pdc", "frechet", "tam".
...    Additional parameters required by the distance method.
Details
The distance between the two time series x and y is calculated. x and y can be saved in a numeric vector or a ts, zoo or xts object. The following distance methods are supported:
• "euclidean": Euclidean distance. EuclideanDistance
• "manhattan": Manhattan distance. ManhattanDistance
• "minkowski": Minkowski distance. MinkowskiDistance
• "infnorm": Infinite norm distance. InfNormDistance
• "ccor": Distance based on the cross-correlation. CCorDistance
• "sts": Short time series distance. STSDistance
• "dtw": Dynamic Time Warping distance. DTWDistance. Uses the dtw package (see dtw).
• "lb.keogh": LB_Keogh lower bound for the Dynamic Time Warping distance. LBKeoghDistance
• "edr": Edit distance for real sequences. EDRDistance
• "erp": Edit distance with real penalty. ERPDistance
• "lcss": Longest Common Subsequence Matching. LCSSDistance
• "fourier": Distance based on the Discrete Fourier Transform. FourierDistance
• "tquest": Tquest distance. TquestDistance
• "dissim": Dissim distance. DissimDistance
• "acf": Autocorrelation-based dissimilarity ACFDistance. Uses the TSclust package (see diss.ACF).
• "pacf": Partial autocorrelation-based dissimilarity PACFDistance. Uses the TSclust package (see diss.PACF).
• "ar.lpc.ceps": Dissimilarity based on LPC cepstral coefficients ARLPCCepsDistance. Uses the TSclust package (see diss.AR.LPC.CEPS).
• "ar.mah": Model-based dissimilarity proposed by Maharaj (1996, 2000) ARMahDistance. Uses the TSclust package (see diss.AR.MAH).
• "ar.pic": Model-based dissimilarity measure proposed by Piccolo (1990) ARPicDistance. Uses the TSclust package (see diss.AR.PIC).
• "cdm": Compression-based dissimilarity measure CDMDistance. Uses the TSclust package (see diss.CDM).
• "cid": Complexity-invariant distance measure CIDDistance. Uses the TSclust package (see diss.CID).
• "cor": Dissimilarities based on Pearson's correlation CorDistance. Uses the TSclust package (see diss.COR).
• "cort": Dissimilarity index which combines temporal correlation and raw value behaviors CortDistance. Uses the TSclust package (see diss.CORT).
• "int.per": Integrated periodogram based dissimilarity IntPerDistance. Uses the TSclust package (see diss.INT.PER).
• "per": Periodogram based dissimilarity PerDistance. Uses the TSclust package (see diss.PER).
• "mindist.sax": Symbolic Aggregate Approximation based dissimilarity MindistSaxDistance. Uses the TSclust package (see diss.MINDIST.SAX).
• "ncd": Normalized compression based distance NCDDistance. Uses the TSclust package (see diss.NCD).
• "pred": Dissimilarity measure based on nonparametric forecasts PredDistance. Uses the TSclust package (see diss.PRED).
• "spec.glk": Dissimilarity based on the generalized likelihood ratio test SpecGLKDistance. Uses the TSclust package (see diss.SPEC.GLK).
• "spec.isd": Dissimilarity based on the integrated squared difference between the log-spectra SpecISDDistance. Uses the TSclust package (see diss.SPEC.ISD).
• "spec.llr": General spectral dissimilarity measure using local-linear estimation of the log-spectra SpecLLRDistance. Uses the TSclust package (see diss.SPEC.LLR).
• "pdc": Permutation Distribution Distance PDCDistance. Uses the pdc package (see pdcDist).
• "frechet": Frechet distance FrechetDistance. Uses the longitudinalData package (see distFrechet).
• "tam": Time Alignment Measurement TAMDistance.
Some distance measures may require additional arguments. See the individual help pages (detailed above) for more information about each method.
Value
d    The computed distance between the pair of time series.
Author(s)
<NAME>, <NAME>, <NAME>.
Examples
# The objects zoo.series1 and zoo.series2 are two
# zoo objects that save two series of length 100.
data(zoo.series1)
data(zoo.series2)
# For information on their generation and shape see the
# help page of example.series.
help(example.series)
# The distance calculation for these two series is done
# as follows:
TSDistances(zoo.series1, zoo.series2, distance="infnorm")
TSDistances(zoo.series1, zoo.series2, distance="cor", beta=3)
TSDistances(zoo.series1, zoo.series2, distance="dtw", sigma=20)
Package ‘ecoregime’
September 10, 2023
Title Analysis of Ecological Dynamic Regimes
Version 0.1.3
Description A toolbox for implementing the Ecological Dynamic Regime framework (Sánchez-Pinillos et al., 2023 <doi:10.1002/ecm.1589>) to characterize and compare groups of ecological trajectories in multidimensional spaces defined by state variables. The package includes the RETRA-EDR algorithm to identify representative trajectories, functions to generate, summarize, and visualize representative trajectories, and several metrics to quantify the distribution and heterogeneity of trajectories in an ecological dynamic regime and quantify the dissimilarity between two or more ecological dynamic regimes.
License GPL (>= 3)
Encoding UTF-8
RoxygenNote 7.2.3
URL https://mspinillos.github.io/ecoregime/, https://github.com/MSPinillos/ecoregime
BugReports https://github.com/MSPinillos/ecoregime/issues
Depends R (>= 3.4.0)
LazyData true
Imports ape, data.table, ecotraj, GDAtools, graphics, methods, shape, smacof, stats, stringr
Suggests knitr, primer, RColorBrewer, rmarkdown, testthat (>= 3.0.0), vegan
Config/testthat/edition 3
VignetteBuilder knitr
NeedsCompilation no
Author <NAME> [aut, cre, cph] (<https://orcid.org/0000-0002-1499-4507>)
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2023-09-10 17:10:02 UTC

R topics documented:
define_retra
dist_edr
EDR_data
EDR_metrics
plot.RETRA
retra_edr
summary.RETRA

define_retra    Define representative trajectories from trajectory features

Description
Generate an object of class RETRA from a data frame containing trajectory states to define representative trajectories in Ecological Dynamic Regimes (EDR).
Usage
define_retra(data, d = NULL, trajectories = NULL, states = NULL, retra = NULL)
Arguments
data    A data frame of four columns indicating identifiers for the new representative trajectories, the individual trajectories or sites to which the states belong, the order of the states in the individual trajectories, and the identifier of the representative trajectory to which the states belong (only if !is.null(retra)). Alternatively, 'data' can be a vector or a list of character vectors including the sequence of segments forming the new representative trajectory. See Details for further clarifications to define data.
d    Either a symmetric matrix or an object of class dist containing the dissimilarities between each pair of states of all trajectories in the EDR. If NULL (default), the length (Length) of the new representative trajectories and the distances between states of different trajectories or sites (Link_distance) are not calculated.
trajectories    Only needed if !is.null(d). Vector indicating the trajectory or site to which each state in d belongs.
states    Only needed if !is.null(d). Vector of integers indicating the order of the states in d for each trajectory.
retra    Object of class RETRA returned from retra_edr(). If NULL (default), minSegs and Seg_density are not provided for the new representative trajectories.
Details
Each representative trajectory returned by the function retra_edr() corresponds to the longest sequence of representative segments that can be linked according to the criteria defined in the RETRA-EDR algorithm (Sánchez-Pinillos et al., 2023). One could be interested in splitting the obtained trajectories, considering only a fraction of the returned trajectories, or defining representative trajectories following different criteria than those in RETRA-EDR.
The function define_retra() allows generating an object of class RETRA that can be used in other functions of ecoregime (e.g., plot()). For that, it is necessary to provide information about the set of segments or trajectory states that form the new representative trajectory through the argument data:
• data can be defined as a data frame with as many rows as the number of states in all representative trajectories and the following columns:
RT A string indicating the identifier of the new representative trajectories. Each identifier needs to appear as many times as the number of states forming each representative trajectory.
RT_traj A vector indicating the individual trajectories in the EDR to which each state of the new representative trajectory belongs.
RT_states A vector of integers indicating the identifier of the states forming the new representative trajectories. Each integer must refer to the order of the states in the individual trajectories of the EDR to which they belong.
RT_retra Only if the new trajectories are defined from representative trajectories returned by retra_edr() (when !is.null(retra)). A vector of strings indicating the representative trajectory in retra to which each state belongs.
• Alternatively, data can be defined as either a vector (if there is one representative trajectory) or a list of character vectors (with as many elements as the number of representative trajectories desired) containing the sequence of segments of the representative trajectories. In any case, each segment needs to be specified in the form traj[st1-st2], where traj is the identifier of the original trajectory to which the segment belongs and st1 and st2 are identifiers of the initial and final states defining the segment. If only one state of an individual trajectory is considered to form the representative trajectory, the corresponding segment needs to be defined as traj[st-st].
Value
An object of class RETRA, which is a list of length equal to the number of representative trajectories defined. For each trajectory, the following information is returned:
minSegs Value of the minSegs parameter used in retra_edr(). If retra is NULL, minSegs = NA.
Segments Vector of strings including the sequence of segments forming the representative trajectory. Each segment is identified by a string of the form traj[st1-st2], where traj is the identifier of the original trajectory to which the segment belongs and st1 and st2 are identifiers of the initial and final states defining the segment. The same format traj[st1-st2] is maintained when only one state of an individual trajectory is considered (st1 = st2). traj, st1, and st2 are recycled from data.
Size Integer indicating the number of states forming the representative trajectory.
Length Numeric value indicating the length of the representative trajectory, calculated as the sum of the dissimilarities in d between every pair of consecutive states. If d is NULL, Length = NA.
Link_distance Data frame of two columns indicating artificial links between two segments (Link) and the dissimilarity between the connected states (Distance). When two representative segments are linked by a common state or by two consecutive states of the same trajectory, the link distance is zero or equal to the length of a real segment, respectively. In both cases, the link is not considered in the returned data frame. If d is NULL, Link_distance = NA.
Seg_density Data frame of two columns and one row for each representative segment.
Density contains the number of segments in the EDR that are represented by each segment of the representative trajectory. kdTree_depth contains the depth of the k-d tree for each leaf represented by the corresponding segment. That is, the number of partitions of the ordination space until finding a region with minSegs segments or less. If retra is NULL, Seg_density = NA.
Author(s)
<NAME>
See Also
retra_edr() for identifying representative trajectories in EDRs through RETRA-EDR.
summary() for summarizing the characteristics of the representative trajectories.
plot() for plotting representative trajectories in an ordination space representing the state space of the EDR.
Examples
# Example 1 -----------------------------------------------------------------
# Define representative trajectories from the outputs of retra_edr().
# Identify representative trajectories using retra_edr()
d <- EDR_data$EDR1$state_dissim
trajectories <- EDR_data$EDR1$abundance$traj
states <- EDR_data$EDR1$abundance$state
old_retra <- retra_edr(d = d, trajectories = trajectories, states = states,
                       minSegs = 5)
# retra_edr() returns three representative trajectories
old_retra
# Keep the last five segments of trajectories "T2" and "T3"
selected_segs <- old_retra$T2$Segments[4:length(old_retra$T2$Segments)]
# Identify the individual trajectories for each state...
selected_segs
selected_traj <- rep(c(15, 4, 4, 1, 14), each = 2)
# ...and the states (in the same order as the representative trajectory).
selected_states <- c(1, 2, 2, 3, 3, 4, 1, 2, 2, 3)
# Generate the data frame with the format indicated in the documentation
df <- data.frame(RT = rep("A", length(selected_states)),
                 RT_traj = selected_traj,
                 RT_states = as.integer(selected_states),
                 RT_retra = rep("T2", length(selected_states)))
# Remove duplicates (trajectory 4, state 3)
df <- unique(df)
# Generate a RETRA object using define_retra()
new_retra <- define_retra(data = df, d = d, trajectories = trajectories,
                          states = states, retra = old_retra)
# Example 2 -----------------------------------------------------------------
# Define representative trajectories from sequences of segments
# Select all segments in T1, split T2 into two new trajectories, and include
# a trajectory composed of states belonging to trajectories "5", "6", and "7"
data <- list(old_retra$T1$Segments,
             old_retra$T2$Segments[1:3],
             old_retra$T2$Segments[4:8],
             c("5[1-2]", "5[2-3]", "7[4-4]", "6[4-5]"))
# Generate a RETRA object using define_retra()
new_retra <- define_retra(data = data, d = d, trajectories = trajectories,
                          states = states, retra = old_retra)
# Example 3 -----------------------------------------------------------------
# Define two representative trajectories from individual trajectories in EDR1.
# Define trajectory "A" from states in trajectories 3 and 4
data_A <- data.frame(RT = rep("A", 4),
                     RT_traj = c(3, 3, 4, 4),
                     RT_states = c(1:2, 4:5))
# Define trajectory "B" from states in trajectories 5, 6, and 7
data_B <- data.frame(RT = rep("B", 5),
                     RT_traj = c(5, 5, 7, 6, 6),
                     RT_states = c(1, 2, 4, 4, 5))
# Compile data for both trajectories in a data frame
df <- rbind(data_A, data_B)
df$RT_states <- as.integer(df$RT_states)
# Generate a RETRA object using define_retra()
new_retra <- define_retra(data = df, d = EDR_data$EDR1$state_dissim,
                          trajectories = EDR_data$EDR1$abundance$traj,
                          states = EDR_data$EDR1$abundance$state)
dist_edr Dissimilarities between Ecological Dynamic Regimes
Description
Generate a matrix containing dissimilarities between one or more pairs of Ecological Dynamic Regimes (EDR). dist_edr() computes different dissimilarity indices, all of them based on the dissimilarities between the trajectories of two EDRs.
Usage
dist_edr(
  d,
  d.type,
  trajectories = NULL,
  states = NULL,
  edr,
  metric = "dDR",
  symmetrize = NULL,
  ...
)
Arguments
d Symmetric matrix or object of class dist containing the dissimilarities between each pair of states of all trajectories in the EDR or the dissimilarities between each pair of trajectories.
d.type One of "dStates" (if d contains state dissimilarities) or "dTraj" (if d contains trajectory dissimilarities).
trajectories Only if d.type = "dStates". Vector indicating the trajectory or site corresponding to each entry in d.
states Only if d.type = "dStates". Vector of integers indicating the order of the states in d for each trajectory.
edr Vector indicating the EDR to which each trajectory/state in d belongs.
metric A string indicating the dissimilarity index to be used: "dDR" (default), "minDist", "maxDist".
symmetrize String naming the function to be called to symmetrize the resulting dissimilarity matrix ("mean", "min", "max", "lower", "upper"). If NULL (default), the matrix is not symmetrized.
... Only if d.type = "dStates". Further arguments to calculate trajectory dissimilarities. See ecotraj::trajectoryDistances().
Details
The implemented metrics are:
"dDR" $d_{DR}(R_1, R_2) = \frac{1}{n} \sum_{i=1}^{n} d_{TR}(T_{1i}, R_2)$
"minDist" $d_{DRmin}(R_1, R_2) = \min_{i=1}^{n} \{ d_{TR}(T_{1i}, R_2) \}$
"maxDist" $d_{DRmax}(R_1, R_2) = \max_{i=1}^{n} \{ d_{TR}(T_{1i}, R_2) \}$
where $R_1$ and $R_2$ are two EDRs composed of $n$ and $m$ ecological trajectories, respectively, and $d_{TR}(T_{1i}, R_2)$ is the dissimilarity between the trajectory $T_{1i}$ of $R_1$ and the closest trajectory of $R_2$:
$d_{TR}(T_{1i}, R_2) = \min\{ d_T(T_{1i}, T_{21}), ..., d_T(T_{1i}, T_{2m}) \}$
The metrics calculated are not necessarily symmetric. That is, $d_{DR}(R_1, R_2)$ is not necessarily equal to $d_{DR}(R_2, R_1)$. It is possible to symmetrize the returned matrix by indicating the name of the function to be used in symmetrize:
"mean" $d_{DRsym} = \frac{d_{DR}(R_1, R_2) + d_{DR}(R_2, R_1)}{2}$
"min" $d_{DRsym} = \min\{ d_{DR}(R_1, R_2), d_{DR}(R_2, R_1) \}$
"max" $d_{DRsym} = \max\{ d_{DR}(R_1, R_2), d_{DR}(R_2, R_1) \}$
"lower" The lower triangular part of the dissimilarity matrix is used.
"upper" The upper triangular part of the dissimilarity matrix is used.
Value
Matrix including the dissimilarities between every pair of EDRs.
Author(s)
<NAME>
References
<NAME>., <NAME>., <NAME>., <NAME>. 2023. Ecological Dynamic Regimes: Identification, characterization, and comparison. Ecological Monographs. doi:10.1002/ecm.1589
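As a quick, hypothetical illustration of the dDR formula above (dDR_sketch is not part of the package; it assumes dT is an n x m matrix of precomputed trajectory dissimilarities between the trajectories of R1, in rows, and those of R2, in columns):
dDR_sketch <- function(dT) {
  dTR <- apply(dT, 1, min)  # d_TR(T_1i, R2): distance to the closest trajectory of R2
  mean(dTR)                 # average over the n trajectories of R1
}
# Toy cross-dissimilarity matrix: 3 trajectories in R1, 2 in R2
dT <- matrix(c(0.2, 0.5,
               0.7, 0.1,
               0.4, 0.9), nrow = 3, byrow = TRUE)
dDR_sketch(dT)     # dDR(R1, R2)
dDR_sketch(t(dT))  # dDR(R2, R1): generally different, hence the symmetrize argument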
Examples
# Load species abundances and compile in a data frame
abun1 <- EDR_data$EDR1$abundance
abun2 <- EDR_data$EDR2$abundance
abun3 <- EDR_data$EDR3$abundance
abun <- data.frame(rbind(abun1, abun2, abun3))
# Define row names in abun to keep the reference of the EDR, trajectory, and
# state
row.names(abun) <- paste0(abun$EDR, "_", abun$traj, "_", abun$state)
# Calculate dissimilarities between every pair of states
# For example, Bray-Curtis index
dStates <- vegan::vegdist(abun[, -c(1, 2, 3)], method = "bray")
# Use the labels in dStates to define the trajectories to which each state
# belongs
id_traj <- vapply(strsplit(labels(dStates), "_"), function(x){
  paste0(x[1], "_", x[2])
}, character(1))
id_state <- vapply(strsplit(labels(dStates), "_"), function(x){
  as.integer(x[3])
}, integer(1))
id_edr <- vapply(strsplit(labels(dStates), "_"), function(x){
  paste0("EDR", x[1])
}, character(1))
# Calculate dissimilarities between every pair of trajectories
dTraj <- ecotraj::trajectoryDistances(d = dStates, sites = id_traj,
                                      surveys = id_state,
                                      distance.type = "DSPD")
# Use labels in dTraj to identify EDRs
id_edr_traj <- vapply(strsplit(labels(dTraj), "_"), function(x){
  paste0("EDR", x[1])
}, character(1))
# Compute dissimilarities between EDRs:
# 1.1) without symmetrizing the matrix and using state dissimilarities
dEDR <- dist_edr(d = dStates, d.type = "dStates",
                 trajectories = id_traj, states = id_state,
                 edr = id_edr, metric = "dDR", symmetrize = NULL)
# 1.2) without symmetrizing the matrix and using trajectory dissimilarities
dEDR <- dist_edr(d = dTraj, d.type = "dTraj", edr = id_edr_traj,
                 metric = "dDR", symmetrize = NULL)
# 2) symmetrizing by averaging elements on and below the diagonal
dEDR <- dist_edr(d = dTraj, d.type = "dTraj", edr = id_edr_traj,
                 metric = "dDR", symmetrize = "mean")
EDR_data Ecological Dynamic Regime data
Description
Example datasets to characterize and compare EDRs, including abundance data, state, segment, and trajectory dissimilarity matrices for 90 artificial communities belonging to three different EDRs.
Usage
EDR_data
Format
List of three nested sublists ("EDR1", "EDR2", and "EDR3"), each associated with one EDR, including the following elements:
• abundance: Data table with 15 columns and one row for each community state:
– EDR: Integer indicating the identifier of the EDR.
– traj: Integer containing the identifier of the trajectory for each artificial community in the corresponding EDR. Each trajectory represents a different sampling unit.
– state: Integer indicating the observations or states of each community. The sequence of states of a given community forms a trajectory.
– sp1, ..., sp12: Vectors containing species abundances for each community state.
• state_dissim: Object of class dist containing Bray-Curtis dissimilarities between every pair of states in abundance (see Details).
• segment_dissim: Object of class dist containing the dissimilarities between every pair of trajectory segments in abundance (see Details).
• traj_dissim: Object of class dist containing the dissimilarities between every pair of community trajectories in abundance (see Details).
Details
Artificial data was generated following the procedure explained in Box 1 in Sánchez-Pinillos et al. (2023). In particular, the initial state of each community was defined using a hypothetical environmental space with optimal locations for 12 species. Community dynamics were simulated using a general Lotka-Volterra model. State dissimilarities were calculated using the Bray-Curtis metric.
Segment and trajectory dissimilarities were calculated using the package ’ecotraj’.
References
Sánchez-Pinillos, M., <NAME>., <NAME>., <NAME>. 2023. Ecological Dynamic Regimes: Identification, characterization, and comparison. Ecological Monographs. doi:10.1002/ecm.1589
EDR_metrics Metrics of Ecological Dynamic Regimes
Description
Set of metrics to analyze the distribution and variability of trajectories in Ecological Dynamic Regimes (EDR), including dynamic dispersion (dDis), dynamic beta diversity (dBD), and dynamic evenness (dEve).
Usage
dDis(
  d,
  d.type,
  trajectories,
  states = NULL,
  reference,
  w.type = "none",
  w.values,
  ...
)
dBD(d, d.type, trajectories, states = NULL, ...)
dEve(d, d.type, trajectories, states = NULL, w.type = "none", w.values, ...)
Arguments
d Symmetric matrix or object of class dist containing the dissimilarities between each pair of states of all trajectories in the EDR or the dissimilarities between each pair of trajectories. To compute dDis, d needs to include the dissimilarities between all states/trajectories and the states/trajectory of reference.
d.type One of "dStates" (if d contains state dissimilarities) or "dTraj" (if d contains trajectory dissimilarities).
trajectories Vector indicating the trajectory or site corresponding to each entry in d.
states Only if d.type = "dStates". Vector of integers indicating the order of the states in d for each trajectory.
reference Vector of the same class as trajectories and length equal to one, indicating the reference trajectory to compute dDis.
w.type Method used to weight individual trajectories:
• "none": All trajectories are considered equally relevant (default).
• "length": Trajectories are weighted by their length, calculated as the sum of the dissimilarities between every pair of consecutive states. d must contain dissimilarities between trajectory states and d.type = "dStates".
• "size": Trajectories are weighted by their size, calculated as the number of states forming the trajectory. d must contain dissimilarities between trajectory states and d.type = "dStates".
• "precomputed": Trajectories weighted according to different criteria.
w.values Only if w.type = "precomputed". Numeric vector of length equal to the number of trajectories containing the weight of each trajectory.
... Only if d.type = "dStates". Further arguments to calculate trajectory dissimilarities. See ecotraj::trajectoryDistances().
Details
Dynamic dispersion (dDis())
dDis is calculated as the average dissimilarity between each trajectory in an EDR and a target trajectory taken as reference (Sánchez-Pinillos et al., 2023).
$dDis = \frac{\sum_{i=1}^{m} d_{i\alpha}}{m}$
where $d_{i\alpha}$ is the dissimilarity between trajectory $i$ and the trajectory of reference $\alpha$, and $m$ is the number of trajectories.
Alternatively, it is possible to calculate a weighted mean of the dissimilarities by assigning a weight to each trajectory.
$dDis = \frac{\sum_{i=1}^{m} w_i d_{i\alpha}}{\sum_{i=1}^{m} w_i}$
where $w_i$ is the weight assigned to trajectory $i$.
Dynamic beta diversity (dBD())
dBD quantifies the overall variation of the trajectories in an EDR and is equivalent to the average distance to the centroid of the EDR (De Cáceres et al., 2019).
$dBD = \frac{\sum_{i=1}^{m-1} \sum_{j=i+1}^{m} d_{ij}^2}{m(m-1)}$
Dynamic evenness (dEve())
dEve quantifies the regularity with which an EDR is filled by the individual trajectories (Sánchez-Pinillos et al., 2023).
$dEve = \frac{\sum_{l=1}^{m-1} \min\left( \frac{d_{ij}}{\sum_{l=1}^{m-1} d_{ij}}, \frac{1}{m-1} \right) - \frac{1}{m-1}}{1 - \frac{1}{m-1}}$
where $d_{ij}$ is the dissimilarity between trajectories $i$ and $j$ linked in a minimum spanning tree by the link $l$.
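To make the formula concrete, here is a minimal base-R sketch of the unweighted dEve (dEve_sketch is a hypothetical helper, not part of ecoregime; a simple Prim's algorithm stands in for whatever minimum-spanning-tree routine the package uses internally):
dEve_sketch <- function(d) {
  d <- as.matrix(d)
  m <- nrow(d)
  in_tree <- 1
  links <- numeric(0)             # dissimilarities d_ij of the m - 1 MST links
  while (length(in_tree) < m) {
    out <- setdiff(seq_len(m), in_tree)
    sub <- d[in_tree, out, drop = FALSE]
    links <- c(links, min(sub))   # shortest link leaving the current tree
    in_tree <- c(in_tree, out[which(sub == min(sub), arr.ind = TRUE)[1, 2]])
  }
  pew <- links / sum(links)       # partial evenness of each MST link
  (sum(pmin(pew, 1 / (m - 1))) - 1 / (m - 1)) / (1 - 1 / (m - 1))
}
# Toy check with 10 random "trajectories"; values close to 1 indicate an
# evenly filled regime
set.seed(1)
dEve_sketch(dist(matrix(rnorm(20), ncol = 2)))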
Optionally, it is possible to weight the trajectories of the EDR. In that case, dEve becomes analogous to the functional evenness index proposed by Villéger et al. (2008).
$dEve_w = \frac{\sum_{l=1}^{m-1} \min\left( \frac{EW_{ij}}{\sum_{l=1}^{m-1} EW_{ij}}, \frac{1}{m-1} \right) - \frac{1}{m-1}}{1 - \frac{1}{m-1}}$
where $EW_{ij}$ is the weighted evenness:
$EW_{ij} = \frac{d_{ij}}{w_i + w_j}$
Value
• dDis() returns the value of dynamic dispersion for a given trajectory taken as a reference.
• dBD() returns the value of dynamic beta diversity.
• dEve() returns the value of dynamic evenness.
Author(s)
<NAME>
References
<NAME>, M, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME> & <NAME>. (2019). Trajectory analysis in community ecology. Ecological Monographs.
<NAME>., <NAME>., <NAME>., <NAME>. 2023. Ecological Dynamic Regimes: Identification, characterization, and comparison. Ecological Monographs. doi:10.1002/ecm.1589
<NAME>., <NAME>., <NAME>. (2008) New multidimensional functional diversity indices for a multifaceted framework in functional ecology. Ecology.
Examples
# Data to compute dDis, dBD, and dEve
dStates <- EDR_data$EDR1$state_dissim
dTraj <- EDR_data$EDR1$traj_dissim
trajectories <- paste0("T", EDR_data$EDR1$abundance$traj)
states <- EDR_data$EDR1$abundance$state
# Dynamic dispersion taking the first trajectory as reference
dDis(d = dTraj, d.type = "dTraj", trajectories = unique(trajectories),
     reference = "T1")
# Dynamic dispersion weighting trajectories by their length
dDis(d = dStates, d.type = "dStates", trajectories = trajectories,
     states = states, reference = "T1", w.type = "length")
# Dynamic beta diversity using trajectory dissimilarities
dBD(d = dTraj, d.type = "dTraj", trajectories = unique(trajectories))
# Dynamic evenness
dEve(d = dStates, d.type = "dStates", trajectories = trajectories,
     states = states)
# Dynamic evenness considering that the 10 first trajectories are three times
# more relevant than the rest
w.values <- c(rep(3, 10), rep(1, length(unique(trajectories)) - 10))
dEve(d = dTraj, d.type = "dTraj", trajectories = unique(trajectories),
     w.type = "precomputed", w.values = w.values)
plot.RETRA Plot representative trajectories of Ecological Dynamic Regimes
Description
Plot representative trajectories of an Ecological Dynamic Regime (EDR) in the state space, distinguishing between the segments belonging to real trajectories of the EDR and the artificial links between segments.
Usage
## S3 method for class 'RETRA'
plot(
  x,
  d,
  trajectories,
  states,
  select_RT = NULL,
  traj.colors = NULL,
  RT.colors = NULL,
  sel.color = NULL,
  link.color = NULL,
  link.lty = 2,
  axes = c(1, 2),
  ...
)
Arguments
x Object of class RETRA.
d Symmetric matrix or dist object containing the dissimilarities between each pair of states of all trajectories in the EDR, or data frame containing the coordinates of all trajectory states in an ordination space.
trajectories Vector indicating the trajectory or site to which each state in d belongs.
states Vector of integers indicating the order of the states in d for each trajectory.
select_RT Optional string indicating the name of a representative trajectory that must be highlighted in the plot. By default (select_RT = NULL), all representative trajectories are represented with the same color.
traj.colors Specification for the color of all individual trajectories (defaults to "grey") or a vector with length equal to the number of trajectories indicating the color for each individual trajectory.
RT.colors Specification for the color of representative trajectories (defaults to "black").
sel.color Specification for the color of the selected representative trajectory (defaults to "red"). Only if !is.null(select_RT).
link.color Specification for the color of the links between trajectory segments forming representative trajectories. By default, the same color as RT.colors is used.
link.lty The line type of the links between trajectory segments forming representative trajectories. Defaults to 2 = "dashed" (see graphics::par).
axes An integer vector indicating the pair of axes in the ordination space to be plotted.
... Arguments for generic plot().
Value
The function plot() plots a set of individual trajectories and the representative trajectories in an ordination space defined through d or calculated by applying metric multidimensional scaling (mMDS; Borg and Groenen, 2005) to d.
Author(s)
<NAME>
References
<NAME>., & <NAME>. (2005). Modern Multidimensional Scaling (2nd ed.). Springer.
<NAME>., <NAME>., <NAME>., <NAME>. 2023. Ecological Dynamic Regimes: Identification, characterization, and comparison. Ecological Monographs. doi:10.1002/ecm.1589
See Also
retra_edr() for identifying representative trajectories in EDRs applying RETRA-EDR.
define_retra() for defining representative trajectories from a subset of segments or trajectory features.
summary() for summarizing representative trajectories in EDRs.
Examples
# Example 1 -----------------------------------------------------------------
# d contains the dissimilarities between trajectory states
d <- EDR_data$EDR1$state_dissim
# trajectories and states are defined according to `d` entries.
trajectories <- EDR_data$EDR1$abundance$traj
states <- EDR_data$EDR1$abundance$state
# x defined from retra_edr(). We obtain three representative trajectories.
RT <- retra_edr(d = d, trajectories = trajectories, states = states,
                minSegs = 5)
summary(RT)
# Plot individual trajectories in blue and representative trajectories in orange,
# "T2" will be displayed in green. Artificial links will be displayed with a
# dotted line.
plot(x = RT, d = d, trajectories = trajectories, states = states,
     select_RT = "T2", traj.colors = "lightblue", RT.colors = "orange",
     sel.color = "darkgreen", link.lty = 3,
     main = "Representative trajectories in EDR1")
# Example 2 -----------------------------------------------------------------
# d contains the coordinates in an ordination space. For example, we use
# the coordinates of the trajectory states after applying a principal component
# analysis (PCA) to an abundance matrix.
abun <- EDR_data$EDR1$abundance
pca <- prcomp(abun[, -c(1:3)])
coord <- data.frame(pca$x)
# trajectories and states are defined according to the abundance matrix
# used in the PCA
trajectories <- EDR_data$EDR1$abundance$traj
states <- EDR_data$EDR1$abundance$state
# Instead of using the representative trajectories obtained from `retra_edr()`,
# we will define the set of trajectories that we want to highlight. For example,
# we can select the trajectories whose initial and final states are in the
# extremes of the first axis.
T1 <- trajectories[which.max(coord[, 1])]
T2 <- trajectories[which.min(coord[, 1])]
RT_traj <- c(trajectories[trajectories %in% T1],
             trajectories[trajectories %in% T2])
RT_states <- c(states[which(trajectories %in% T1)],
               states[which(trajectories %in% T2)])
# Create a data frame to generate a RETRA object using define_retra
RT_df <- data.frame(RT = c(rep("T1", sum(trajectories %in% T1)),
                           rep("T2", sum(trajectories %in% T2))),
                    RT_traj = RT_traj,
                    RT_states = as.integer(RT_states))
RT_retra <- define_retra(data = RT_df)
# Plot the defined trajectories with the default graphic values
plot(x = RT_retra, d = coord, trajectories = trajectories, states = states,
     main = "Extreme trajectories in EDR1")
retra_edr Representative trajectories in Ecological Dynamic Regimes (RETRA-EDR)
Description
retra_edr() applies the algorithm RETRA-EDR (Sánchez-Pinillos et al., 2023) to identify representative trajectories summarizing the main dynamical patterns of an Ecological Dynamic Regime (EDR).
Usage
retra_edr(
  d,
  trajectories,
  states,
  minSegs,
  dSegs = NULL,
  coordSegs = NULL,
  traj_Segs = NULL,
  state1_Segs = NULL,
  state2_Segs = NULL,
  Dim = NULL,
  eps = 0
)
Arguments
d Either a symmetric matrix or an object of class dist containing the dissimilarities between each pair of states of all trajectories in the EDR.
trajectories Vector indicating the trajectory or site to which each state in d belongs.
states Vector of integers indicating the order of the states in d for each trajectory.
minSegs Integer indicating the minimum number of segments in a region of the EDR represented by a segment of the representative trajectory.
dSegs Either a symmetric matrix or an object of class dist containing the dissimilarities between every pair of trajectory segments (see Details).
coordSegs Matrix containing the coordinates of trajectory segments (rows) in each axis (columns) of an ordination space (see Details).
traj_Segs Vector indicating the trajectory to which each segment in dSegs and/or coordSegs belongs. Only required if dSegs or coordSegs are not NULL.
state1_Segs Vector indicating the initial state of each segment in dSegs and/or coordSegs according to the values given in states. Only required if dSegs or coordSegs are not NULL.
state2_Segs Vector indicating the final state of each segment in dSegs and/or coordSegs according to the values given in states. Only required if dSegs or coordSegs are not NULL.
Dim Optional integer indicating the number of axes considered to partition the segment space and generate a k-d tree. By default (Dim = NULL), all axes are considered.
eps Numeric value indicating the minimum length in the axes of the segment space to be partitioned when the k-d tree is generated. If eps = 0 (default), partitions are made regardless of the size.
Details
The algorithm RETRA-EDR is based on a partition-and-group approach by which it identifies regions densely crossed by ecological trajectories in an EDR, selects a representative segment in each dense region, and joins the representative segments by a set of artificial links to generate a network of representative trajectories. For that, RETRA-EDR splits the trajectories of the EDR into segments and uses an ordination space generated from a matrix containing the dissimilarities between trajectory segments. Dense regions are identified by applying a k-d tree to the ordination space.
By default, RETRA-EDR calculates segment dissimilarities following the approach by De Cáceres et al.
(2019) and applies metric multidimensional scaling (mMDS, Borg and Groenen, 2005) to generate the ordination space. It is possible to use other dissimilarity metrics and/or ordination methods and reduce the computational time by indicating the dissimilarity matrix and the coordinates of the segments in the ordination space through the arguments dSegs and coordSegs, respectively.
• If !is.null(dSegs) and is.null(coordSegs), RETRA-EDR is computed by applying mMDS to dSegs.
• If !is.null(dSegs) and !is.null(coordSegs), RETRA-EDR is directly computed from the coordinates provided in coordSegs and representative segments are identified using dSegs. coordSegs should be calculated by the user from dSegs.
• If is.null(dSegs) and !is.null(coordSegs) (not recommended), RETRA-EDR is directly computed from the coordinates provided in coordSegs. As dSegs is not provided, retra_edr() assumes that the ordination space is metric and identifies representative segments using the Euclidean distance.
Value
The function retra_edr() returns an object of class RETRA, which is a list of length equal to the number of representative trajectories identified. For each trajectory, the following information is returned:
minSegs Value of the minSegs parameter.
Segments Vector of strings including the sequence of segments forming the representative trajectory. Each segment is identified by a string of the form traj[st1-st2], where traj is the identifier of the original trajectory to which the segment belongs and st1 and st2 are identifiers of the initial and final states defining the segment.
Size Numeric value indicating the number of states forming the representative trajectory.
Length Numeric value indicating the length of the representative trajectory, calculated as the sum of the dissimilarities in d between every pair of consecutive states.
Link_distance Data frame of two columns indicating artificial links between representative segments (Link) and the dissimilarity between the connected states (Distance). When two representative segments are linked by a common state or by two consecutive states of the same trajectory, the link distance is zero or equal to the length of a real segment, respectively. In both cases, the link is not considered in the returned data frame.
Seg_density Data frame of two columns and one row for each representative segment. Density contains the number of segments in the EDR that are represented by each segment of the representative trajectory. kdTree_depth contains the depth of the k-d tree for each leaf represented by the corresponding segment. That is, the number of partitions of the ordination space until finding a region with minSegs segments or less.
Author(s)
<NAME>
References
<NAME>., & Groenen, <NAME>. (2005). Modern Multidimensional Scaling (2nd ed.). Springer.
<NAME>, M, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME> & <NAME>. (2019). Trajectory analysis in community ecology. Ecological Monographs.
<NAME>., <NAME>., <NAME>., <NAME>. 2023. Ecological Dynamic Regimes: Identification, characterization, and comparison. Ecological Monographs. doi:10.1002/ecm.1589
See Also
summary() for summarizing the characteristics of the representative trajectories.
plot() for plotting representative trajectories in an ordination space representing the state space of the EDR.
define_retra() for defining representative trajectories from a subset of segments or trajectory features.
Examples
# Example 1 -----------------------------------------------------------------
# Identify representative trajectories from state dissimilarities
# Calculate state dissimilarities (Bray-Curtis) from species abundances
abundance <- data.frame(EDR_data$EDR1$abundance)
d <- vegan::vegdist(abundance[, -c(1:3)], method = "bray")
# Identify the trajectory (or site) and states in d
trajectories <- abundance$traj
states <- as.integer(abundance$state)
# Compute RETRA-EDR
RT1 <- retra_edr(d = d, trajectories = trajectories, states = states,
                 minSegs = 5)
# Example 2 -----------------------------------------------------------------
# Identify representative trajectories from segment dissimilarities
# Calculate segment dissimilarities using the Hausdorff distance
dSegs <- ecotraj::segmentDistances(d = d, sites = trajectories,
                                   surveys = states,
                                   distance.type = "Hausdorff")
dSegs <- dSegs$Dseg
# Identify the trajectory (or site) and states in dSegs:
# Split the labels of dSegs (traj[st1-st2]) into traj, st1, and st2
seg_components <- strsplit(gsub("\\]", "", gsub("\\[", "-", labels(dSegs))), "-")
traj_Segs <- sapply(seg_components, "[", 1)
state1_Segs <- as.integer(sapply(seg_components, "[", 2))
state2_Segs <- as.integer(sapply(seg_components, "[", 3))
# Compute RETRA-EDR
RT2 <- retra_edr(d = d, trajectories = trajectories, states = states,
                 minSegs = 5, dSegs = dSegs, traj_Segs = traj_Segs,
                 state1_Segs = state1_Segs, state2_Segs = state2_Segs)
summary.RETRA Summarize representative trajectories
Description
Summarize the properties of representative trajectories returned by retra_edr() or define_retra().
Usage
## S3 method for class 'RETRA'
summary(object, ...)
Arguments
object An object of class RETRA.
... (not used)
Value
Data frame with nine columns and one row for each representative trajectory in object. The columns in the returned data frame contain the following information:
ID Identifier of the representative trajectories.
Size Number of states forming each representative trajectory.
Length Sum of the dissimilarities in d between every pair of consecutive states forming the representative trajectories.
Avg_link Mean value of the dissimilarities between consecutive states of the representative trajectories that do not belong to the same ecological trajectory or site (i.e., artificial links).
Sum_link Sum of the dissimilarities between consecutive states of the representative trajectories that do not belong to the same ecological trajectory or site (i.e., artificial links).
Avg_density Mean value of the number of segments represented by each segment of the representative trajectory (excluding artificial links).
Max_density Maximum number of segments represented by at least one of the segments of the representative trajectory (excluding artificial links).
Avg_depth Mean value of the k-d tree depths, that is, the number of partitions of the ordination space until finding a region with minSegs segments or less.
Max_depth Maximum depth in the k-d tree, that is, the number of partitions of the ordination space until finding a region with minSegs segments or less.
See Also
retra_edr() for identifying representative trajectories in EDRs applying RETRA-EDR.
define_retra() for generating an object of class RETRA from trajectory features.
Examples
# Apply RETRA-EDR to identify representative trajectories
d <- EDR_data$EDR1$state_dissim
trajectories <- EDR_data$EDR1$abundance$traj
states <- EDR_data$EDR1$abundance$state
RT <- retra_edr(d = d, trajectories = trajectories, states = states,
                minSegs = 5)
# Summarize the properties of the representative trajectories in a data frame
summary(RT)
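As a brief, hypothetical follow-up (not part of the package examples), the returned data frame can be used to rank the representative trajectories, for instance by how densely they represent the EDR:
# Rank representative trajectories by average segment density;
# rt_summary is a hypothetical name for the summary data frame above
rt_summary <- summary(RT)
rt_summary[order(rt_summary$Avg_density, decreasing = TRUE), ]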
twital
readthedoc
HTML
Twital 1.0-alpha documentation
---
What is Twital?[¶](#what-is-twital)
===
Twital is a small “plugin” for [Twig](http://twig.sensiolabs.org/) (a template engine for PHP) that adds some shortcuts and makes Twig’s syntax more suitable for HTML-based (XML, HTML5, XHTML, SGML) templates. Twital takes inspiration from [PHPTal](http://phptal.org/), [TAL](http://en.wikipedia.org/wiki/Template_Attribute_Language) and [AngularJS](http://angularjs.org/) (just for some aspects), mixing their language syntaxes with the powerful Twig templating engine system.
To better understand Twital’s benefits, consider the following **Twital** template, which simply shows a list of users from an array:
```
<ul t:if="users">
    <li t:for="user in users">
        {{ user.name }}
    </li>
</ul>
```
To do the same thing using Twig, you need:
```
{% if users %}
<ul>
    {% for user in users %}
    <li>
        {{ user.name }}
    </li>
    {% endfor %}
</ul>
{% endif %}
```
As you can see, the Twital template is **more readable**, **less verbose**, and **you don’t have to worry about opening and closing block instructions** (they are inherited from the HTML structure).
One of the main advantages of Twital is the *implicit* presence of control statements, which makes templates more readable and less verbose. Furthermore, it has all Twig functionalities, such as template inheritance, translations, looping, filtering, escaping, etc. Here you can find a [complete list of Twital attributes and nodes](index.html#document-tags/index).
If some Twig functionality is not directly available for Twital, you can **freely mix Twig and Twital** syntaxes. In the example below, we have mixed Twital and Twig syntaxes to use Twig custom tags:
```
<h1 t:if="users">
    {% custom_tag %}
        {{ someUnsafeVariable }}
    {% endcustom_tag %}
</h1>
```
Installation[¶](#installation)
---
There are two recommended ways to install Twital via [Composer](https://getcomposer.org/):
* using the `composer require` command:
```
composer require 'goetas/twital:0.1.*'
```
* adding the dependency to your `composer.json` file:
```
"require": {
    ..
    "goetas/twital":"0.1.*",
    ..
}
```
Getting started[¶](#getting-started)
---
First, you have to create a file that contains your template (named for example `demo.twital.html`):
```
<div t:if="name">
    Hello {{ name }}
</div>
```
Afterwards, you have to create a PHP script that instantiates the required objects:
```
<?php
require_once '/path/to/composer/vendor/autoload.php';

use Goetas\Twital\TwitalLoader;

$fileLoader = new Twig_Loader_Filesystem('/path/to/templates');
$twitalLoader = new TwitalLoader($fileLoader);
$twig = new Twig_Environment($twitalLoader);
echo $twig->render('demo.twital.html', array('name' => 'John'));
```
That’s all!
Note
Since Twital uses Twig to compile and render templates, their performance is the same.
Contents[¶](#contents)
---
### Tags reference[¶](#tags-reference)
#### `if`[¶](#if)
The Twital instruction for Twig’s `if` tag is the `t:if` attribute.
```
<p t:if="online == false">
    Our website is in maintenance mode. Please, come back later.
</p>
```
`elseif` and `else` are not *well* supported, but you can always combine Twital with Twig.
```
<p t:if="online_users > 0">
    {% if online_users == 1 %} one user {% else %} {{ online_users }} users {% endif %}
</p>
```
But if you are really interested in using `elseif` and `else` tags with Twital, you can do it anyway.
```
<p t:if="online">
    I'm online
</p>
<p t:elseif="invisible">
    I'm invisible
</p>
<p t:else="">
    I'm offline
</p>
```
This syntax will work only if there are no non-space characters between the `p` tags. These examples will not work:
```
<p t:if="online">
    I'm online
</p>
<hr />
<p t:else="">
    I'm offline
</p>
```
```
<p t:if="online">
    I'm online
</p>
some text...
<p t:else="">
    I'm offline
</p>
```
Note
To learn more about the Twig `if` tag, please refer to [Twig official documentation](http://twig.sensiolabs.org/doc/tags/if.html).
#### `for`[¶](#for)
The Twital instruction for Twig’s `for` tag is the `t:for` attribute. Loop over each item in a sequence. For example, to display a list of users provided in a variable called `users`:
```
<h1>Members</h1>
<ul>
    <li t:for="user in users">
        {{ user.username }}
    </li>
</ul>
```
Note
For more information about the `for` tag, please refer to [Twig official documentation](http://twig.sensiolabs.org/doc/tags/for.html).
#### `set`[¶](#set)
The Twital instruction for Twig’s `set` tag is the `t:set` attribute. You can use `set` to assign variables. The syntax to use the `set` attribute is:
```
<p t:set=" name = 'tommy' ">Hello {{ name }}</p>

<p t:set=" foo = {'foo': 'bar'} ">Hello {{ foo.foo }}</p>

<p t:set=" name = 'tommy', surname='math' ">
    Hello {{ name }} {{ surname }}
</p>
```
Note
For more information about the `set` tag, please refer to [Twig official documentation](http://twig.sensiolabs.org/doc/tags/set.html).
#### `block`[¶](#block)
The Twital instruction for Twig’s `block` tag is the `t:block` node. To see how to use it, consider the following base template named `layout.html.twital`:
```
<html>
    <head>
        <title>Hello world!</title>
    </head>
    <body t:block="content">
        Hello!
    </body>
</html>
```
To improve the greeting message, we can extend it using the `t:extends` node, so we can create a new template called `hello.html.twital`.
```
<t:extends from="layout.html.twital">
    <t:block name="content">
        Hello {{name}}!
    </t:block>
</t:extends>
```
As you can see, we have overwritten the content of the `content` block with a new one. To do this, we have used a `t:block` node. Of course, if needed, you can also **call the parent block** from inside. It is simple:
```
<t:extends from="layout.html.twital">
    <t:block name="content">
        {{parent()}} Hello {{name}}!
    </t:block>
</t:extends>
```
Note
To learn more about template inheritance, you can read the [Twig official documentation](http://twig.sensiolabs.org/doc/tags/block.html)
#### `extends`[¶](#extends)
The Twital instruction for Twig’s `extends` tag is the `t:extends` node. To see how to use it, take a look at this example: Consider the following base template named `layout.html.twital`. Here we are creating a simple page that says hello to someone. With the t:block attribute we mark the body content as extensible.
```
<html>
    <head>
        <title>Hello world!</title>
    </head>
    <body>
        <div t:block="content">
            Hello!
        </div>
    </body>
</html>
```
To improve the greeting message, we can extend it using the `t:extends` node, so we can create a new template called `hello.html.twital`.
```
<t:extends from="layout.html.twital">
    <t:block name="content">
        Hello {{name}}!
    </t:block>
</t:extends>
```
As you can see, we have overwritten the content of the `content` block with a new one. To do this, we have used a `t:block` node.
You can also **extend a Twig template**, so you can mix Twig and Twital templates.
```
<t:extends from="layout.twig">
    <t:block name="content">
        Hello {{name}}!
    </t:block>
</t:extends>
```
Sometimes it’s useful to obtain the layout **template name from a variable**: to do this, you have to add the Twital namespace to the attribute name:
```
<t:extends t:from="layoutVar">
    <t:block name="content">
        Hello {{name}}!
    </t:block>
</t:extends>
```
Now `hello.html.twital` can inherit dynamically from different templates, and the template name can be any valid Twig expression.
Note
To learn more about template inheritance, you can read the [Twig official documentation](http://twig.sensiolabs.org/doc/tags/extends.html).
#### `embed`[¶](#embed)
The Twital instruction for Twig’s `embed` tag is the `t:embed` node. The embed tag combines the behaviour of include and extends. It allows you to include another template’s contents, just like include does, but it also allows you to override any block defined inside the included template, like when extending a template. To learn about the usefulness of embed, you can read the official documentation.
Now, let’s see how to use it; take a look at this example:
```
<t:embed from="teasers_skeleton.html.twital">
    <t:block name="left_teaser">
        Some content for the left teaser box
    </t:block>
    <t:block name="right_teaser">
        Some content for the right teaser box
    </t:block>
</t:embed>
```
You can add additional variables by passing them after the `with` attribute:
```
<t:embed from="header.html" with="{'foo': 'bar'}">
    ...
</t:embed>
```
You can disable the access to the current context by using the `only` attribute:
```
<t:embed from="header.html" with="{'foo': 'bar'}" only="true">
    ...
</t:embed>
```
You can mark an embed with the `ignore-missing` attribute, in which case Twital will ignore the statement if the template to be included does not exist.
```
<t:embed from="header.html" with="{'foo': 'bar'}" ignore-missing="true">
    ...
</t:embed>
```
`ignore-missing` can not be an expression; it has to be evaluated only at compile time.
To use Twig expressions as the template name, you have to use a namespace prefix on the ‘from’ attribute:
```
<t:embed t:from="ajax ? 'ajax.html' : 'not_ajax.html' ">
    ...
</t:embed>

<t:embed t:from="['one.html','two.html']">
    ...
</t:embed>
```
Note
For more information about the `embed` tag, please refer to [Twig official documentation](http://twig.sensiolabs.org/doc/tags/embed.html).
See also
[include](index.html#document-tags/include)
#### `include`[¶](#include)
The `include` statement includes a template and returns the rendered content of that file into the current namespace:
```
<t:include from="header.html"/>
Body
<t:include from="footer.html"/>
```
A slightly different syntax to include a template is:
```
<div class="content" t:include="news.html">
    <h1>Fake news content</h1>
    <p>Lorem ipsum</p>
</div>
```
In this case, the content of the div will be replaced with the content of the template ‘news.html’.
You can add additional variables by passing them after the `with` attribute:
```
<t:include from="header.html" with="{'foo': 'bar'}"/>
```
You can disable the access to the current context by using the `only` attribute:
```
<t:include from="header.html" with="{'foo': 'bar'}" only="true"/>
```
You can mark an include with the `ignore-missing` attribute, in which case Twital will ignore the statement if the template to be included does not exist.
```
<t:include from="header.html" with="{'foo': 'bar'}" ignore-missing="true"/>
```
`ignore-missing` can not be an expression; it has to be evaluated only at compile time.
To use Twig expressions as the template name, you have to use a namespace prefix on the ‘from’ attribute:
```
<t:include t:from="ajax ? 'ajax.html' : 'not_ajax.html' " />

<t:include t:from="['one.html','two.html']" />
```
Note
For more information about the `include` tag, please refer to [Twig official documentation](http://twig.sensiolabs.org/doc/tags/include.html).
#### `import`[¶](#import)
The Twital instruction for Twig’s `import` tag is the `t:import` node. Twig supports putting often used code into [macros](index.html#document-tags/macro). These macros can go into different templates and get imported from there.
There are two ways to import templates: (1) you can import the complete template into a variable or (2) request specific macros from it.
Imagine that we have a helper module that renders forms (called `forms.html`):
```
<t:macro name="input" args="name, value, type">
    <input type="{{ type|default('text') }}" name="{{ name }}" value="{{ value|e }}" />
</t:macro>

<t:macro name="textarea" args="name, value">
    <textarea name="{{ name }}">{{ value|e }}</textarea>
</t:macro>
```
To use your macro, you can do something like this:
```
<t:import from="forms.html" alias="forms"/>
<dl>
    <dt>Username</dt>
    <dd>{{ forms.input('username') }}</dd>
    <dt>Password</dt>
    <dd>{{ forms.input('password', null, 'password') }}</dd>
    {{ forms.textarea('comment') }}
</dl>
```
If you want to import your macros directly into your template (without referring to them with a variable):
```
<t:import from="forms.html" as="input as input_field, textarea"/>
<dl>
    <dt>Username</dt>
    <dd>{{ input_field('username') }}</dd>
    <dt>Password</dt>
    <dd>{{ input_field('password', '', 'password') }}</dd>
</dl>
<p>{{ textarea('comment') }}</p>
```
Tip
To import macros from the current file, use the special `_self` variable for the source.
Note
For more information about the `import` tag, please refer to [Twig official documentation](http://twig.sensiolabs.org/doc/tags/import.html).
See also
[macro](index.html#document-tags/macro)
#### `macro`[¶](#macro)
The Twital instruction for Twig’s `macro` tag is the `t:macro` node. To declare a macro inside Twital, the syntax is:
```
<t:macro name="input" args="value, type, size">
    <input type="{{ type|default('text') }}" name="{{ name }}" value="{{ value|e }}" size="{{ size|default(20) }}" />
</t:macro>
```
To use a macro inside your Twital template, take a look at the [import](index.html#document-tags/import) attribute.
Note
For more information about the `macro` tag, please refer to [Twig official documentation](http://twig.sensiolabs.org/doc/tags/macro.html).
#### `use`[¶](#use)
The Twital instruction for Twig’s `use` tag is the `t:use` node. This is a feature that allows horizontal reuse of templates. To learn more about it, you can read the official documentation. Let’s see how it works:
```
<t:use from="bars.html"/>

<t:block name="sidebar">
    ...
</t:block>
```
You can create aliases for blocks inside the “used” template to avoid name conflicts:
```
<t:extends from="layout.html.twig">
    <t:use from="bars.html" aliases="sidebar as sidebar_original, footer as old_footer"/>

    <t:block name="sidebar">
        {{ block('sidebar_original') }}
    </t:block>
</t:extends>
```
Note
For more information about the `use` tag, please refer to [Twig official documentation](http://twig.sensiolabs.org/doc/tags/use.html).
#### `sandbox`[¶](#sandbox)
The Twital instruction for Twig’s `sandbox` tag is the `t:sandbox` node or the `t:sandbox` attribute.
The `sandbox` tag can be used to enable the sandboxing mode for an included template, when sandboxing is not enabled globally for the Twig environment:
```
<t:sandbox>
    {% include 'user.html' %}
</t:sandbox>

<div t:sandbox="">
    {% include 'user.html' %}
</div>
```
Note
For more information about the `sandbox` tag, please refer to [Twig official documentation](http://twig.sensiolabs.org/doc/tags/sandbox.html).
#### `autoescape`[¶](#autoescape)
The Twital instruction for Twig’s `autoescape` tag is the `t:autoescape` attribute. Whether automatic escaping is enabled or not, you can mark a section of a template to be escaped or not by using the `autoescape` tag. To see how to use it, take a look at this example:
```
<div t:autoescape="true">
    Everything will be automatically escaped in this block using the HTML strategy
</div>

<div t:autoescape="html">
    Everything will be automatically escaped in this block using the HTML strategy
</div>

<div t:autoescape="js">
    Everything will be automatically escaped in this block using the js escaping strategy
</div>

<div t:autoescape="false">
    Everything will be outputted as is in this block
</div>
```
When automatic escaping is enabled, everything is escaped by default, except for values explicitly marked as safe. Those can be marked in the template by using the Twig `raw` filter:
```
<div t:autoescape="false">
    {{ safe_value|raw }}
</div>
```
#### `capture`[¶](#capture)
This attribute acts as a `set` tag and allows you to ‘capture’ chunks of text into a variable:
```
<div id="pagination" t:capture="foo">
    ... any content ...
</div>
```
All contents inside the “pagination” div will be captured and saved inside a variable named foo.
Note
For more information about the `set` tag, please refer to [Twig official documentation](http://twig.sensiolabs.org/doc/tags/set.html).
#### `filter`[¶](#filter)
The Twital instruction for Twig’s `filter` tag is the `t:filter` attribute. To see how to use it, take a look at this example:
```
<div t:filter="upper">
    This text becomes uppercase
</div>

<div t:filter="upper|escape">
    This text becomes uppercase and escaped
</div>
```
Note
To learn more about the filter tag, you can read the [Twig official documentation](http://twig.sensiolabs.org/doc/tags/filter.html).
#### `spaceless`[¶](#spaceless)
The Twital instruction for Twig’s `spaceless` tag is the `t:spaceless` node or the `t:spaceless` attribute.
```
<t:spaceless>
    {% include 'user.html' %}
</t:spaceless>

<div t:spaceless="">
    {% include 'user.html' %}
</div>
```
Note
For more information about the `spaceless` tag, please refer to [Twig official documentation](http://twig.sensiolabs.org/doc/tags/spaceless.html).
#### `omit`[¶](#omit)
This attribute asks the Twital parser to ignore the element’s opening and closing tags; its content will still be evaluated.
```
<a href="/private" t:omit="false">
    {{ username }}
</a>

<t:omit>
    {{ username }}
</t:omit>
```
This attribute is useful when you want to create an element optionally, e.g. hide a link if a certain condition is met.
#### `attr`[¶](#attr)
Twital allows you to create HTML/XML attributes in a very simple way. You do not have to mess up with control structures inside HTML tags. Let’s see how it works:
```
<div t:attr=" condition ? class='header'">
    My Company
</div>
```
Here we conditionally add an attribute based on the value of the condition expression. You can use any Twig test expression as **condition** and **attribute value**, but the attribute name must be a literal.
```
<div t:attr=" users | length ? class='header'|upper ,
              item in array ?
              class=item">
    The last class whose condition evaluates to true wins.
</div>
```
When not needed, you can omit the condition instruction.
```
<div t:attr="class='row'">
    Class will be "row"
</div>
```
Tip
See also [attr-append](index.html#document-tags/attr-append).
To set an HTML5 boolean attribute, just use booleans as `true` or `false`.
```
<option t:attr="selected=true">
    My Company
</option>
```
The previous template will be rendered as:
```
<option selected>
    My Company
</option>
```
Note
Since XML does not have the concept of “boolean attributes”, this feature may break your output if you are using XML.
To remove an already defined attribute, use `false` as the attribute value:
```
<div class="foo" t:attr="class=false">
    My Company
</div>
```
The previous template will be rendered as:
```
<div>
    My Company
</div>
```
#### `attr-append`[¶](#attr-append)
Twital allows you to create HTML/XML attributes in a very simple way. t:attr-append is a different version of t:attr: it allows you to append content to existing attributes instead of replacing them.
```
<div class="row" t:attr-append=" condition ? class=' even'">
    class will be "row even" if 'condition' is true.
</div>
```
In the same way as t:attr, the condition and the attribute value can be any valid Twig expression.
```
<div class="row" t:attr-append=" i mod 2 ? class=' even'|upper">
    class will be "row EVEN" if 'i' is odd.
</div>
```
When not needed, you can omit the condition instruction.
```
<div class="row" t:attr-append=" class=' even'">
    Class will be "row even"
</div>
```
#### `content`[¶](#content)
This attribute allows you to replace the content of a node with the content of a variable. Suppose you have a variable `foo` with the value `My name is John` and the following template:
```
<div id="pagination" t:content="foo">
    This <b>content</b> will be removed
</div>
```
The output will be:
```
<div id="pagination">My name is John</div>
```
This can be useful to put some “test” content in your templates that will look nice in WYSIWYG editors, but at runtime will be replaced by real data coming from variables.
#### `replace`[¶](#replace)
This attribute acts in a similar way to the `content` attribute; instead of replacing the content of a node, it replaces the node itself. Suppose you have a variable `foo` with the value `My name is John` and the following template:
```
<div id="pagination" t:replace="foo">
    This <b>content</b> will be removed
</div>
```
The output will be:
```
My name is John
```
This can be useful to put some “test” content in your templates that will look nice in WYSIWYG editors, but at runtime will be replaced by real data coming from variables.
### Twital for Template Designers[¶](#twital-for-template-designers)
This document gives you an overview of Twital principles: how to write a template and what to bear in mind for making it work well.
#### Introduction[¶](#introduction)
A template is simply a text file. Twital can generate any HTML/XML format. To make it work, your templates must match the configured file extension. By default, Twital compiles only templates whose name ends with `.twital.xml`, `.twital.html`, `.twital.xhtml` (using respectively XML, HTML5 and XHTML rules to format the output).
A Twital template is basically a Twig template that takes advantage of the natural HTML/XML tree structure (avoiding redundant control flow instructions). All expressions are completely Twig compatible; control flow structures (Twig calls them *tags*) are just replaced by some Twital *tags* or *attributes*.
Here is a minimal template that illustrates a few basics:
```
<!DOCTYPE html>
<html>
    <head>
        <title>My Webpage</title>
    </head>
    <body>
        <ul id="navigation">
            <li t:for="item in navigation">
                <a href="{{ item.href }}">{{ item.caption }}</a>
            </li>
        </ul>

        <h1>My Webpage</h1>
        {{ a_variable }}
    </body>
</html>
```
Tip
See [here](index.html#document-api) how to use specific output formats as XML or XHTML and HTML5.
#### IDEs Integration[¶](#ides-integration)
Any IDE that supports Twig syntax highlighting and auto-completion should be configured to support Twital. Here you can find a list of [IDEs that support Twig/Twital](http://twig.sensiolabs.org/doc/templates.html#ides-integration)
#### Variables[¶](#variables)
To print the content of variables, you can use exactly the same Twig syntax, using Twig functions, filters etc.
```
{{ foo.bar }}
{{ foo['bar'] }}
{{ attribute(foo, 'data-foo') }}
```
Note
To learn more about Twig variables, you can read the [Twig official documentation](http://twig.sensiolabs.org/doc/templates.html#variables)
##### Setting Variables[¶](#setting-variables)
The `t:set` attribute acts in the same way as Twig’s `set` tag and allows you to set a variable from a template.
```
<div t:set="name = 'Tom'">
    Hello {{ name }}
</div>

<t:omit t:set="numbers = [1,2], items = {'item':'one'}"/>
{# the t:omit tag will not be output, but t:set will work #}
```
Note
To learn more about Twig `set`, you can read the [Twig official documentation](http://twig.sensiolabs.org/doc/tags/set.html)
#### Filters[¶](#filters)
You can use all Twig filters directly in Twital. Here is just an example:
```
{{ name|striptags|title }}
{{ list|join(', ') }}
```
You can also use the Twital attribute `t:filter` to filter the content of an element.
```
<div t:filter="upper">
    This text becomes uppercase
</div>
```
Note
To learn more about Twig filters, you can read the [Twig official documentation](http://twig.sensiolabs.org/doc/templates.html#filters)
#### Functions[¶](#functions)
You can use all Twig functions directly from Twital. For instance, the `range` function returns a list containing an arithmetic progression of integers:
```
<div t:for="i in range(0, 3)">
    {{ i }},
</div>
```
Note
To learn more about Twig functions, you can read the [Twig official documentation](http://twig.sensiolabs.org/doc/templates.html#functions)
#### Control Structure[¶](#control-structure)
Almost all Twig control structures have a Twital equivalent node or attribute. For example, to display a list of users, provided in a variable called `users`, use the [for](index.html#document-tags/for) attribute:
```
<h1>Members</h1>
<ul>
    <li t:for="user in users">
        {{ user.username|e }}
    </li>
</ul>
```
The [if](index.html#document-tags/if) attribute can be used to test an expression:
```
<ul t:if="users|length">
    <li t:for="user in users">
        {{ user.username|e }}
    </li>
</ul>
```
Go to the [tags](index.html#document-tags/index) page to learn more about the built-in attributes and nodes. To learn more about Twig control structures, you can read the [Twig official documentation](http://twig.sensiolabs.org/doc/templates.html#control-structure)
#### Attributes[¶](#attributes)
To create HTML/XML attributes, you do not have to mess up HTML tags with control structures. Twital makes things really easy! Take a look at the following example:
```
<div t:attr=" condition ? class='header'">
    My Company
</div>
```
Using the [t:attr](index.html#document-tags/attr) attribute, you can conditionally add an attribute depending on the value of the `condition` expression.
You can use any Twig expression as a condition or attribute value. The attribute name must be a literal. ``` <div t:attr=" users | length ? class='header'|upper , item in array ? class=item"> The last condition that evaluates to true wins. </div> ``` You can also append some content to existing attributes using [t:attr-append](index.html#document-tags/attr-append). ``` <div class="row" t:attr-append=" i mod 2 ? class=' even'"> class will be "row even" if 'i' is odd. </div> ``` If not needed, you can omit the condition instruction. ``` <div t:attr="class='row'" t:attr-append=" class=' even'"> Class will be "row even" </div> ``` To remove an attribute: ``` <div t:attr=" condition ? class=null"> The class attribute will be removed if 'condition' is true. </div> ``` #### Comments[¶](#comments) To comment out part of a template, you can use the Twig comment syntax `{# ... #}`. #### Including other Templates[¶](#including-other-templates) The [include](index.html#document-tags/include) tag is useful for including a template and returning the rendered content of that template into the current one: ``` <t:include from="sidebar.html"/> ``` Inclusions work exactly as in Twig. Note To learn more about Twig inclusion techniques, you can read the [Twig official documentation](http://twig.sensiolabs.org/doc/templates.html#including-other-templates) #### Template Inheritance[¶](#template-inheritance) Twital’s template inheritance is almost identical to Twig’s. Twital just adds some features that are useful for defining new blocks. Here we define a base template, `base.html`, which defines a simple HTML skeleton document that you might use for a simple two-column page: ``` <!DOCTYPE html> <html> <head t:block="head"> <link rel="stylesheet" href="style.css" /> <title t:block="title">My Webpage</title> </head> <body> <div id="content" t:block="content"> </div> <div id="footer" t:block="footer"> &copy; Copyright 2011 by <a href="http://domain.invalid/">you</a>. </div> </body> </html> ``` In this example, the [t:block](index.html#document-tags/block) attributes define four blocks that child templates can fill in. All the `t:block` attributes tell the template engine that a child template may override those portions of the template. A child template might look like this: ``` <t:extends from="base.html"> <t:block name="title">Index</t:block> <t:block name="head"> {{ parent() }} <style type="text/css"> .important { color: #336699; } </style> </t:block> <t:block name="content"> <h1>Index</h1> <p class="important"> Welcome to my awesome homepage. </p> </t:block> </t:extends> ``` The [t:extends](index.html#document-tags/extends) node tells the template engine that the template “extends” another template. When the template system evaluates the template, it first locates the parent. The extends tag should be the first tag in the template. Note that, since the child template does not define the `footer` block, the value from the parent template is used instead. To render the contents of the parent block, use the [parent](http://twig.sensiolabs.org/doc/functions/parent.html) Twig function. The following template gives back the results of the parent block: ``` <t:block name="sidebar"> <h3>Table Of Contents</h3> ... {{ parent() }} </t:block> ``` Tip The documentation page for the [extends](index.html#document-tags/extends) tag describes more advanced features like block nesting, scope, dynamic inheritance, and conditional inheritance.
Note To learn more about Twig inheritance, you can read the [Twig official documentation](http://twig.sensiolabs.org/doc/templates.html#template-inheritance) #### Macros[¶](#macros) Twital also supports Twig macros. A macro is defined via the [macro](index.html#document-tags/macro) tag. Note To learn more about Twig macros, you can read the [Twig official documentation](http://twig.sensiolabs.org/doc/templates.html#macros) #### Expressions, Literals and Operators[¶](#expressions-literals-and-operators) All expressions, literals and operators that can be used with Twig can also be used with Twital. Note Pay attention to HTML/XML escaping rules (eg: &lt; or > inside attributes). #### Whitespace Control[¶](#whitespace-control) Twital will try to respect almost all whitespace that you type. To remove whitespace between HTML tags, you can use the `t:spaceless` attribute: ``` <div t:spaceless=""> <strong>foo bar</strong> </div> {# output will be <div><strong>foo bar</strong></div> #} ``` Twital behaves the same as Twig in whitespace handling. #### Extensions[¶](#extensions) Twital can be easily extended. To learn how to create your own extension, you can read the Extending Twital chapter. ### Twital for Developers[¶](#twital-for-developers) This chapter describes the PHP API of Twital and not the template language. It is mostly aimed at developers who want to integrate Twital in their projects. #### Basics[¶](#basics) Twital is a Twig Loader that pre-compiles some templates before sending them back to Twig, which compiles and runs the templates. The first step to using Twital is to configure a valid Twig instance; then we can configure the Twital object. ``` <?php use Goetas\Twital\TwitalLoader; $loader = new Twig_Loader_Filesystem('/path/to/templates'); $twitalLoader = new TwitalLoader($loader); $twig = new Twig_Environment($twitalLoader, array( 'cache' => '/path/to/compilation_cache', )); ``` By default, Twital compiles only templates whose name ends with .twital.xml, .twital.html, .twital.xhtml (by using the right source adapter). If you want to change this, adding more supported file formats, you can do something like this: ``` <?php $twital = new TwitalLoader($loader); $twital->addSourceAdapter('/\.wsdl$/', new XMLAdapter()); // handle .wsdl files as XML $twital->addSourceAdapter('/\.htm$/', new HTML5Adapter()); // handle .htm files as HTML5 ``` Note Built-in adapters are: XMLAdapter, XHTMLAdapter and HTML5Adapter. Note To learn more about adapters, you can read the dedicated chapter “Creating a SourceAdapter”. Finally, to render a template with some variables, simply call the `render()` method on the Twig instance: ``` <?php echo $twig->render('index.twital.html', array('the' => 'variables', 'go' => 'here')); ``` ##### How does Twital work?[¶](#how-does-twital-work) Twital uses Twig to render templates, but before passing a template to Twig, Twital pre-compiles it in its own way. The rendering of a template can be summarized into the following steps: * **Load** the template (done by Twig): if the template has already been compiled, Twig loads it and goes to the *evaluation* step. Otherwise: + A SourceAdapter is chosen (from a set of configured adapters); + The CompilerEvents::PRE_LOAD event is fired; here, listeners can transform the template source code before DOM loading; + The SourceAdapter will load the source code into a valid [DOMDocument](http://www.php.net/manual/en/class.domdocument.php) object; + The CompilerEvents::POST_LOAD event is fired.
+ The compiler transforms recognized attributes and nodes into the relative Twig code; + The CompilerEvents::PRE_DUMP event is fired; + The SourceAdapter will dump the compiled DOMDocument into Twig source code; + The CompilerEvents::POST_DUMP event is fired; here, listeners can perform some non-DOM transformations on the new template source code; + Twital passes the final source code to Twig (finally, Twig compiles the Twig source code into PHP code). * **Evaluate** the template: Twig calls the `display()` method of the compiled template by passing a context. #### Extending Twital[¶](#extending-twital) Like Twig, Twital is very extensible and you can hook into it. The best way to extend Twital is to create your own “extension” and provide your functionalities. ##### Creating a SourceAdapter[¶](#creating-a-sourceadapter) Source adapters adapt a resource representation (usually a file or a string) to something that can be converted into a PHP [DOMDocument](http://www.php.net/manual/en/class.domdocument.php) object. Note that the same object has to be “re-adapted” into its original representation. If you want to provide a source adapter, there is no need to create an extension; you can simply implement the `Goetas\Twital\SourceAdapter` interface and use it. To enable an adapter, you have to add it to Twital’s loader instance by using the `addSourceAdapter()` method: ``` <?php use Goetas\Twital\TwitalLoader; $twital = new TwitalLoader($fileLoader); $twital->addSourceAdapter('/.*.xml$/i', new MyXMLAdapter()); ``` A “naive” implementation of MyXMLAdapter can be: ``` <?php use Goetas\Twital\SourceAdapter; use Goetas\Twital\Template; class MyXMLAdapter implements SourceAdapter { public function load($source) { $dom = new \DOMDocument('1.0', 'UTF-8'); $someMetadata = null; // you can also extract some metadata from the original source return new Template($dom, $someMetadata); } public function dump(Template $template) { $metadata = $template->getMetadata(); $dom = $template->getDocument(); return $dom->saveXML(); } } ``` * As you can see, `load` takes a string (containing the Twital template source code), and returns a `Goetas\Twital\Template` object. * `Goetas\Twital\Template` is an object that requires a [DOMDocument](http://www.php.net/manual/en/class.domdocument.php) as first argument and a generic variable as second argument (useful to hold some metadata extracted from the original source, which can be used later during the “dump” phase). * The `dump` method takes a `Goetas\Twital\Template` instance and returns a string. The returned string contains the template source code that will be passed to Twig. ##### Creating an Extension[¶](#creating-an-extension) An extension is simply a container of functionalities that can be added to Twital. The functionalities are node parsers, attribute parsers and generic event listeners. To create an extension, you have to implement the `Goetas\Twital\Extension` interface (which declares the methods that return your node parsers, attribute parsers and event subscribers) or extend the `Goetas\Twital\Extension\AbstractExtension` class. To enable your extensions, you have to add them to your Twital instance by using the `Goetas\Twital\Twital::addExtension()` method: ``` <?php use Goetas\Twital\Twital; use Goetas\Twital\TwitalLoader; $twital = new Twital($twig); $twital->addExtension(new MyNewCustomExtension()); $fsLoader = new Twig_Loader_Filesystem('/path/to/templates'); $twitalLoader = new TwitalLoader($fsLoader, $twital); ``` Tip The bundled extensions are great examples of how extensions work.
Note In some special cases you may need to create a Twig extension instead of a Twital one. To learn how to create a Twig extension, you can read the [Twig official documentation](http://twig.sensiolabs.org/doc/advanced.html) ##### Creating a Node parser[¶](#creating-a-node-parser) Node parsers are aimed at handling any custom XML/HTML tag. Suppose that you want to create an extension to handle a tag `<my:hello>` that simply prints “Hello {name}”: ``` <div class="red" xmlns:my="http://www.example.com/namespace"> <my:hello name="John"/> </div> ``` First, you have to create your node parser, which handles this “new” tag. To do this, you have to implement the `Goetas\Twital\Node` interface. The `HelloNode` class can be something like this: ``` <?php use Goetas\Twital\Node; use Goetas\Twital\Compiler; class HelloNode implements Node { function visit(\DOMElement $node, Compiler $twital) { $helloNode = $node->ownerDocument->createTextNode("Hello "); $nameNode = $twital->createPrintNode( $node->ownerDocument, "'".$node->getAttribute("name")."'" ); $node->parentNode->replaceChild($nameNode, $node); $node->parentNode->insertBefore($helloNode, $nameNode); } } ``` Let’s take a look at the `Goetas\Twital\Node::visit` method signature: * `$node` gets the `DOMElement` node of your `my:hello` tag; * `$twital` gets the Twital compiler; * No return value for the `visit` method is required. The aim of the `Goetas\Twital\Node::visit` method is to transform the Twital template representation into the Twig template syntax. Tip `$compiler->applyTemplatesToChilds()`, `$compiler->applyTemplates()` or `$compiler->applyTemplatesToAttributes()` can be very useful when you need to recursively process the content of a node. Finally, you have to create an extension that ships your node parser. ``` <?php class MyExtension extends AbstractExtension { public function getNodes() { return array( 'http://www.example.com/namespace'=>array( 'hello' => new HelloNode() ) ); } } ``` As you can see, the `getNodes` method has to return a two-level hash. * The first level is the node namespace; * The second level is the node name. Of course, an extension can ship nodes that work with multiple namespaces. Tip To make the `xmlns:my` declaration optional, you can also use an event listener such as `Goetas\Twital\EventSubscriber\CustomNamespaceRawSubscriber`. ##### Creating an Attribute parser[¶](#creating-an-attribute-parser) An attribute parser aims at handling custom XML/HTML attributes. Suppose that we want to create an extension to handle an attribute that simply appends some text inside a node, removing its original content. ``` <div class="red" xmlns:my="http://www.example.com/namespace"> <p my:replace="rawHtmlVar"> This text will be replaced with the content of the "rawHtmlVar" variable. </p> </div> ``` To add your attribute parser, first you have to implement the `Goetas\Twital\Attribute` interface. The `HelloAttribute` class can be something like this: ``` <?php class HelloAttribute implements Attribute { function visit(\DOMAttr $attr, Compiler $twital) { // build a {{ rawHtmlVar | raw }} print node from the attribute value $printNode = $twital->createPrintNode($attr->ownerElement->ownerDocument, $attr->value . " | raw"); // drop the original content and append the print node $attr->ownerElement->nodeValue = ''; $attr->ownerElement->appendChild($printNode); return Attribute::STOP_NODE; } } ``` Let’s take a look at the `Goetas\Twital\Attribute::visit` method: * `$attr` gets the DOMAttr node of your attribute; * `$twital` gets the Twital compiler. The `visit` method has to transform the custom attribute into valid Twig code.
The `visit` method can also return one of the following constants: * `Attribute::STOP_NODE`: instructs the compiler to jump to the next node (go to the next sibling), stopping the processing of possible node children; * `Attribute::STOP_ATTRIBUTE`: instructs the compiler to stop processing attributes of the current node (and continue normally with child and sibling nodes). Finally, you have to create an extension that ships your attribute parser. ``` <?php class MyExtension extends AbstractExtension { public function getAttributes() { return array( 'http://www.example.com/namespace'=>array( 'replace' => new HelloAttribute() ) ); } } ``` As you can see, the `getAttributes` method has to return a two-level hash. * The first level is the attribute namespace; * The second level is the attribute name. Of course, an extension can ship attribute parsers that work with multiple namespaces. Tip To make the `xmlns:my` declaration optional, you can also use an event listener such as `Goetas\Twital\EventSubscriber\CustomNamespaceRawSubscriber`. ##### Event Listeners[¶](#event-listeners) Another convenient way to hook into Twital is to create an event listener. The possible entry points for listeners are: * `Twital\EventDispatcher\CompilerEvents::PRE_LOAD`, fired before the source has been passed to the source adapter; * `Twital\EventDispatcher\CompilerEvents::POST_LOAD`, fired after the source has been loaded into a DOMDocument; * `Twital\EventDispatcher\CompilerEvents::PRE_DUMP`, fired before the DOMDocument has been passed to the source adapter for the dumping phase; * `Twital\EventDispatcher\CompilerEvents::POST_DUMP`, fired after the DOMDocument has been dumped into a string by the source adapter. A valid listener must implement the `Symfony\Component\EventDispatcher\EventSubscriberInterface` interface. Here is an example of a valid listener: ``` <?php class MySubscriber implements EventSubscriberInterface { public static function getSubscribedEvents() { return array( CompilerEvents::POST_DUMP => 'modifySource', CompilerEvents::PRE_DUMP => 'modifyDOM', CompilerEvents::POST_LOAD => 'modifyDOM', CompilerEvents::PRE_LOAD => 'modifySource' ); } public function modifyDOM(TemplateEvent $event) { $event->getTemplate(); // do something with the template (returns a Template instance) } public function modifySource(SourceEvent $event) { $event->getTemplate(); // do something with the template (returns a string) } } ``` ###### Event `CompilerEvents::PRE_LOAD`[¶](#event-compilerevents-pre-load) This event is fired just before a SourceAdapter tries to load the source code into a [DOMDocument](http://www.php.net/manual/en/class.domdocument.php). Here you can modify the source code, adapting it for a source adapter. Here is an example: ``` <?php class MySubscriber implements EventSubscriberInterface { public static function getSubscribedEvents(){ return array( CompilerEvents::PRE_LOAD => 'modifySource' ); } public function modifySource(SourceEvent $event) { $str = $event->getTemplate(); $str = str_replace("&nbsp;", "&#160;", $str); $event->setTemplate($str); } } ``` Tip Take a look at `Goetas\Twital\EventSubscriber\CustomNamespaceRawSubscriber` to see what can be done using this event. ###### Event `CompilerEvents::POST_LOAD`[¶](#event-compilerevents-post-load) This event is fired just after a `Goetas\Twital\SourceAdapter::load()` call. Here you can modify the [DOMDocument](http://www.php.net/manual/en/class.domdocument.php) object; it is a good place to apply modifications that can’t be done by node parsers.
You can also add nodes that will be parsed by Twital (eg: `t:if` attributes, `t:include` nodes, etc). Here is an example: ``` <?php class MySubscriber implements EventSubscriberInterface { public static function getSubscribedEvents() { return array( CompilerEvents::POST_LOAD => 'modifyDOM' ); } public function modifyDOM(TemplateEvent $event) { $template = $event->getTemplate(); $dom = $template->getDocument(); $nodes = $dom->getElementsByTagName('mynode'); // do something with $nodes } } ``` Tip Take a look at `Goetas\Twital\EventSubscriber\CustomNamespaceSubscriber` to see what can be done using this event. ###### Event `CompilerEvents::PRE_DUMP`[¶](#event-compilerevents-pre-dump) This event is fired when the Twital compilation process ends. It is similar to the `CompilerEvents::POST_LOAD` event, but you cannot add elements that need to be parsed by Twital. Here is an example: ``` <?php class MySubscriber implements EventSubscriberInterface { public static function getSubscribedEvents(){ return array( CompilerEvents::PRE_DUMP => 'modifyDOM' ); } public function modifyDOM(TemplateEvent $event) { $template = $event->getTemplate(); $dom = $template->getDocument(); $body = $dom->getElementsByTagName('body')->item(0); // do something with the body node... } } ``` ###### Event `CompilerEvents::POST_DUMP`[¶](#event-compilerevents-post-dump) This event is fired just after the `Goetas\Twital\SourceAdapter::dump()` call. Here you can modify the final source code, which will be passed to Twig. Here is an example: ``` <?php class MySubscriber implements EventSubscriberInterface { public static function getSubscribedEvents() { return array( CompilerEvents::POST_DUMP => 'modifySource' ); } public function modifySource(SourceEvent $event) { $str = $event->getTemplate(); $str .= " {# generated by Twital #}"; $event->setTemplate($str); } } ``` Tip Take a look at `Goetas\Twital\EventSubscriber\DOMMessSubscriber` to see what can be done using this event. ###### Ship your listeners[¶](#ship-your-listeners) Once you have created your listeners, add them to Twital. To do this, you have to create an extension that ships your listeners. ``` <?php class MyExtension extends AbstractExtension { public function getSubscribers() { return array( new MySubscriber(), new MyNewSubscriber() ); } } ``` ### Common mistakes and tricks[¶](#common-mistakes-and-tricks) Since Twital internally uses XML, you need to pay attention to some aspects while writing a template. All templates must be valid XML (some exceptions are allowed…). * All templates must have **one** root node. When needed, you can use the t:omit node to enclose other nodes. ``` <t:omit> <div>one</div> <div>two</div> </t:omit> ``` * A template must be well formed (opening and closing nodes, entities, DTD, etc…). Some aspects such as namespaces, HTML5 & HTML entities and non-self-closing tags can be “repaired”, but it is recommended to stay as close to XML as possible. The example below lacks the br self-closing slash, but using the HTML5 source adapter it can be omitted. ``` <div> <br> </div> ``` * The usage of & must follow XML syntax rules. ``` <div> &amp; <!-- to output "&" you have to write "&amp;" --> &lt; <!-- to output "<" you have to write "&lt;" --> &gt; <!-- to output ">" you have to write "&gt;" --> <!-- you can use all numeric entities --> &#160; &#160; <!-- you should not use named entities (&euro;) --> </div> ``` * To be compatible with all browsers, the use of the script tag should be combined with CDATA sections and script comments.
``` <script> //<![CDATA[ if ( 1 > 2 && 2 < 0){ alert(' ok ') } //]]> </script> <style> /*<![CDATA[*/ head { color: red; } /*]]>*/ </style> ``` ### Symfony Users[¶](#symfony-users) If you are a [Symfony](https://symfony.com) user, the most convenient way to integrate Twital into your project is using the [TwitalBundle](https://github.com/goetas/twital-bundle). The bundle integrates the most common Symfony functionalities such as Assetic, Forms, Translations, Routing, etc. Contributing[¶](#contributing) --- This is an open source project: contributions are welcome. If you are interested, you can contribute to the documentation, source code, test suite or anything else! To start contributing right now, go to <https://github.com/goetas/twital> and fork it! To improve your contributing experience, you can take a look at <https://github.com/goetas/twital/blob/master/CONTRIBUTING.md> in the root directory of the Twital Git repository. Note[¶](#note) --- I’m sorry for the *terrible* English fluency used inside the documentation, I’m trying to improve it. Pull Requests are welcome.
ECOTOXr
cran
R
Package ‘ECOTOXr’ October 9, 2023 Type Package Title Download and Extract Data from US EPA's ECOTOX Database Version 1.0.5 Date 2023-10-09 Author <NAME> [aut, cre, dtc] (<https://orcid.org/0000-0002-7961-6646>) Maintainer <NAME> <<EMAIL>> Description The US EPA ECOTOX database is a freely available database with a treasure of aquatic and terrestrial ecotoxicological data. As the online search interface doesn't come with an API, this package provides the means to easily access and search the database in R. To this end, all raw tables are downloaded from the EPA website and stored in a local SQLite database. Depends R (>= 3.5.0), RSQLite Imports crayon, dbplyr, dplyr, httr, jsonlite, lifecycle, purrr, rappdirs, readr, readxl, rlang, rvest, stringr, tibble, tidyr, tidyselect, utils Suggests DBI, standartox, testthat (>= 3.0.0), webchem URL https://github.com/pepijn-devries/ECOTOXr BugReports https://github.com/pepijn-devries/ECOTOXr/issues License GPL (>= 3) Encoding UTF-8 RoxygenNote 7.2.3 Config/testthat/edition 3 NeedsCompilation no Repository CRAN Date/Publication 2023-10-09 18:30:08 UTC R topics documented: build_ecotox_sqlit... 2 ca... 4 check_ecotox_availabilit... 7 cite_ecoto... 8 dbConnectEcoto... 9 download_ecotox_dat... 10 get_ecotox_inf... 11 get_ecotox_sqlite_fil... 12 get_ecotox_ur... 13 list_ecotox_field... 14 search_ecoto... 15 websearch_compto... 18 websearch_ecoto... 21 %>... 22 build_ecotox_sqlite Build an SQLite database from zip archived tables downloaded from EPA website Description [Stable] This function is called automatically after download_ecotox_data(). The database files can also be downloaded manually from the EPA website, from which a local database can be built using this function. Usage build_ecotox_sqlite(source, destination = get_ecotox_path(), write_log = TRUE) Arguments source A character string pointing to the directory path where the text files with the raw tables are located. These can be obtained by extracting the zip archive from https://cfpub.epa.gov/ecotox/ (look for ’Download ASCII Data’). destination A character string representing the destination path for the SQLite file. By default this is get_ecotox_path(). write_log A logical value indicating whether a log file should be written to the destination path. Default is TRUE. The log contains information on the source and destination path, the version of this package, the creation date, and the operating system on which the database was created. Details Raw data downloaded from the EPA website is in itself not very efficient to work with in R. The files are large and would put a large strain on R when loaded completely into the system’s memory. Instead, use this function to build an SQLite database from the tables. That way, the data can be queried without having to load it all into memory. EPA provides the raw tables from the ECOTOX database as text files with pipe-characters (’|’) as table column separators. Although not documented, the tables appear not to contain comment or quotation characters. There are records containing the reserved pipe-character that will confuse the table parser. For these records, the pipe-character is replaced with a dash character (’-’). In addition, while reading the tables as text files, this package attempts to decode the text as UTF8. Unfortunately, this process appears to be platform-dependent, and may therefore result in different end-results on different platforms. This problem only seems to occur for characters that are listed as ’control characters’ under UTF8.
This will have consequences for reproducibility, but only if you build search queries that look for such special characters. It is therefore advised to stick to common (non-accented) alpha-numerical characters in your searches, for the sake of reproducibility. Use ’suppressMessages()’ to suppress the progress report. Value Returns NULL invisibly. Author(s) <NAME> Examples ## Not run: ## This example will only work properly if 'dir' points to an existing directory ## with the raw tables from the ECOTOX database. This function will be called ## automatically after a call to 'download_ecotox_data()'. test <- check_ecotox_availability() if (test) { files <- attributes(test)$files[1,] dir <- gsub(".sqlite", "", files$database, fixed = T) path <- files$path if (dir.exists(file.path(path, dir))) { ## This will build the database in your temp directory: build_ecotox_sqlite(source = file.path(path, dir), destination = tempdir()) } } ## End(Not run) cas Functions for handling Chemical Abstracts Service (CAS) registry numbers Description [Stable] Functions for handling Chemical Abstracts Service (CAS) registry numbers Usage cas(length = 0L) is.cas(x) as.cas(x) ## S3 method for class 'cas' x[[i]] ## S3 method for class 'cas' x[i] ## S3 replacement method for class 'cas' x[[i]] <- value ## S3 replacement method for class 'cas' x[i] <- value ## S3 method for class 'cas' format(x, hyphenate = TRUE, ...) ## S3 method for class 'cas' as.character(x, ...) show.cas(x, ...) ## S3 method for class 'cas' print(x, ...) ## S3 method for class 'cas' as.list(x, ...) ## S3 method for class 'cas' as.double(x, ...) ## S3 method for class 'cas' as.integer(x, ...) ## S3 method for class 'cas' c(...) ## S3 method for class 'cas' as.data.frame(...) Arguments length A non-negative integer specifying the desired length. Double values will be coerced to integer: supplying an argument of length other than one is an error. x Object from which data needs to be extracted or replaced, or needs to be coerced into a specific format. For nearly all of the functions documented here, this needs to be an object of the S3 class ’cas’, which can be created with as.cas. For as.cas, x can be a character (CAS registry number with or without hyphenation) or a numeric value. Note that as.cas will only accept correctly formatted and valid CAS registry numbers. i Index specifying element(s) to extract or replace. See also base::Extract(). value A replacement value, can be anything that can be converted into an S3 cas-class object with as.cas. hyphenate A logical value indicating whether the formatted CAS number needs to be hyphenated. Default is TRUE. ... Arguments passed to other functions Details In the database CAS registry numbers are stored as text (type character). As CAS numbers can consist of a maximum of 10 digits (plus two hyphens) this means that each CAS number can consume up to 12 bytes of memory or disk space. By storing the data numerically, only 5 bytes are required. These functions provide the means to handle CAS registry numbers and coerce from and to different formats and types. Value Functions cas, c and as.cas return S3 class ’cas’ objects. Coercion functions (starting with ’as’) return the object as specified by their respective function names (i.e., integer, double, character, list and data.frame). The show.cas and print functions also return formatted characters. The function is.cas will return a single logical value, indicating whether x is a valid S3 cas-class object.
The square brackets return the selected index/indices, or the vector of cas objects where the selected elements are replaced by value. Author(s) <NAME> Examples ## This will generate a vector of cas objects containing 10 ## fictive (0-00-0), but valid registry numbers: cas(10) ## This is a cas-object: is.cas(cas(0L)) ## This is not a cas-object: is.cas(0L) ## Three different ways of creating a cas object from ## Benzene's CAS registry number (the result is the same) as.cas("71-43-2") as.cas("71432") as.cas(71432L) ## This is one way of creating a vector with multiple CAS registry numbers: cas_data <- as.cas(c("64175", "71432", "58082")) ## This is how you select a specific element(s) from the vector: cas_data[2:3] cas_data[[2]] ## You can also replace specific elements in the vector: cas_data[1] <- "7440-23-5" cas_data[[2]] <- "129-00-0" ## You can format CAS numbers with or without hyphens: format(cas_data, TRUE) format(cas_data, FALSE) ## The same can be achieved using as.character as.character(cas_data, TRUE) as.character(cas_data, FALSE) ## There are also show and print methods available: show(cas_data) print(cas_data) ## Numeric values can be obtained from CAS using as.numeric, as.double or as.integer as.numeric(cas_data) ## Be careful, however. Some CAS numbers cannot be represented by R's 32 bit integers ## and will produce NA's. This will work OK: huge_cas <- as.cas("9999999-99-5") ## Not run: ## This will not: as.integer(huge_cas) ## End(Not run) ## The trick applied by this package is that the final ## validation digit is stored separately as attribute: unclass(huge_cas) ## This is how cas objects can be concatenated: cas_data <- c(huge_cas, cas_data) ## This will create a data.frame as.data.frame(cas_data) ## This will create a list: as.list(cas_data) check_ecotox_availability Check whether a ECOTOX database exists locally Description [Stable] Tests whether a local copy of the US EPA ECOTOX database exists in get_ecotox_path(). Usage check_ecotox_availability(target = get_ecotox_path()) Arguments target A character string specifying the path where to look for the database file. Details When arguments are omitted, this function will look in the default directory (get_ecotox_path()). However, it is possible to build a database file elsewhere if necessary. Value Returns a logical value indicating whether a copy of the database exists. It also returns a files attribute that lists which copies of the database are found. Author(s) <NAME> Examples check_ecotox_availability() cite_ecotox Cite the downloaded copy of the ECOTOX database Description [Stable] Cite the downloaded copy of the ECOTOX database and this package for reproducible results. Usage cite_ecotox(path = get_ecotox_path(), version) Arguments path A character string with the path to the location of the local database (default is get_ecotox_path()). version A character string referring to the release version of the database you wish to locate. It should have the same format as the date in the EPA download link, which is month, day, year, separated by underscores ("%m_%d_%Y"). When missing, the most recent available copy is selected automatically. Details When you download a copy of the EPA ECOTOX database using download_ecotox_data(), a BibTeX file is stored that registers the database release version and the access (= download) date. Use this function to obtain a citation to that specific download. In order for others to reproduce your results, it is key to cite the data source as accurately as possible.
Value Returns a vector of bibentry()’s, containing a reference to the downloaded database and this package. Author(s) <NAME> Examples ## Not run: ## In order to cite downloaded database and this package: cite_ecotox() ## End(Not run) dbConnectEcotox Open or close a connection to the local ECOTOX database Description [Stable] Wrappers for dbConnect() and dbDisconnect() methods. Usage dbConnectEcotox(path = get_ecotox_path(), version, ...) dbDisconnectEcotox(conn, ...) Arguments path A character string with the path to the location of the local database (default is get_ecotox_path()). version A character string referring to the release version of the database you wish to locate. It should have the same format as the date in the EPA download link, which is month, day, year, separated by underscores ("%m_%d_%Y"). When missing, the most recent available copy is selected automatically. ... Arguments that are passed to the dbConnect() method or dbDisconnect() method. conn An open connection to the ECOTOX database that needs to be closed. Details Open or close a connection to the local ECOTOX database. These functions are only required when you want to send custom queries to the database. For most searches the search_ecotox() function will be adequate. Value A database connection in the form of a DBI::DBIConnection-class() object. The object is tagged with: a time stamp; the package version used; and the file path of the SQLite database used in the connection. These tags are added as attributes to the object. Author(s) <NAME> Examples ## Not run: ## This will only work when a copy of the database exists: con <- dbConnectEcotox() ## check if the connection works by listing the tables in the database: dbListTables(con) ## Let's be a good boy/girl and close the connection to the database when we're done: dbDisconnectEcotox(con) ## End(Not run) download_ecotox_data Download and extract ECOTOX database files and compose database Description [Stable] In order for this package to fully function, a local copy of the ECOTOX database needs to be built. This function will download the required data and build the database. Usage download_ecotox_data( target = get_ecotox_path(), write_log = TRUE, ask = TRUE, verify_ssl = getOption("ECOTOXr_verify_ssl"), ... ) Arguments target Target directory where the files will be downloaded and the database compiled. Default is get_ecotox_path(). write_log A logical value indicating whether a log file should be written to the target path. Default is TRUE. ask There are several steps in which files are (potentially) overwritten or deleted. In those cases the user is asked on the command line what to do. Set this parameter to FALSE in order to continue without warnings and questions. verify_ssl When set to FALSE the SSL certificate of the host (EPA) is not verified. Can also be set as option: options(ECOTOXr_verify_ssl = TRUE). Default is TRUE. ... Arguments passed on to httr::GET(). Details This function will attempt to find the latest download url for the ECOTOX database from the EPA website (see get_ecotox_url()). When found, it will attempt to download the zipped archive containing all required data. This data is then extracted and a local copy of the database is built. Use ’suppressMessages()’ to suppress the progress report. Value Returns NULL invisibly. Known issues On some machines this function fails to connect to the database download URL from the EPA website due to missing SSL certificates. Unfortunately, there is no easy fix for this in this package.
A workaround is to download and unzip the file manually using a different machine or browser that is less strict with SSL certificates. You can then call build_ecotox_sqlite() and point the source location to the manually extracted zip archive. For this purpose get_ecotox_url() can be used. Alternatively, one could try to call download_ecotox_data() by setting verify_ssl = FALSE; but only do so when you trust the download URL from get_ecotox_url(). Author(s) <NAME> Examples ## Not run: ## This will download and build the database in your temp dir: download_ecotox_data(tempdir()) ## End(Not run) get_ecotox_info Get information on the local ECOTOX database when available Description [Stable] Get information on how and when the local ECOTOX database was built. Usage get_ecotox_info(path = get_ecotox_path(), version) Arguments path A character string with the path to the location of the local database (default is get_ecotox_path()). version A character string referring to the release version of the database you wish to locate. It should have the same format as the date in the EPA download link, which is month, day, year, separated by underscores ("%m_%d_%Y"). When missing, the most recent available copy is selected automatically. Details Get information on how and when the local ECOTOX database was built. This information is retrieved from the log-file that is (optionally) stored with the local database when calling download_ecotox_data() or build_ecotox_sqlite(). Value Returns a vector of characters, containing information on the selected local ECOTOX database. Author(s) <NAME> Examples ## Not run: ## Show info on the current database (only works when one is downloaded and build): get_ecotox_info() ## End(Not run) get_ecotox_sqlite_file The local path to the ECOTOX database (directory or sqlite file) Description [Stable] Obtain the local path to where the ECOTOX database is (or will be) placed. Usage get_ecotox_sqlite_file(path = get_ecotox_path(), version) get_ecotox_path() Arguments path When you have a copy of the database somewhere other than the default directory (get_ecotox_path()), you can provide the path here. version A character string referring to the release version of the database you wish to locate. It should have the same format as the date in the EPA download link, which is month, day, year, separated by underscores ("%m_%d_%Y"). When missing, the most recent available copy is selected automatically. Details It can be useful to know where the database is located on your disk. This function returns the location as provided by rappdirs::app_dir(), or as specified by you using options(ECOTOXr_path = "mypath"). Value Returns a character string of the path. get_ecotox_path will return the default directory of the database. get_ecotox_sqlite_file will return the path to the sqlite file when it exists. Author(s) <NAME> Examples get_ecotox_path() ## Not run: ## This will only work if a local database exists: get_ecotox_sqlite_file() ## End(Not run) get_ecotox_url Get ECOTOX download URL from EPA website Description [Stable] This function downloads the webpage at https://cfpub.epa.gov/ecotox/index.cfm. It then searches for the download link for the complete ECOTOX database and extracts its URL. Usage get_ecotox_url(verify_ssl = getOption("ECOTOXr_verify_ssl"), ...) Arguments verify_ssl When set to FALSE the SSL certificate of the host (EPA) is not verified. Can also be set as option: options(ECOTOXr_verify_ssl = TRUE). Default is TRUE. ...
Arguments passed on to httr::GET(). Details This function is called by download_ecotox_data() which tries to download the file from the resulting URL. On some machines this fails due to issues with the SSL certificate. The user can try to download the file by using this URL in a different browser (or on a different machine). Alternatively, the user could try to use download_ecotox_data(verify_ssl = FALSE) when the download URL is trusted. Value Returns a character string containing the download URL of the latest version of the EPA ECOTOX database. Author(s) <NAME> Examples ## Not run: get_ecotox_url() ## End(Not run) list_ecotox_fields List the field names that are available from the ECOTOX database Description [Stable] List the field names (table headers) that are available from the ECOTOX database Usage list_ecotox_fields( which = c("default", "extended", "full", "all"), include_table = TRUE ) Arguments which A character string that specifies which fields to return. Can be any of: ’default’: returns default output field names; ’all’: returns all fields; ’extended’: returns all fields of the default tables; or ’full’: returns all fields except those from tables ’chemical_carriers’, ’media_characteristics’, ’doses’, ’dose_responses’, ’dose_response_details’, ’dose_response_links’ and ’dose_stat_method_codes’. include_table A logical value indicating whether the table name should be included as prefix. Default is TRUE. Details This can be useful when specifying a search with search_ecotox(), to identify which fields are available from the database, for searching and output. Note that when requesting ’all’ fields, you will get all fields available from the latest EPA release of the ECOTOX database. This means that not necessarily all fields are available in your local build of the database. Value Returns a vector of type character containing the field names from the ECOTOX database. Author(s) <NAME> Examples ## Fields that are included in search results by default: list_ecotox_fields("default") ## All fields that are available from the ECOTOX database: list_ecotox_fields("all") ## All except fields from the tables 'chemical_carriers', 'media_characteristics', ## 'doses', 'dose_responses', 'dose_response_details', 'dose_response_links' and ## 'dose_stat_method_codes' that are available from the ECOTOX database: list_ecotox_fields("full") search_ecotox Search and retrieve toxicity records from the database Description [Stable] Create (and execute) an SQL search query based on basic search terms and options. This allows you to search the database, without having to understand SQL. Usage search_ecotox( search, output_fields = list_ecotox_fields("default"), group_by_results = TRUE, compute = FALSE, as_data_frame = TRUE, ... ) search_ecotox_lazy( search, output_fields = list_ecotox_fields("default"), compute = FALSE, ... ) search_query_ecotox(search, output_fields = list_ecotox_fields("default"), ...) Arguments search A named list containing the search terms. The names of the elements should refer to the field (i.e. table header) in which the terms are searched. Use list_ecotox_fields() to obtain a list of available field names. Each element in that list should contain another list with at least one element named ’terms’. This should contain a vector of character strings with search terms. Optionally, a second element named ’method’ can be provided which should be set to either ’contains’ (default, when missing) or ’exact’.
In the first case the query will match any record in the indicated field that contains the search term. In case of ’exact’ it will only return exact matches. Note that searches are not case sensitive, but are picky with special (accented) characters. While building the local database (see build_ecotox_sqlite) such special characters may be treated differently on different operating systems. For the sake of reproducibility, the user is advised to stick with non-accented alpha-numeric characters. Search terms for a specific field (table header) will be combined with ’or’, meaning that any record that matches any of the terms is returned. For instance when ’latin_name’ ’Daphnia magna’ and ’Skeletonema costatum’ are searched, results for both species are returned. Search terms across fields (table headers) are combined with ’and’, which will narrow the search. For instance if ’chemical_name’ ’benzene’ is searched in combination with ’latin_name’ ’Daphnia magna’, only tests where Daphnia magna are exposed to benzene are returned. When the search behaviour described above is not desirable, the user can either adjust the query manually, or use this function to perform several separate searches and combine the results afterwards. Beware that some field names are ambiguous and occur in multiple tables (like ’cas_number’ and ’code’). When searching such fields, the search result may not be as expected. output_fields A vector of character strings indicating which field names (table headers) should be included in the output. By default list_ecotox_fields("default") is used. Use list_ecotox_fields("all") to list all available fields. group_by_results Ecological test results are generally the most informative element in the ECOTOX database. Therefore, this search function returns a table with unique results in each row. However, some tables in the database (such as ’chemical_carriers’ and ’dose_responses’) have a one-to-many relationship with test results. This means that multiple chemical carriers can be linked to a single test result; similarly, multiple doses can also be linked to a single test result. By default the search results are grouped by test results. As a result not all doses or chemical carriers may be displayed in the output. Set the group_by_results parameter to FALSE in order to force SQLite to output all data (e.g., all carriers). But beware that test results may be duplicated in those cases. compute The ECOTOXr package tries to construct database queries as lazily as possible. Meaning that R moves as much of the heavy lifting as possible to the database. When your search becomes complicated (e.g., when including many output fields), you may run into trouble and hit the SQL parser limits. In those cases you can set this parameter to TRUE. Database queries are then computed in the process of joining tables. This is generally slower. Alternatively, you could try to include fewer output fields in order to simplify the query. as_data_frame [Experimental] logical value indicating whether the result should be converted into a data.frame (default is TRUE). When set to FALSE the data will be returned as a tbl_df(). ... Arguments passed to dbConnectEcotox() and other functions. You can use this when the database is not located at the default path (get_ecotox_path()). Details The ECOTOX database is stored locally as an SQLite file, which can be queried with SQL.
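For users who prefer to write SQL by hand, the sketch below shows what such a direct query could look like. It is a minimal sketch, not one of the package's documented examples: it assumes a local copy of the database has already been built with download_ecotox_data(), and it assumes the database contains a 'species' table with 'latin_name' and 'common_name' fields (available field names can be checked with list_ecotox_fields("all")).

```
## Not run:
## Hand-written SQL on the local copy of the database (hypothetical example);
## the table and field names below are assumptions that can be verified
## with list_ecotox_fields("all"):
con <- dbConnectEcotox()
result <- DBI::dbGetQuery(
  con,
  "SELECT latin_name, common_name FROM species
   WHERE latin_name LIKE '%Daphnia%'"
)
## close the connection when done:
dbDisconnectEcotox(con)
## End(Not run)
```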
These functions allow you to automatically generate an SQL query and send it to the database, without having to understand SQL. The function search_query_ecotox generates and returns the SQL query (which can be edited by hand if desired). You can also directly call search_ecotox; this will first generate the query, send it to the database, and retrieve the result. Although the generated query is not optimized for speed, it should be able to process most common searches within an acceptable time. The time required for retrieving data from a search query depends on the complexity of the query, the size of the query and the speed of your machine. Most queries should be completed within seconds (or several minutes at most) on modern machines. If your search requires optimisation for speed, you could try reordering the search fields. You can also edit the query generated with search_query_ecotox by hand and retrieve it with DBI::dbGetQuery(). Note that this package is actively maintained and this function may be revised in future versions. In order to create reproducible results the user must: always work with an official release from CRAN and document the package and database versions that are used to generate specific results (see also cite_ecotox()). Value In case of search_query_ecotox, a character string containing an SQL query is returned. This query is built based on the provided search terms and options. In case of search_ecotox a data.frame is returned based on the search query built with search_query_ecotox. The data.frame is unmodified as returned by SQLite, meaning that all fields are returned as characters (even where the field types are ’date’ or ’numeric’). The results are tagged with: a time stamp; the package version used; and the file path of the SQLite database used in the search (when applicable). These tags are added as attributes to the output table or query. Author(s) <NAME> See Also Other search-functions: websearch_comptox(), websearch_ecotox() Examples ## Not run: ## let's find the ids of all ecotox tests on species ## where Latin names contain either of 2 specific genus names and ## where they were exposed to the chemical benzene if (check_ecotox_availability()) { search <- list( latin_name = list( terms = c("Skeletonema", "Daphnia"), method = "contains" ), chemical_name = list( terms = "benzene", method = "exact" ) ) ## rows in result each represent a unique test id from the database result <- search_ecotox(search) query <- search_query_ecotox(search) cat(query) } else { print("Sorry, you need to use 'download_ecotox_data()' first in order for this to work.") } ## End(Not run) websearch_comptox Search and retrieve substance information from https://comptox.epa.gov/dashboard Description [Experimental] Search https://comptox.epa.gov/dashboard for substances and their chemico-physical properties and meta-information.
Usage websearch_comptox( searchItems, identifierTypes = c("chemical_name", "CASRN", "INCHIKEY", "dtxsid"), inputType = c("IDENTIFIER", "DTXCID", "INCHIKEY_SKELETON", "MSREADY_FORMULA", "EXACT_FORMULA", "MASS"), downloadItems = c("DTXCID", "CASRN", "INCHIKEY", "IUPAC_NAME", "SMILES", "INCHI_STRING", "MS_READY_SMILES", "QSAR_READY_SMILES", "MOLECULAR_FORMULA", "AVERAGE_MASS", "MONOISOTOPIC_MASS", "QC_LEVEL", "SAFETY_DATA", "EXPOCAST", "DATA_SOURCES", "TOXVAL_DATA", "NUMBER_OF_PUBMED_ARTICLES", "PUBCHEM_DATA_SOURCES", "CPDAT_COUNT", "IRIS_LINK", "PPRTV_LINK", "WIKIPEDIA_ARTICLE", "QC_NOTES", "ABSTRACT_SHIFTER", "TOXPRINT_FINGERPRINT", "ACTOR_REPORT", "SYNONYM_IDENTIFIER", "RELATED_RELATIONSHIP", "ASSOCIATED_TOXCAST_ASSAYS", "TOXVAL_DETAILS", "CHEMICAL_PROPERTIES_DETAILS", "BIOCONCENTRATION_FACTOR_TEST_PRED", "BOILING_POINT_DEGC_TEST_PRED", "48HR_DAPHNIA_LC50_MOL/L_TEST_PRED", "DENSITY_G/CM^3_TEST_PRED", "DEVTOX_TEST_PRED", "96HR_FATHEAD_MINNOW_MOL/L_TEST_PRED", "FLASH_POINT_DEGC_TEST_PRED", "MELTING_POINT_DEGC_TEST_PRED", "AMES_MUTAGENICITY_TEST_PRED", "ORAL_RAT_LD50_MOL/KG_TEST_PRED", "SURFACE_TENSION_DYN/CM_TEST_PRED", "THERMAL_CONDUCTIVITY_MW/(M*K)_TEST_PRED", "TETRAHYMENA_PYRIFORMIS_IGC50_MOL/L_TEST_PRED", "VISCOSITY_CP_CP_TEST_PRED", "VAPOR_PRESSURE_MMHG_TEST_PRED", "WATER_SOLUBILITY_MOL/L_TEST_PRED", "ATMOSPHERIC_HYDROXYLATION_RATE_(AOH)_CM3/MOLECULE*SEC_OPERA_PRED", "BIOCONCENTRATION_FACTOR_OPERA_PRED", "BIODEGRADATION_HALF_LIFE_DAYS_DAYS_OPERA_PRED", "BOILING_POINT_DEGC_OPERA_PRED", "HENRYS_LAW_ATM-M3/MOLE_OPERA_PRED", "OPERA_KM_DAYS_OPERA_PRED", "OCTANOL_AIR_PARTITION_COEFF_LOGKOA_OPERA_PRED", "SOIL_ADSORPTION_COEFFICIENT_KOC_L/KG_OPERA_PRED", "OCTANOL_WATER_PARTITION_LOGP_OPERA_PRED", "MELTING_POINT_DEGC_OPERA_PRED", "OPERA_PKAA_OPERA_PRED", "OPERA_PKAB_OPERA_PRED", "VAPOR_PRESSURE_MMHG_OPERA_PRED", "WATER_SOLUBILITY_MOL/L_OPERA_PRED", "EXPOCAST_MEDIAN_EXPOSURE_PREDICTION_MG/KG-BW/DAY", "NHANES", "TOXCAST_NUMBER_OF_ASSAYS/TOTAL", "TOXCAST_PERCENT_ACTIVE"), massError = 0, timeout = 300, verify_ssl = getOption("ECOTOXr_verify_ssl"), ... ) Arguments searchItems A vector of characters where each element is a substance descriptor (any of the selected identifierTypes) you wish to query. identifierTypes Substance identifiers for searching CompTox. Only used when inputType is set to "IDENTIFIER". inputType Type of input used for searching CompTox. See usage section for valid entries. downloadItems Output fields of CompTox data for requested substances. massError Error tolerance when searching for substances based on their monoisotopic mass. Only used for inputType = "MASS". timeout Time in seconds (default is 300) that the routine will wait for the download link to get ready. It will throw an error if it takes longer than the specified timeout. verify_ssl When set to FALSE the SSL certificate of the host (EPA) is not verified. Can also be set as option: options(ECOTOXr_verify_ssl = TRUE). Default is TRUE. ... Arguments passed on to httr::GET requests. Details The CompTox Chemicals Dashboard is a freely accessible on-line U.S. EPA database. It contains information on physico-chemical properties, environmental fate and transport, exposure, usage, in vivo toxicity, and in vitro bioassay of a wide range of substances. The function described here to search and retrieve records from the on-line database is experimental. This is because this feature is not formally supported by the EPA, and it may break in future incarnations of the on-line database.
The function forms an interface between R and the CompTox website and is therefore limited by the restrictions documented there. Value Returns a named list of dplyr::tibbles containing the search results for the requested output tables and fields. Results are unpolished and returned ‘as is’ by EPA’s web service. Author(s) <NAME> References Official US EPA CompTox website: https://comptox.epa.gov/dashboard/ <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>. & <NAME>. (2017), The CompTox Chemistry Dashboard: a community data resource for environmental chemistry. J Cheminform, 9(61) doi:10.1186/s13321-017-0247-6 See Also Other search-functions: search_ecotox(), websearch_ecotox() Examples ## Not run: ## search for substance name 'benzene' and CAS registration number 108-88-3 ## on https://comptox.epa.gov/: comptox_results <- websearch_comptox(c("benzene", "108-88-3")) ## search for substances with monoisotopic mass of 100+/-5: comptox_results2 <- websearch_comptox("100", inputType = "MASS", massError = 5) ## End(Not run) websearch_ecotox Search and retrieve toxicity records from the on-line database Description [Experimental] Functions to search and retrieve records from the on-line database at https://cfpub.epa.gov/ecotox/search.cfm. Usage websearch_ecotox( fields = list_ecotox_web_fields(), habitat = c("aquire", "terrestrial"), verify_ssl = getOption("ECOTOXr_verify_ssl"), ... ) list_ecotox_web_fields(...) Arguments fields A named list of characters, used to build a search for the on-line search query of https://cfpub.epa.gov/ecotox/search.cfm. Use list_ecotox_web_fields() to construct a valid list. habitat Use aquire (default) to retrieve aquatic data, terrestrial for, you’ve guessed it, terrestrial data. verify_ssl When set to FALSE the SSL certificate of the host (EPA) is not verified. Can also be set as option: options(ECOTOXr_verify_ssl = TRUE). Default is TRUE. ... In case of list_ecotox_web_fields() the dots can be used as search field values used to update the returned list of fields. In case of websearch_ecotox() the dots can be used to pass custom options to the underlying httr::POST() call. For available field names, use names(list_ecotox_web_fields()) Details The functions described here to search and retrieve records from the on-line database are experimental. This is because this feature is not formally supported by the EPA, and it may break in future iterations of the on-line database. The functions form an interface between R and the ECOTOX website and are therefore limited by its restrictions as described in the package documentation: ECOTOXr. The functions should therefore be used with caution. Value Returns a named list of dplyr::tibbles with search results. Results are unpolished and returned ‘as is’ by EPA’s web service. list_ecotox_web_fields() returns a named list with fields that can be used in a web search of EPA’s ECOTOX database, using websearch_ecotox(). Note IMPORTANT: when you plan to perform multiple adjacent searches (for instance in a loop), please insert a call to Sys.sleep(). This is to avoid overloading the server and getting your IP address banned from the server.
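To make the advice in the note above concrete, the sketch below runs several adjacent web searches in a loop with a pause between requests. It is a minimal sketch rather than a documented example; the field names are the ones used in the example that follows, and the two-second pause is an arbitrary choice.

```
## Not run:
## Polite repeated searches: pause between adjacent requests
## (the two-second pause is an arbitrary choice)
species <- c("daphnia magna", "skeletonema costatum")
results <- lapply(species, function(sp) {
  fields <- list_ecotox_web_fields(
    txAdvancedSpecEntries = sp,
    RBSPECSEARCHTYPE      = "EXACT")
  res <- websearch_ecotox(fields)
  Sys.sleep(2) ## avoid overloading EPA's server
  res
})
## End(Not run)
```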
Author(s) <NAME> See Also Other search-functions: search_ecotox(), websearch_comptox() Examples ## Not run: search_fields <- list_ecotox_web_fields( txAdvancedSpecEntries = "daphnia magna", RBSPECSEARCHTYPE = "EXACT", txAdvancedChemicalEntries = "benzene", RBCHEMSEARCHTYPE = "EXACT") search_results <- websearch_ecotox(search_fields) ## End(Not run) %>% Objects exported from other packages Description Objects imported and exported from other packages. See original documentation for more details. Details dplyr::%>%()
voteSim
cran
R
Package ‘voteSim’

June 23, 2023

Type Package
Title Generate Simulated Data for Voting Rules using Evaluations
Version 0.1.0
Maintainer <NAME> <<EMAIL>>
Description Provide functions to generate random simulated evaluations on candidates by voters for evaluation-based elections. Functions are based on several models for continuous or discrete evaluations.
License GPL-3
Encoding UTF-8
URL https://eric.univ-lyon2.fr/arolland/
Imports truncnorm, extraDistr, GenOrd
Suggests testthat (>= 3.0.0)
Config/testthat/edition 3
RoxygenNote 7.2.3
NeedsCompilation no
Author <NAME> [aut, cre], <NAME> [aut]
Repository CRAN
Date/Publication 2023-06-23 08:50:02 UTC

R topics documented:
distance, distance_to_pref, DistToScores, generate_beta, generate_beta_binomial, generate_binomial, generate_dirichlet, generate_discrete_copula_based, generate_multinom, generate_norm, generate_spatial, generate_unif_continuous, generate_unif_disc, icdf, preferences_to_ranks, rename_rows, ScoresToDist

distance        Distance formula

Description
Distance formula

Usage
distance(votant, candidats)

Arguments
votant array
candidats array

Value
distance

distance_to_pref        Distance to preferences

Description
Converts a distance matrix to preferences.

Usage
distance_to_pref(distance_matrix)

Arguments
distance_matrix distance_matrix

Value
mat_inverse

DistToScores        Distance to score

Description
Distance to score

Usage
DistToScores(dist, dim = 2, method = "linear", lambda = 5)

Arguments
dist int
dim dimension, int
method method, string
lambda lambda, int

Value
score

generate_beta        Generates a simulation of voting according to a beta law, returns voters preferences

Description
Generates a simulation of voting according to a beta law, returns voters preferences

Usage
generate_beta(
  n_voters,
  n_candidates,
  beta_a = 0.5,
  beta_b = 0.5,
  lambda = 0,
  min = 0,
  max = 1
)

Arguments
n_voters integer, represents the number of voters in the election
n_candidates integer, represents the number of candidates in the election
beta_a double, parameter of the Beta law (by default 0.5)
beta_b double, parameter of the Beta law (by default 0.5)
lambda double, alternative parameter of the Beta law
min int, the minimum value of the range of possible scores (by default 0)
max int, the maximum value of the range of possible scores (by default 1)

Value
scores

Examples
voting_situation <- generate_beta(n_voters=10, n_candidates=3, beta_a=1, beta_b=5)

generate_beta_binomial        Generate beta-binomial scores

Description
This function generates discrete scores following a beta-binomial distribution on a given scale

Usage
generate_beta_binomial(
  n_voters,
  n_candidates,
  min = 0,
  max = 10,
  alpha = 0.5,
  beta = 0.5
)

Arguments
n_voters integer, the number of voters to generate scores for.
n_candidates integer, the number of candidates to generate scores for.
min The minimum value of the distribution, by default 0
max The maximum value of the distribution, by default 10
alpha The first parameter of the beta-binomial distribution, by default 0.5
beta The second parameter of the beta-binomial distribution, by default 0.5

Value
A matrix of scores with 'n_candidates' rows and 'n_voters' columns.
Examples
voting_situation <- generate_beta_binomial(n_voters=10, n_candidates=3, max=7)

generate_binomial        Generate binomial scores

Description
This function generates discrete scores following a binomial distribution on a given scale

Usage
generate_binomial(n_voters, n_candidates, min = 0, max = 10, mean = 5)

Arguments
n_voters integer, the number of voters to generate scores for.
n_candidates integer, the number of candidates to generate scores for.
min The minimum value of the distribution, by default 0
max The maximum value of the distribution, by default 10
mean The mean value of the distribution, by default 5

Value
A matrix of scores with 'n_candidates' rows and 'n_voters' columns.

Examples
voting_situation <- generate_binomial(n_voters=10, n_candidates=3, min=0, max=7, mean=5)

generate_dirichlet        Generate Dirichlet scores

Description
This function generates scores following a Dirichlet distribution

Usage
generate_dirichlet(n_candidates, n_voters, probs = 0)

Arguments
n_candidates integer, the number of candidates to generate scores for.
n_voters integer, the number of voters to generate scores for.
probs A vector of size n_candidates corresponding to the parameters of the Dirichlet distribution. By default all values are equal to 1.

Value
A matrix of scores with 'n_candidates' rows and 'n_voters' columns.

Examples
voting_situation <- generate_dirichlet(n_voters=10, n_candidates=3, probs=c(0.5, 0.3, 0.2))

generate_discrete_copula_based        Discrete copula-based scores

Description
This function generates discrete scores following marginal distributions linked by a copula

Usage
generate_discrete_copula_based(
  n_candidates,
  n_voters,
  min = 0,
  max = 10,
  margins = list("default"),
  cor_mat = 0
)

Arguments
n_candidates integer, the number of candidates to generate scores for.
n_voters integer, the number of voters to generate scores for.
min The minimum value of the distribution, by default 0
max The maximum value of the distribution, by default 10
margins A list of n_candidates cumulative distribution vectors of length (max - min): the last value of the cumulative distribution, 1, should be omitted. By default the margin distributions are uniform.
cor_mat A matrix of correlation coefficients between the n_candidates distributions. By default all correlation coefficients are set alternately to 0.5 or -0.5.

Value
A matrix of scores with 'n_candidates' rows and 'n_voters' columns.

Examples
# Example for 3 candidates with binomial marginal distributions
min <- 0
max <- 7
n_candidates <- 3
distribution <- dbinom(x = min:max, size = max, prob = 0.7)
distribution_cumul <- cumsum(distribution)
distribution_cumul <- distribution_cumul[-length(distribution_cumul)]
margins <- matrix(rep(distribution_cumul, n_candidates), ncol = n_candidates)
margins <- as.list(as.data.frame(margins))
cor_mat <- matrix(c(1, 0.8, 0, 0.8, 1, 0, 0, 0, 1), ncol = n_candidates)
voting_situation <- generate_discrete_copula_based(3, 10, max = max, margins = margins, cor_mat = cor_mat)

generate_multinom        Generate multinomial scores

Description
This function generates discrete scores following a multinomial distribution on a given scale

Usage
generate_multinom(n_voters, n_candidates, max = 10, probs = 0)

Arguments
n_voters integer, the number of voters to generate scores for.
n_candidates integer, the number of candidates to generate scores for.
max The maximum value of the distribution, by default 10.
It also corresponds to the sum of scores over all the candidates.
probs A vector of size n_candidates corresponding to the parameters of the multinomial distribution. By default all values are equal to 1/n_candidates.

Value
A matrix of scores with 'n_candidates' rows and 'n_voters' columns.

Examples
voting_situation <- generate_multinom(n_voters=10, n_candidates=3, max=100, probs=c(0.5, 0.3, 0.2))

generate_norm        Generate truncated normal scores

Description
This function generates truncated normal scores using the 'rtruncnorm' function from the 'truncnorm' package.

Usage
generate_norm(n_candidates, n_voters, min = 0, max = 1, mean = 0.5, sd = 0.25)

Arguments
n_candidates The number of candidates to generate scores for.
n_voters The number of voters to generate scores for.
min The minimum value of the truncated normal distribution.
max The maximum value of the truncated normal distribution.
mean The mean of the truncated normal distribution.
sd The standard deviation of the truncated normal distribution.

Value
A matrix of scores with 'n_candidates' rows and 'n_voters' columns.

Examples
voting_situation <- generate_norm(n_voters=10, n_candidates=3, min=0, max=10, mean=0.7)

generate_spatial        Generate spatial simulation

Description
This function generates spatial data consisting of n_voters voters and n_candidates candidates. The spatial model is created by placing the candidates on a 2-dimensional plane according to the placement parameter, and then computing a distance matrix between voters and candidates. The distances are then transformed into scores using the score_method parameter. Finally, a plot of the candidates and voters is produced.

Usage
generate_spatial(
  n_candidates,
  n_voters,
  placement = "uniform",
  score_method = "linear",
  dim = 2
)

Arguments
n_candidates The number of candidates.
n_voters The number of voters.
placement The method used to place the candidates on the 2-dimensional plane. Must be either "uniform" or "beta". Default is "uniform".
score_method The method used to transform distances into scores. Must be either "linear" or "sigmoide". Default is "linear".
dim The dimension of the latent space (by default dim = 2).

Value
A matrix of scores.

Examples
generate_spatial(n_candidates = 5, n_voters = 100, placement = "uniform", score_method = "linear")

generate_unif_continuous        Generates a simulation of voting according to a uniform law, returns voters preferences

Description
Generates a simulation of voting according to a uniform law, returns voters preferences

Usage
generate_unif_continuous(n_voters, n_candidates, min = 0, max = 1)

Arguments
n_voters integer, represents the number of voters in the election
n_candidates integer, represents the number of candidates in the election
min int, the minimum value of the range of possible scores (by default 0)
max int, the maximum value of the range of possible scores (by default 1)

Value
scores

Examples
voting_situation <- generate_unif_continuous(n_voters=10, n_candidates=3, min=0, max=10)

generate_unif_disc        Generate uniform discrete scores

Description
This function generates uniform discrete scores on a given scale

Usage
generate_unif_disc(n_voters, n_candidates, min = 0, max = 10)

Arguments
n_voters integer, the number of voters to generate scores for.
n_candidates integer, the number of candidates to generate scores for.
min The minimum value of the distribution, by default 0
max The maximum value of the distribution, by default 10

Value
A matrix of scores with 'n_candidates' rows and 'n_voters' columns.
Examples
voting_situation <- generate_unif_disc(n_voters=10, n_candidates=3, min=0, max=5)

icdf        Generalized inverse of the empirical cumulative function

Description
Generalized inverse of the empirical cumulative function.

Usage
icdf(u, x, n)

Arguments
u a numeric vector of quantiles to be transformed.
x a numeric vector of data values.
n a positive integer specifying the length of the output vector.

Details
Computes the generalized inverse of the empirical cumulative function, which transforms quantiles u to the corresponding values of x based on the frequency distribution of x.

Value
a numeric vector of transformed quantiles.

preferences_to_ranks        Preferences to ranks

Description
Converts voters' preferences to ranks.

Usage
preferences_to_ranks(preferences)

Arguments
preferences voters preferences

Value
ranks

rename_rows        Rename rows

Description
Renames the rows of the voters' preferences.

Usage
rename_rows(preferences)

Arguments
preferences voters preferences

Value
preferences

ScoresToDist        Score to distance

Description
Score to distance

Usage
ScoresToDist(x, dim = 2, method = "linear")

Arguments
x score
dim dimension, int
method method, string

Value
distance
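Taken together, a short end-to-end sketch of how these pieces are typically combined; it assumes, as the shared 'preferences' vocabulary suggests, that preferences_to_ranks() accepts the candidates-by-voters score matrix produced by the generators:

library(voteSim)
set.seed(1)
# Simulate continuous evaluations for 3 candidates and 10 voters,
# then derive a ranking of the candidates for each voter.
scores <- generate_beta(n_voters = 10, n_candidates = 3, beta_a = 1, beta_b = 5)
ranks <- preferences_to_ranks(scores)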
broadsheet
npm
JavaScript
The Times Component Library
===

### Purpose

Home of The Times' `react`/`react native` components, using [react-native-web](https://github.com/necolas/react-native-web) to share across platforms

### Dev Environment

We require MacOS with [Node.js](https://nodejs.org) (version >=8 with npm v5), [yarn](https://yarnpkg.com) (latest) and [watchman](https://facebook.github.io/watchman) installed. Native development requires [Xcode](https://developer.apple.com/xcode) and [Android Studio](https://developer.android.com/studio/index.html). You can try without these requirements, but you'd be on your own.

Getting Started
---

1. Run `yarn` to install dependencies
2. Components can be seen running in a storybook:
   * web storybook
     1. `yarn storybook`
     2. go to <http://localhost:9001>
   * native storybook
     1. `yarn storybook-native` and leave it running
     2. `yarn ios` and/or `yarn android` to start the (sim|em)ulators
     3. go to <http://localhost:7007>

⚠️ Native Storybook ⚠️

In order to view the storybook on native, you'll need to fix a broken font, which requires [fontforge](http://fontforge.github.io/en-US/):

```
brew install fontforge
```

When you first get a local copy of the fonts you may see some warnings, which you can ignore.

Contributing
---

See the [CONTRIBUTING.md](https://github.com/newsuk/times-components/blob/HEAD/.github/CONTRIBUTING.md) for an extensive breakdown of the project

Readme
---

### Keywords

* react
* native
* web
cellOrigins
cran
R
Package ‘cellOrigins’

October 12, 2022

Type Package
Title Finds RNASeq Source Tissues Using In Situ Hybridisation Data
Version 0.1.3
Author <NAME>
Maintainer <NAME> <<EMAIL>>
Description Finds the most likely originating tissue(s) and developmental stage(s) of tissue-specific RNA sequencing data. The package identifies both pure transcriptomes and mixtures of transcriptomes. The most likely identity is found through comparisons of the sequencing data with high-throughput in situ hybridisation patterns. Typical uses are the identification of cancer cell origins, validation of cell culture strain identities, validation of single-cell transcriptomes, and validation of identity and purity of flow-sorting and dissection sequencing products.
License CC BY-NC-SA 4.0
Encoding UTF-8
LazyData true
Imports iterpc
NeedsCompilation no
Repository CRAN
Date/Publication 2020-06-05 09:00:02 UTC

R topics documented:
cellOrigins-package, BDGP_insitu_dmel_embryo, diagnosticPlots, discovery.log, discovery_probability, iterating_seqVsInsitu, prior.temporal_proximity_is_good, seqVsInsitu, vncMedianCoverage.tsv

cellOrigins-package        Finding the most likely originating tissue(s) and developmental stage(s) of RNASeq data

Description

cellOrigins compares RNASeq read coverages with high-throughput RNA in situ hybridisation patterns for transcriptome source identification and verification. The package can identify both pure transcriptomes and mixtures of transcriptomes. Typical uses are the identification of cancer cell origins, validation of cell culture strain identities, validation of single-cell transcriptomes, and validation of identity and purity of flow-sorting and dissection sequencing products.

The comparison of quantitative RNA sequencing coverage with thresholded, qualitative staining patterns is probabilistic. First, given the sequenced transcriptome, a prediction is made how likely each sequenced transcript would lead to a positive signal in a high-throughput in situ hybridisation experiment. The probability of staining increases with the logarithm of the sequencing coverage. This relationship was empirically found through a comparison between Drosophila embryo transcriptomes and RNA in situ staining results. Then, using Bayes's theorem, all the genes in the simulated and observed hybridisation patterns are compared. The pattern (or linear combination of patterns) with the highest posterior probability is identified as the most likely source.

Batteries included: the package contains a filtered high-confidence expression pattern dataset for Drosophila melanogaster embryos (based on BDGP insitu).

Typical use:

I GENERATE INPUT

Input is RNASeq mean FPKM (fragments per kilobase per million reads). Whole-gene FPKM may be used (as output by e.g. cufflinks/cuffquant), however assignment difficulties at overlapping transcripts and transcript isoforms reduce prediction quality. For best results use FPKM values calculated for the targets of the in situ hybridisation probes as described below:

Step 1) Generate masking bed file – this file is included for BDGP insitu in the extdata folder. For other species align probe sequences to the target genome using BLAT (https://genome.ucsc.edu/FAQ/FAQblat.html). Convert the best-scoring alignments to a masking bed file with psl_to_bed_best_score.pl (https://gist.github.com/davetan…). Then sort with bedtools sort (http://bedtools.readthedocs.org/).

Step 2) Get coverages.
Use Bedtools with the masking bed file to extract the mean sequencing coverage from wig files in the in situ probed regions:

bedtools map -a sorted_probes.bed -b sequenced.wig -o max -c 4 >insitu_high_confidence.tsv

Use the output tab separated values file as input for the function seqVsInsitu.

II SOURCE IDENTIFICATION

seqVsInsitu and iterating_seqVsInsitu calculate the probability for each in situ expression pattern that it is produced by the same gene expression patterns as the sequencing data. If you believe you have a mixed input, allow combined patterns from several target tissues. This is computationally expensive for more than two tissues. iterating_seqVsInsitu is faster through calculating all combinations for n==2 and then using only the top tissues for n==3. The top tissues of n==3 are then used for n==4, etc.

III INTERPRETATION

seqVsInsitu and iterating_seqVsInsitu return the terms or term combinations together with a log2 probability score for each. They also produce two diagnostic graphs. If multiple tissues contribute to the sample, the scatterplot should show a number of clusters at low n. As n increases, the clusters should merge into just two clusters at the ideal value of n. The line graph shows the log2 probability distribution.

discovery_probability: if RNASeq and in situ hybridisation data from the same tissue are paired, then with increasing FPKM the probability of RNA in situ discovery should increase logarithmically. If the tissue sources do not match, no such relationship should be visible. Using this function, if the tissue combination in the argument is a match, there should be a nearly linearly increasing relationship in the log-plot, with saturation at very high FPKM values only.

Details

Package: cellOrigins
Type: Package
Version: 1.0
Date: 2015-03-18
License: Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License

Author(s)

<NAME>

Maintainer: <NAME> <<EMAIL>>

References

Molnar, D 2015, 'Single embryo-single organ transcriptomics of Drosophila embryos', PhD thesis, University of Cambridge.

BDGP insitu: Tomancak, Genome Biol. 2007;8(7):R145.

BDGP insitu homepage: insitu.fruitfly.org/cgi-bin/ex/insitu.pl

Examples

## Not run:
pmoracle <- seqVsInsitu(transcriptomeMatrix)
rownames(pmoracle)[1:3]
diagnosticPlots(pmoracle)

## End(Not run)

##loading the BDGP insitu probe coordinates if not
##copied directly from the package extdata folder
system.file("extdata", "BDGP_insitu_probes.bed", package = "cellOrigins")

BDGP_insitu_dmel_embryo        Patterns of gene expression in Drosophila melanogaster embryos

Description

High-confidence dataset of embryonic Drosophila melanogaster RNA expression patterns at 6 developmental stages. This dataset was generated by filtering the "BDGP insitu" high-throughput RNA in situ hybridisation data set (Tomancak, Genome Biol. 2007;8(7):R145) for high-confidence results. Only genes useful for tissue identification were retained, and they thus represent gene expression fingerprints of organs.

Usage

data("BDGP_insitu_dmel_embryo")

Format

The format is:
num [1:2395, 1:337] 1 0 0 0 1 1 0 1 1 1 ...
- attr(*, "dimnames")=List of 2
..$ : chr [1:2395] "LD11379" "LD11394" "LD12611" "LD12613" ...
..$ : chr [1:337] "1|maternal" "2|pole cell" "3|pole cell" "4|germ cell" ...

Details

The expression data are collated in a matrix. The columns in the matrix are labeled stage|domain (e.g. "6|midgut").
The expression domains are denoted using the BDGP insitu controlled anatomical vocabulary. The rows are labeled with transcripts/probe names according to the BDGP insitu data set. The hybridisation probe genomic coordinates (Drosophila melanogaster genome release 5) are supplied as an additional file in this package.

The data set characterises the expression of 2395 RNA species. This is the differentially expressed, high-confidence subset of BDGP insitu. The starting point for dataset preparation was the published SQL database dump with annotations (http://insitu.fruitfly.org/insitu-mysql-dump/insitu.sql.gz). All in situ hybridisations for wild type Drosophila melanogaster embryos were extracted from this source. The reporter construct annotations were not used.

Only high-confidence expression patterns were retained. The gene expression in the BDGP insitu database was annotated by human curators from microscopic images. Depending on the quality of images and staining some expression patterns were easier to discern than others. The curators expressed their confidence in their expression call together with the annotation data of each gene. The filtering criteria for including a probe's expression pattern were that

1. the final call of the annotators was 'acceptable',
2. there was no remark about staining intensity (pointing to substandard quality),
3. the microscopic image was not excluded by quality control,
4. the annotation was displayed on the database's website,
5. the probe/staining was not flagged for repeating or for giving up, and
6. the final word of the annotators (a free text field) did not contain negative remarks like "weak", "nonspecific", "muddy", "poor", "dull", "spillover" or "suspicious" staining; lack of staining penetration; a call to repeat the staining; signs of doubt (e.g. "might", "perhaps", "maybe", "could", "not sure", "not confirmed", "unconvincing", "conflicting", "can't say", "failure", "wrong", "junk"); on camera problems; artefacts or transposons.
7. there was no annotation with "no staining" to avoid false negatives.

Genes with known ubiquitous expression (including faint-ubiquitous) at any stage were excluded. Genes for which there was no published probe sequence (approximately 300) were excluded. Most of the RNA in situ hybridisation probes originated from the Drosophila Gold Collection (http://www.fruitfly.org/EST/gold_co… and the Drosophila Gene Collection (http://www.fruitfly.org/DGC/index.html).

Annotated gene expression in each anatomical unit was propagated to all its anatomical subunits. For example "5|Malpighian tubule primordium" expression was propagated to "5|Malpighian tubule main body primordium" and "5|Malpighian tubule tip cell primordium". Only this made both the presence and the absence of staining meaningful. In the original data set gene expression was usually only annotated to the largest unit of expression, but not to its subunits. For instance if there was expression in the whole foregut, there was by necessity also expression in its pharynx subunit. However, in such a case expression in the pharynx was not commonly denoted in the original data set. Consequently some anatomic units had very few expressed genes associated. These genes were those that were exclusively expressed in those anatomical units and in no superior units.

Source

Tomancak, Genome Biol. 2007;8(7):R145
Examples

data(BDGP_insitu_dmel_embryo)

diagnosticPlots        Diagnostic plots to explore seqVsInsitu results

Description

Accepts the result of seqVsInsitu and iterating_seqVsInsitu and produces diagnostic plots. If the sequencing data fits to one or more terms or combinations of terms, then the scatterplot will cluster into foci. As the number of combined terms is increased the foci merge into fewer groups. A diagonal in the scatterplot is a sign of error.

Usage

diagnosticPlots(seqVsInsitu_results)

Arguments

seqVsInsitu_results Value of seqVsInsitu or iterating_seqVsInsitu.

Value

None.

Examples

fpath <- system.file("extdata", "vncMedianCoverage.tsv", package="cellOrigins")
vncExpression <- read.delim(file = fpath, header=FALSE, as.is=TRUE)
expression <- vncExpression$V2
names(expression) <- vncExpression$V1
result <- seqVsInsitu(expression, depth=1)
diagnosticPlots(result)
## Not run:
oracleResponse <- iterating_seqVsInsitu(expression, 3)
diagnosticPlots(oracleResponse)

## End(Not run)

discovery.log        Calculates discovery probability by RNA in situ hybridisation given a sequencing signal

Description

A set of functions with different assumptions on the probability of RNA in situ staining, given a sequencing coverage.

Usage

discovery.log(seq, saturate = 60, bias = 0.01)
discovery.linear(seq, saturate = 60, bias = 0.01)
discovery.identic(seq, saturate=Inf, bias=0)

Arguments

seq A vector of sequencing FPKMs.

saturate FPKM value from which on the maximum discovery probability (=0.99) is assumed (i.e. almost certain true positives). Value of 60 is default; it may need adjustment to the sequencing coverage.

bias Positive staining probability of 0 FPKM transcripts (i.e. false positives). Must be >0. Default is 0.01, an empirically determined value.

Details

1. discovery.log Uses a logarithmic saturation function for discovery probabilities. This relationship was empirically determined from sequencing and hybridisation data.

2. discovery.linear Linear saturation function for discovery probabilities.

3. discovery.identic Passes input through. Useful for comparing RNASeq vs. RNASeq data. Also for cases when the discovery probability for each transcript has been already determined in some other way.

Value

A vector of probabilities. Element names are preserved.

See Also

seqVsInsitu

Examples

plot(0:80, discovery.log(0:80), ylim=c(0,1.1), type="l",
     xlab="FPKM", ylab="p(discovery insitu hybridization)")
plot(0:80, discovery.linear(0:80), ylim=c(0,1.1), type="l",
     xlab="FPKM", ylab="p(discovery insitu hybridization)")

discovery_probability        In situ discovery probability as a function of FPKM

Description

Groups transcripts by expression strength and calculates for each such group the percentage of genes that gave a positive staining signal in the in situ hybridisation.

If the sequenced material matches the in situ hybridisation tissue, then weakly expressed genes in the sequenced material should rarely be in the in situ staining set of genes. Strongly expressed genes should correspondingly often also stain during hybridisation. Overall, if the match is not spurious, there should be a logarithmic dose-response relationship between sequencing read coverage and staining probability. In a plot of discovery probability against log(coverage) this shows as an approximately straight line (see example).

Usage

discovery_probability(seq_signature, terms, cut.points, insitu=cellOrigins::BDGP_insitu_dmel_embryo)

Arguments

seq_signature A named vector containing FPKM RNAseq data.
Each element name must correspond to the names used in the insitu argument. NAs are permitted.

terms A vector of anatomical terms which together are assumed to be the origin of the RNAseq data.

cut.points A vector of cut points for grouping of values. E.g. 0:3 denotes the bins 0<=x<1, 1<=x<2, 2<=x<3.

insitu Matrix with in situ hybridisation data. Rows are transcript names (same names as used for seq_signature) and columns are anatomical terms (possibly combined with developmental stages). 1 denotes staining of a particular transcript in a particular tissue, 0 denotes no staining. Defaults to BDGP_insitu_dmel_embryo, a staining dataset for Drosophila melanogaster embryos.

Value

A matrix with a row for each bin and three columns. The first column is the probability of discovery, the second the number of transcripts in the expression bin that were discovered by in situ hybridisation. The third column is the total number of transcripts in the bin.

See Also

iterating_seqVsInsitu, BDGP_insitu_dmel_embryo, discovery.log, discovery.linear, discovery.identic, prior.temporal_proximity_is_good, prior.all_equal, diagnosticPlots.

Examples

fpath <- system.file("extdata", "vncMedianCoverage.tsv", package="cellOrigins")
vncExpression <- read.delim(file = fpath, header=FALSE, as.is=TRUE)
expression <- vncExpression$V2
names(expression) <- vncExpression$V1
p <- discovery_probability(expression, "6|ventral nerve cord", c(0, 2^(0:10)))
plot(x=-1:9, y=p[,1], type="l", xlab="log2(FPKM)", ylab="p(discovery in situ)")

iterating_seqVsInsitu        Faster comparisons between mixed tissue-specific RNA sequencing data and high-throughput RNA in situ hybridisation

Description

The same functionality as seqVsInsitu but computationally less expensive if combinations of anatomical terms are tested.

The number of term combinations to test increases rapidly in seqVsInsitu. For example with 350 anatomical terms there are 61425 combinations of 2 terms and 7207200 combinations of 3 terms. This makes the exhaustive search of seqVsInsitu costly with depth>2. iterating_seqVsInsitu reduces the computational cost by initially testing the combinations of only a few terms. Then in each iteration the cardinality of the combinations is increased by one, but only the top anatomical terms of the previous iteration are used to reduce the number of tested combinations.

Usage

iterating_seqVsInsitu(seq_signature, upto_depth, use_topN = 50,
    start_depth = 2, insitu = cellOrigins::BDGP_insitu_dmel_embryo,
    insitu_discovery_function = discovery.log, saturate = 500,
    prior = prior.temporal_proximity_is_good)

Arguments

seq_signature A named vector containing FPKM RNAseq data. Each element name must correspond to the names used in the insitu argument. NAs are permitted.

upto_depth Number of terms to combine in the final iteration.

use_topN How many of the top results from the previous iteration to use to find the terms for the current iteration.

start_depth Number of terms to combine in the first iteration. All combinations of all terms are tested at this step.

insitu Matrix with RNA in situ hybridisation data. Rows are transcript names (queried by probes: same names as used for seq_signature) and columns are anatomical terms (possibly combined with developmental stages). If a probe stains in a particular tissue, the value is 1, otherwise 0. Defaults to BDGP_insitu_dmel_embryo, a staining dataset for fruit fly embryos.

insitu_discovery_function A function that converts FPKM values to the probability of discovery by RNA in situ hybridisation.
Values must be ]0..1[; 0 and 1 are not permitted. Defaults to discovery.log, an approximation of empirically determined discovery probabilities. Other available functions are discovery.linear and discovery.identic.

saturate Will be passed on to the insitu_discovery_function. The data set dependent maximum value at which the discovery probability should saturate. Defaults to 500 (FPKM).

prior A function that evaluates to the log2 prior probability of each anatomic term or combination of terms. Defaults to prior.temporal_proximity_is_good, which works well with BDGP_insitu_dmel_embryo. prior.all_equal assumes equal probability of all terms.

Value

Returns a named list that contains a matrix for each iteration like those produced by seqVsInsitu.

See Also

seqVsInsitu

Examples

## Not run:
fpath <- system.file("extdata", "vncMedianCoverage.tsv", package="cellOrigins")
vncExpression <- read.delim(file = fpath, header=FALSE, as.is=TRUE)
expression <- vncExpression$V2
names(expression) <- vncExpression$V1
oracleResponse <- iterating_seqVsInsitu(expression, 3)
head(oracleResponse[[1]])
head(oracleResponse[[2]])
diagnosticPlots(oracleResponse)

## End(Not run)

prior.temporal_proximity_is_good        Assign a prior probability to a combination of anatomical terms

Description

Accepts one or more anatomical terms and assigns to them a prior probability in the Bayesian sense. prior.all_equal assumes all terms and combinations to be equally probable. prior.temporal_proximity_is_good is meant mainly for use with BDGP_insitu_dmel_embryo if working with single or staged embryos. With this function the prior probability increases if the developmental stages in the tested terms are close together. The magnitude of the prior is scaled to the number of tested genes.

Usage

prior.temporal_proximity_is_good(term_pairs, insitu_signature)

Arguments

term_pairs A vector with anatomical terms that are tested in combination.

insitu_signature The RNA in situ hybridisation data set as produced by fusion of the expression patterns in term_pairs, and as it will be used for calculating the posterior probability in seqVsInsitu.

seqVsInsitu        Determine the most likely source(s) of a tissue-specific RNAseq dataset

Description

Compares tissue-specific RNA sequencing coverage with high-throughput RNA in situ hybridisation patterns of gene expression. All pattern combinations are tested in an exhaustive search.

Usage

seqVsInsitu(seq_signature, depth = 2,
    insitu = cellOrigins::BDGP_insitu_dmel_embryo,
    insitu_discovery_function = discovery.log, saturate = 500,
    prior = prior.temporal_proximity_is_good)

Arguments

seq_signature A named vector containing FPKM RNAseq data. Each element name must correspond to the names used in the insitu argument. NAs are permitted.

depth Number of RNA in situ expression patterns to combine to identify mixed populations. If 1, the expression patterns as given are used. Otherwise all combinations of depth expression patterns are tried. Each term combined with itself is also tested, i.e. pure populations will still be identified if depth>1. Defaults to 2. Depths > 2 can be slow. iterating_seqVsInsitu is much faster in these cases.

insitu Matrix with RNA in situ hybridisation results. Rows are transcript names (same names as used for seq_signature) and columns are anatomical terms (possibly combined with developmental stages). 1 denotes staining of a particular transcript in a particular tissue, 0 denotes no staining.
Defaults to BDGP_insitu_dmel_embryo, a staining dataset for Drosophila melanogaster embryos.

insitu_discovery_function A function that converts FPKM values to the probability of discovery by RNA in situ hybridisation. Probabilities must be ]0..1[; the values 0 and 1 are not permitted. Defaults to discovery.log, an approximation of empirically determined discovery probabilities. Other available functions are discovery.linear and discovery.identic.

saturate Will be passed on to the insitu_discovery_function. The data set dependent maximum value at which discovery probability should saturate. Defaults to 500 (FPKM).

prior A function that returns the log2 prior probability of each anatomic term or combination of terms. Defaults to prior.temporal_proximity_is_good, which works well with BDGP_insitu_dmel_embryo. prior.all_equal assumes that all terms are equally probable.

Details

First, the function calculates for each sequenced transcript how likely it is that it would produce an RNA in situ signal, given its expression strength. Using these staining probabilities and Bayes's rule the function then calculates the probability score for each of the given RNA in situ hybridisation patterns that it was produced by the same gene expression pattern as the sequenced transcriptome.

If depth>1 then the function identifies the origins of impure (mixed) sequenced material. For that it merges multiple RNA in situ hybridisation patterns for comparison with the sequenced data. This simulates the outcome of cell populations mixing.

seq_signature is best generated by taking the mean coverage of the regions which are actually tested with the RNA in situ hybridisation probes. This circumvents problems from misannotation, overlapping transcripts and faulty quantitation of individual transcripts from sequencing data. A protocol for generating such datasets is given in the package reference.

Value

A matrix with a row for each anatomical term (or combination of terms) and at least four columns. The terms are sorted by the posterior value and the top term is the most likely source of the RNAseq transcriptome.

posterior A log2 posterior probability score. The highest value is given to the most likely tissue of origin. The value is only meaningful in comparison with other values within the same result set.

prior Prior probability of the anatomical term(s), as given by the function prior.

likelihood.from.absence.insitu Probability score from all the genes where RNA in situ hybridisation did not report staining.

likelihood.from.presence.insitu Probability score from all the genes where in situ hybridisation reported staining.

remaining columns Number of additional expressed genes added to the in situ signature with each term in the tested combination. Sometimes additional terms add only very few or no new genes at all. Such tissue contributions are meaningless artefacts.

The posterior column is the sum of the other three named columns. The scores are proportional to the (unknown) probabilities of identity.

See Also

iterating_seqVsInsitu, BDGP_insitu_dmel_embryo, discovery.log, discovery.linear, discovery.identic, prior.temporal_proximity_is_good, prior.all_equal, diagnosticPlots.
Examples

fpath <- system.file("extdata", "vncMedianCoverage.tsv", package="cellOrigins")
vncExpression <- read.delim(file = fpath, header=FALSE, as.is=TRUE)
expression <- vncExpression$V2
names(expression) <- vncExpression$V1
result <- seqVsInsitu(expression, depth=1)

vncMedianCoverage.tsv        Drosophila melanogaster embryo ventral nerve cord RNASeq coverage

Description

Median RNAseq read coverages from 3 dissected embryonic (stage 11) fruit fly ventral nerve cords. The sequencing coverages are measured within the probing intervals of high-confidence BDGP insitu probes, as described in cellOrigins-package.

Format

The format is: probe name, coverage, chromosome, probe begin, probe end, strand.

Source

<NAME> 2015, 'Single embryo-single organ transcriptomics of Drosophila embryos', PhD thesis, University of Cambridge.

Examples

fpath <- system.file("extdata", "vncMedianCoverage.tsv", package="cellOrigins")
vncExpression <- read.delim(file = fpath, header=FALSE, as.is=TRUE)
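Taken together, a short sketch of the verification loop the documentation above describes: rank candidate tissues, inspect the diagnostics, then check the dose-response relationship for the top-ranked term (all functions as documented above):

## Not run:
fpath <- system.file("extdata", "vncMedianCoverage.tsv", package = "cellOrigins")
vncExpression <- read.delim(file = fpath, header = FALSE, as.is = TRUE)
expression <- vncExpression$V2
names(expression) <- vncExpression$V1

## rank candidate source tissues and inspect the diagnostics
result <- seqVsInsitu(expression, depth = 1)
diagnosticPlots(result)
top_term <- rownames(result)[1]

## a genuine match should show a near-linear dose-response in the log-plot
p <- discovery_probability(expression, top_term, c(0, 2^(0:10)))
plot(x = -1:9, y = p[, 1], type = "l",
     xlab = "log2(FPKM)", ylab = "p(discovery in situ)")

## End(Not run)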
github.com/hexops/vecty
go
Go
README
---

![](https://github.com/vecty/vecty-logo/raw/master/horizontal_color_tagline.png)

Vecty lets you build responsive and dynamic web frontends in Go using WebAssembly, competing with modern web frameworks like React & VueJS.

[![Build Status](https://travis-ci.org/hexops/vecty.svg?branch=master)](https://travis-ci.org/hexops/vecty) [![PkgGoDev](https://pkg.go.dev/badge/github.com/hexops/vecty)](https://pkg.go.dev/github.com/hexops/vecty) [![GoDoc](https://godoc.org/github.com/hexops/vecty?status.svg)](https://godoc.org/github.com/hexops/vecty) [![codecov](https://img.shields.io/codecov/c/github/hexops/vecty/master.svg)](https://codecov.io/gh/hexops/vecty)

### Benefits

* Go developers can be competitive frontend developers.
* Share Go code between your frontend & backend.
* Reusability by sharing components via Go packages so that others can simply import them.

### Goals

* *Simple*
  + Designed from the ground up to be easily mastered *by newcomers* (like Go).
* *Performant*
  + Efficient & understandable performance, small bundle sizes, same performance as raw JS/HTML/CSS.
* *Composable*
  + Nest components to form your entire user interface, separating them logically as you would any normal Go package.
* *Designed for Go (implicit)*
  + Written from the ground up asking the question *"What is the best way to solve this problem in Go?"*, not simply asking *"How do we translate $POPULAR_LIBRARY to Go?"*

### Features

* Compiles to WebAssembly (via the standard Go compiler).
* Small bundle sizes: 0.5 MB hello world (see section below).
* Fast expectation-based browser DOM diffing ('virtual DOM', but less resource usage).

### Current Status

**Vecty is currently considered to be an experimental work-in-progress.** Prior to widespread production use, we must meet our [v1.0.0 milestone](https://github.com/hexops/vecty/issues?q=is%3Aopen+is%3Aissue+milestone%3A1.0.0) goals, which are being completed slowly and steadily as contributors have time (Vecty is over 4 years in the making!). Early adopters may make use of it for real applications today as long as they are understanding and accepting of the fact that:

* APIs will change (maybe extensively).
* A number of important things are not ready:
  + Extensive documentation, examples and tutorials
  + URL-based component routing
  + Ready-to-use component libraries (e.g. material UI)
  + Server-side rendering
  + And more, see [milestone: v1.0.0](https://github.com/hexops/vecty/issues?q=is%3Aopen+is%3Aissue+milestone%3A1.0.0)
* The scope of Vecty is only ~80% defined currently.
* There are a number of important [open issues](https://github.com/hexops/vecty/issues).

For a list of projects currently using Vecty, see the [doc/projects-using-vecty.md](https://github.com/hexops/vecty/blob/v0.6.0/doc/projects-using-vecty.md) file.

### Near-zero dependencies

Vecty has nearly zero dependencies; it only relies on `reflect` from the Go stdlib. Through this, it is able to produce the smallest bundle sizes for Go frontend applications out there, limited only by the Go compiler itself:

| Example binary | Compiler | uncompressed | `gzip --best` | `brotli` |
| --- | --- | --- | --- | --- |
| `hellovecty.wasm` | `tinygo 0.14.0` | 252K | 97K | 81K |
| `hellovecty.wasm` | `go 1.15` | 2.1M | 587K | 443K |
| `markdown.wasm` | `go 1.15` | 3.6M | 1010K | 745K |
| `todomvc.wasm` | `go 1.15` | 2.9M | 825K | 617K |

Note: Bundle sizes above do not scale linearly with more code/dependencies in your Vecty project.
`hellovecty` is the smallest base-line bundle that the compiler can produce with just Vecty as a dependency; the other examples above pull in more of the Go standard library, and you would not e.g. have to pay that total cost again.

### Community

* Join us in the [#vecty](https://gophers.slack.com/messages/vecty/) channel on the [Gophers Slack](https://gophersinvite.herokuapp.com/)!

### Changelog

See the [doc/CHANGELOG.md](https://github.com/hexops/vecty/blob/v0.6.0/doc/CHANGELOG.md) file.

Documentation
---

### Index

* [func AddStylesheet(url string)](#AddStylesheet)
* [func RenderBody(body Component)](#RenderBody)
* [func RenderInto(selector string, c Component) error](#RenderInto)
* [func RenderIntoNode(node SyscallJSValue, c Component) error](#RenderIntoNode)
* [func Rerender(c Component)](#Rerender)
* [func SetTitle(title string)](#SetTitle)
* [type Applyer](#Applyer)
  + [func Attribute(key string, value interface{}) Applyer](#Attribute)
  + [func Class(class ...string) Applyer](#Class)
  + [func Data(key, value string) Applyer](#Data)
  + [func Key(key interface{}) Applyer](#Key)
  + [func MarkupIf(cond bool, markup ...Applyer) Applyer](#MarkupIf)
  + [func Namespace(uri string) Applyer](#Namespace)
  + [func Property(key string, value interface{}) Applyer](#Property)
  + [func Style(key, value string) Applyer](#Style)
  + [func UnsafeHTML(html string) Applyer](#UnsafeHTML)
* [type ClassMap](#ClassMap)
  + [func (m ClassMap) Apply(h *HTML)](#ClassMap.Apply)
* [type Component](#Component)
* [type ComponentOrHTML](#ComponentOrHTML)
* [type Copier](#Copier)
* [type Core](#Core)
  + [func (c *Core) Context() *Core](#Core.Context)
* [type ElementMismatchError](#ElementMismatchError)
  + [func (e ElementMismatchError) Error() string](#ElementMismatchError.Error)
* [type Event](#Event)
* [type EventListener](#EventListener)
  + [func (l *EventListener) Apply(h *HTML)](#EventListener.Apply)
  + [func (l *EventListener) PreventDefault() *EventListener](#EventListener.PreventDefault)
  + [func (l *EventListener) StopPropagation() *EventListener](#EventListener.StopPropagation)
* [type HTML](#HTML)
  + [func Tag(tag string, m ...MarkupOrChild) *HTML](#Tag)
  + [func Text(text string, m ...MarkupOrChild) *HTML](#Text)
  + [func (h *HTML) Key() interface{}](#HTML.Key)
  + [func (h *HTML) Node() SyscallJSValue](#HTML.Node)
* [type InvalidTargetError](#InvalidTargetError)
  + [func (e InvalidTargetError) Error() string](#InvalidTargetError.Error)
* [type KeyedList](#KeyedList)
  + [func (l KeyedList) Key() interface{}](#KeyedList.Key)
* [type Keyer](#Keyer)
* [type List](#List)
  + [func (l List) WithKey(key interface{}) KeyedList](#List.WithKey)
* [type MarkupList](#MarkupList)
  + [func Markup(m ...Applyer) MarkupList](#Markup)
  + [func (m MarkupList) Apply(h *HTML)](#MarkupList.Apply)
* [type MarkupOrChild](#MarkupOrChild)
  + [func If(cond bool, children ...ComponentOrHTML) MarkupOrChild](#If)
* [type Mounter](#Mounter)
* [type RenderSkipper](#RenderSkipper)
* [type SyscallJSValue](#SyscallJSValue)
* [type Unmounter](#Unmounter)

### Constants

This section is empty.

### Variables

This section is empty.
### Functions

#### func [AddStylesheet](https://github.com/hexops/vecty/blob/v0.6.0/dom.go#L1278)

```
func AddStylesheet(url string)
```

AddStylesheet adds an external stylesheet to the document.

#### func [RenderBody](https://github.com/hexops/vecty/blob/v0.6.0/dom.go#L1188)

```
func RenderBody(body Component)
```

RenderBody renders the given component as the document body. The given Component's Render method must return a "body" element or a panic will occur.

This function blocks forever in order to prevent the program from exiting, which would prevent components from rerendering themselves in the future.

It is a short-hand form for writing:

```
err := vecty.RenderInto("body", body)
if err != nil {
	panic(err)
}
select {} // run Go forever
```

#### func [RenderInto](https://github.com/hexops/vecty/blob/v0.6.0/dom.go#L1227)

```
func RenderInto(selector string, c Component) error
```

RenderInto renders the given component into the existing HTML element found by the CSS selector (e.g. "#id", ".class-name") by replacing it. If there is more than one element found, the first is used. If no element is found, an error of type InvalidTargetError is returned.

If the Component's Render method does not return an element of the same type, an error of type ElementMismatchError is returned.

#### func [RenderIntoNode](https://github.com/hexops/vecty/blob/v0.6.0/dom_native.go#L37)

```
func RenderIntoNode(node SyscallJSValue, c Component) error
```

RenderIntoNode renders the given component into the existing HTML element by replacing it.

If the Component's Render method does not return an element of the same type, an error of type ElementMismatchError is returned.

#### func [Rerender](https://github.com/hexops/vecty/blob/v0.6.0/dom.go#L805)

```
func Rerender(c Component)
```

Rerender causes the body of the given Component (i.e. the HTML returned by the Component's Render method) to be re-rendered.

If the Component has not been rendered before, Rerender panics. If the Component was previously unmounted, Rerender is a no-op.

Rerender operates efficiently by batching renders together. As a result, there is no guarantee that calls to Rerender will map 1:1 with calls to the Component's Render method. For example, two calls to Rerender may result in only one call to the Component's Render method.

#### func [SetTitle](https://github.com/hexops/vecty/blob/v0.6.0/dom.go#L1273)

```
func SetTitle(title string)
```

SetTitle sets the title of the document.

### Types

#### type [Applyer](https://github.com/hexops/vecty/blob/v0.6.0/markup.go#L67)

```
type Applyer interface {
	// Apply applies the markup to the given HTML element or text node.
	Apply(h *HTML)
}
```

Applyer represents some type of markup (a style, property, data, etc) which can be applied to a given HTML element or text node.

#### func [Attribute](https://github.com/hexops/vecty/blob/v0.6.0/markup.go#L120)

```
func Attribute(key string, value interface{}) Applyer
```

Attribute returns an Applyer which applies the given attribute to an element.

In most situations, you should use the Property function, or the prop subpackage (which is type-safe) instead.
There are only a few attributes (aria-*, role, etc) which do not have equivalent properties. Always opt for the property first, before relying on an attribute.

#### func [Class](https://github.com/hexops/vecty/blob/v0.6.0/markup.go#L142)

```
func Class(class ...string) Applyer
```

Class returns an Applyer which applies the provided classes. Subsequent calls to this function will append additional classes. To toggle classes, use ClassMap instead. Each class name must be passed as a separate argument.

#### func [Data](https://github.com/hexops/vecty/blob/v0.6.0/markup.go#L130)

```
func Data(key, value string) Applyer
```

Data returns an Applyer which applies the given data attribute.

#### func [Key](https://github.com/hexops/vecty/blob/v0.6.0/markup.go#L91)

```
func Key(key interface{}) Applyer
```

Key returns an Applyer that uniquely identifies the HTML element amongst its siblings. When used, all other sibling elements and components must also be keyed.

#### func [MarkupIf](https://github.com/hexops/vecty/blob/v0.6.0/markup.go#L232)

```
func MarkupIf(cond bool, markup ...Applyer) Applyer
```

MarkupIf returns nil if cond is false, otherwise it returns the given markup.

#### func [Namespace](https://github.com/hexops/vecty/blob/v0.6.0/markup.go#L262)

```
func Namespace(uri string) Applyer
```

Namespace is an Applyer which sets the namespace URI to associate with the created element. This is primarily used when working with, e.g., SVG.

See <https://developer.mozilla.org/en-US/docs/Web/API/Document/createElementNS#Valid_Namespace_URIs>

#### func [Property](https://github.com/hexops/vecty/blob/v0.6.0/markup.go#L102)

```
func Property(key string, value interface{}) Applyer
```

Property returns an Applyer which applies the given JavaScript property to an HTML element or text node. Generally, this function is not used directly but rather the prop and style subpackages (which are type safe) should be used instead.

To set styles, use the style subpackage or Style. Property panics if key is "style".

#### func [Style](https://github.com/hexops/vecty/blob/v0.6.0/markup.go#L79)

```
func Style(key, value string) Applyer
```

Style returns an Applyer which applies the given CSS style. Generally, this function is not used directly but rather the style subpackage (which is type safe) should be used instead.

#### func [UnsafeHTML](https://github.com/hexops/vecty/blob/v0.6.0/markup.go#L252)

```
func UnsafeHTML(html string) Applyer
```

UnsafeHTML is an Applyer which unsafely sets the inner HTML of an HTML element.

It is entirely up to the caller to ensure the input HTML is properly sanitized. It is akin to innerHTML in standard JavaScript and dangerouslySetInnerHTML in React, and is said to be unsafe because Vecty makes no effort to validate or ensure the HTML is safe for insertion in the DOM. If the HTML came from a user, for example, it would create a cross-site-scripting (XSS) exploit in the application.

The returned Applyer can only be applied to HTML, not vecty.Text, or else a panic will occur.
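As a quick illustration of how these Applyers combine in practice, here is a small sketch; it assumes the `elem` subpackage's type-safe `Div` helper, which is mentioned but not documented on this page:

```
package view

import (
	"github.com/hexops/vecty"
	"github.com/hexops/vecty/elem"
)

// Card renders <div class="card" style="padding: 8px" data-id="42">Hello</div>,
// adding the "active" class only when active is true.
func Card(active bool) *vecty.HTML {
	return elem.Div(
		vecty.Markup(
			vecty.Class("card"),                           // each class is a separate argument
			vecty.Style("padding", "8px"),                 // CSS key/value pair
			vecty.Data("id", "42"),                        // becomes the data-id attribute
			vecty.MarkupIf(active, vecty.Class("active")), // applied only if cond is true
		),
		vecty.Text("Hello"),
	)
}
```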
#### type [ClassMap](https://github.com/hexops/vecty/blob/v0.6.0/markup.go#L176)

```
type ClassMap map[string]bool
```

ClassMap is markup that specifies classes to be applied to an element if their boolean values are true.

#### func (ClassMap) [Apply](https://github.com/hexops/vecty/blob/v0.6.0/markup.go#L179)

```
func (m ClassMap) Apply(h *HTML)
```

Apply implements the Applyer interface.

#### type [Component](https://github.com/hexops/vecty/blob/v0.6.0/dom.go#L40)

```
type Component interface {
	// Render is responsible for building HTML which represents the component.
	//
	// If Render returns nil, the component will render as nothing (in reality,
	// a noscript tag, which has no display or action, and is compatible with
	// Vecty's diffing algorithm).
	Render() ComponentOrHTML

	// Context returns the components context, which is used internally by
	// Vecty in order to store the previous component render for diffing.
	Context() *Core
	// contains filtered or unexported methods
}
```

Component represents a single visual component within an application. To define a new component simply implement the Render method and embed the Core struct:

```
type MyComponent struct {
	vecty.Core
	... additional component fields (state or properties) ...
}

func (c *MyComponent) Render() vecty.ComponentOrHTML {
	... rendering ...
}
```

#### type [ComponentOrHTML](https://github.com/hexops/vecty/blob/v0.6.0/dom.go#L115)

```
type ComponentOrHTML interface {
	// contains filtered or unexported methods
}
```

ComponentOrHTML represents one of:

```
Component
*HTML
List
KeyedList
nil
```

An unexported method on this interface ensures at compile time that the underlying value must be one of these types.

#### type [Copier](https://github.com/hexops/vecty/blob/v0.6.0/dom.go#L71)

```
type Copier interface {
	// Copy returns a copy of the component.
	Copy() Component
}
```

Copier is an optional interface that a Component can implement in order to copy itself. Vecty must internally copy components, and it does so by either invoking the Copy method of the Component or, if the component does not implement the Copier interface, a shallow copy is performed.

TinyGo: If compiling your Vecty application using the experimental TinyGo support (<https://github.com/hexops/vecty/pull/243>) then all components must implement at least a shallow-copy Copier interface (this is not required otherwise):

```
func (c *MyComponent) Copy() vecty.Component {
	cpy := *c
	return &cpy
}
```

#### type [Core](https://github.com/hexops/vecty/blob/v0.6.0/dom.go#L12)

```
type Core struct {
	// contains filtered or unexported fields
}
```

Core implements the Context method of the Component interface, and is the core/central struct which all Component implementations should embed.

#### func (*Core) [Context](https://github.com/hexops/vecty/blob/v0.6.0/dom.go#L19)

```
func (c *Core) Context() *Core
```

Context implements the Component interface.

#### type [ElementMismatchError](https://github.com/hexops/vecty/blob/v0.6.0/dom.go#L1201)

```
type ElementMismatchError struct {
	// contains filtered or unexported fields
}
```

ElementMismatchError is returned when the element returned by a component does not match what is required for rendering.
#### func (ElementMismatchError) [Error](https://github.com/hexops/vecty/blob/v0.6.0/dom.go#L1205)

```
func (e ElementMismatchError) Error() string
```

#### type [Event](https://github.com/hexops/vecty/blob/v0.6.0/dom_native.go#L19)

```
type Event struct {
	Value  SyscallJSValue
	Target SyscallJSValue
}
```

Event represents a DOM event.

#### type [EventListener](https://github.com/hexops/vecty/blob/v0.6.0/markup.go#L7)

```
type EventListener struct {
	Name     string
	Listener func(*Event)
	// contains filtered or unexported fields
}
```

EventListener is markup that specifies a callback function to be invoked when the named DOM event is fired.

#### func (*EventListener) [Apply](https://github.com/hexops/vecty/blob/v0.6.0/markup.go#L33)

```
func (l *EventListener) Apply(h *HTML)
```

Apply implements the Applyer interface.

#### func (*EventListener) [PreventDefault](https://github.com/hexops/vecty/blob/v0.6.0/markup.go#L18)

```
func (l *EventListener) PreventDefault() *EventListener
```

PreventDefault prevents the default behavior of the event from occurring. See <https://developer.mozilla.org/en-US/docs/Web/API/Event/preventDefault>.

#### func (*EventListener) [StopPropagation](https://github.com/hexops/vecty/blob/v0.6.0/markup.go#L27)

```
func (l *EventListener) StopPropagation() *EventListener
```

StopPropagation prevents further propagation of the current event in the capturing and bubbling phases. See <https://developer.mozilla.org/en-US/docs/Web/API/Event/stopPropagation>.

#### type [HTML](https://github.com/hexops/vecty/blob/v0.6.0/dom.go#L141)

```
type HTML struct {
	// contains filtered or unexported fields
}
```

HTML represents some form of HTML: an element with a specific tag, or some literal text (a TextNode).

#### func [Tag](https://github.com/hexops/vecty/blob/v0.6.0/dom.go#L772)

```
func Tag(tag string, m ...MarkupOrChild) *HTML
```

Tag returns an HTML element with the given tag name. Generally, this function is not used directly but rather the elem subpackage (which is type safe) is used instead.

#### func [Text](https://github.com/hexops/vecty/blob/v0.6.0/dom.go#L785)

```
func Text(text string, m ...MarkupOrChild) *HTML
```

Text returns a TextNode with the given literal text. Because the returned HTML represents a TextNode, the text does not have to be escaped (arbitrary user input fed into this function will always be safely rendered).

#### func (*HTML) [Key](https://github.com/hexops/vecty/blob/v0.6.0/dom.go#L162)

```
func (h *HTML) Key() interface{}
```

Key implements the Keyer interface.

#### func (*HTML) [Node](https://github.com/hexops/vecty/blob/v0.6.0/dom_native.go#L28)

```
func (h *HTML) Node() SyscallJSValue
```

Node returns the underlying JavaScript Element or TextNode.

It panics if it is called before the DOM node has been attached, i.e. before the associated component's Mounter interface would be invoked.
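Putting EventListener to work, a hedged sketch of a small stateful component; it assumes the `event` and `elem` subpackages' `Click` and `Button` helpers, which are not documented on this page:

```
package view

import (
	"fmt"

	"github.com/hexops/vecty"
	"github.com/hexops/vecty/elem"
	"github.com/hexops/vecty/event"
)

// ClickCounter increments its count on every click and asks Vecty to
// re-render it; Rerender batches, so renders may be coalesced.
type ClickCounter struct {
	vecty.Core
	count int
}

func (c *ClickCounter) Render() vecty.ComponentOrHTML {
	return elem.Button(
		vecty.Markup(
			event.Click(func(e *vecty.Event) {
				c.count++
				vecty.Rerender(c)
			}),
		),
		vecty.Text(fmt.Sprintf("Clicked %d times", c.count)),
	)
}
```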
#### type [InvalidTargetError](https://github.com/hexops/vecty/blob/v0.6.0/dom.go#L1211) ``` type InvalidTargetError struct { // contains filtered or unexported fields } ``` InvalidTargetError is returned when the element targeted by a render is invalid because it is null or undefined. #### func (InvalidTargetError) [Error](https://github.com/hexops/vecty/blob/v0.6.0/dom.go#L1215) ``` func (e InvalidTargetError) Error() string ``` #### type [KeyedList](https://github.com/hexops/vecty/blob/v0.6.0/dom.go#L690) ``` type KeyedList struct { // contains filtered or unexported fields } ``` KeyedList is produced by calling List.WithKey. It has no public behaviour, and List members are no longer accessible once wrapped in this structure. #### func (KeyedList) [Key](https://github.com/hexops/vecty/blob/v0.6.0/dom.go#L706) ``` func (l KeyedList) Key() interface{} ``` Key implements the Keyer interface. #### type [Keyer](https://github.com/hexops/vecty/blob/v0.6.0/dom.go#L98) ``` type Keyer interface { // Key returns a value that uniquely identifies the component amongst its // siblings. The returned type must be a valid map key, or rendering will // panic. Key() interface{} } ``` Keyer is an optional interface that a Component can implement in order to uniquely identify the component amongst its siblings. If implemented, all siblings, both components and HTML, must also be keyed. Implementing this interface allows siblings to be removed or re-ordered whilst retaining state and improving render efficiency. #### type [List](https://github.com/hexops/vecty/blob/v0.6.0/dom.go#L674) ``` type List []ComponentOrHTML ``` List represents a list of components or HTML. #### func (List) [WithKey](https://github.com/hexops/vecty/blob/v0.6.0/dom.go#L684) ``` func (l List) WithKey(key interface{}) KeyedList ``` WithKey wraps the List in a Keyer using the given key. List members are inaccessible within the returned value. #### type [MarkupList](https://github.com/hexops/vecty/blob/v0.6.0/markup.go#L196) ``` type MarkupList struct { // contains filtered or unexported fields } ``` MarkupList represents a list of Applyer which is individually applied to an HTML element or text node. It may only be created through the Markup function. #### func [Markup](https://github.com/hexops/vecty/blob/v0.6.0/markup.go#L215) ``` func Markup(m ...Applyer) MarkupList ``` Markup wraps a list of Applyer which is individually applied to an HTML element or text node. #### func (MarkupList) [Apply](https://github.com/hexops/vecty/blob/v0.6.0/markup.go#L201) ``` func (m MarkupList) Apply(h *HTML) ``` Apply implements the Applyer interface. #### type [MarkupOrChild](https://github.com/hexops/vecty/blob/v0.6.0/markup.go#L48) ``` type MarkupOrChild interface { // contains filtered or unexported methods } ``` MarkupOrChild represents one of: ``` Component *HTML List KeyedList nil MarkupList ``` An unexported method on this interface ensures at compile time that the underlying value must be one of these types.
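A brief sketch (not from the upstream documentation) tying these pieces together: ClassMap is an Applyer, so it is wrapped in Markup and passed to Tag alongside child content. The `activeTab` helper and the class names are illustrative.

```
package ui

import "github.com/hexops/vecty"

// activeTab renders a div whose "active" class tracks a boolean via the
// ClassMap Applyer; "tab" is applied unconditionally.
func activeTab(selected bool) *vecty.HTML {
	return vecty.Tag("div",
		vecty.Markup(vecty.ClassMap{
			"tab":    true,
			"active": selected,
		}),
		vecty.Text("Settings"),
	)
}
```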
#### func [If](https://github.com/hexops/vecty/blob/v0.6.0/markup.go#L224) ``` func If(cond bool, children ...ComponentOrHTML) MarkupOrChild ``` If returns nil if cond is false, otherwise it returns the given children. #### type [Mounter](https://github.com/hexops/vecty/blob/v0.6.0/dom.go#L78) ``` type Mounter interface { // Mount is called after the component has been mounted, after the DOM node // has been attached. Mount() } ``` Mounter is an optional interface that a Component can implement in order to receive component mount events. #### type [RenderSkipper](https://github.com/hexops/vecty/blob/v0.6.0/dom.go#L128) ``` type RenderSkipper interface { // SkipRender is called with a copy of the Component made the last time its // Render method was invoked. If it returns true, rendering of the // component will be skipped. // // The previous component may be of a different type than this // RenderSkipper itself, thus a type assertion should be used and no action // taken if the type does not match. SkipRender(prev Component) bool } ``` RenderSkipper is an optional interface that Components can implement in order to short-circuit the reconciliation of a Component's rendered body. This is purely an optimization, and does not need to be implemented by Components for correctness. Without implementing this interface, only the difference between renders will be applied to the browser DOM. This interface allows components to bypass calculating the difference altogether and quickly state "nothing has changed, do not re-render". #### type [SyscallJSValue](https://github.com/hexops/vecty/blob/v0.6.0/dom_native.go#L16) ``` type SyscallJSValue jsObject ``` SyscallJSValue is an actual syscall/js.Value type under WebAssembly compilation. It is declared here just for purposes of testing Vecty under native 'go test', linting, and serving documentation under godoc.org. #### type [Unmounter](https://github.com/hexops/vecty/blob/v0.6.0/dom.go#L86) ``` type Unmounter interface { // Unmount is called before the component has been unmounted, before the // DOM node has been removed. Unmount() } ``` Unmounter is an optional interface that a Component can implement in order to receive component unmount events.
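To make the optional interfaces concrete, here is a small sketch (not from the upstream documentation) of one component implementing Keyer, Copier, and Mounter alongside the required Render method; the `Item` type and its fields are illustrative.

```
package ui

import "github.com/hexops/vecty"

// Item is a hypothetical keyed list entry demonstrating the optional
// interfaces documented above.
type Item struct {
	vecty.Core
	ID    string // used as the sibling key
	Label string
}

// Render satisfies the Component interface.
func (i *Item) Render() vecty.ComponentOrHTML {
	return vecty.Tag("li", vecty.Text(i.Label))
}

// Key satisfies Keyer: the value must be a valid map key, unique among siblings.
func (i *Item) Key() interface{} { return i.ID }

// Copy satisfies Copier with the shallow copy required under TinyGo.
func (i *Item) Copy() vecty.Component {
	cpy := *i
	return &cpy
}

// Mount satisfies Mounter; it runs after the DOM node has been attached.
func (i *Item) Mount() {}
```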
favnums
cran
R
Package ‘favnums’ October 13, 2022 Type Package Depends R (>= 2.10) Title A Dataset of Favourite Numbers Version 1.0.0 Date 2015-07-21 Author <NAME> [aut, cre], <NAME> [cph] Maintainer <NAME> <<EMAIL>> Description A dataset of favourite numbers, selected from an online poll of over 30,000 people by <NAME> (http://pages.bloomsbury.com/favouritenumber). License CC0 LazyData true NeedsCompilation no Repository CRAN Date/Publication 2015-07-22 16:15:47 R topics documented: favnums, favourite_numbers favnums A Dataset of Favourite Numbers Description This package provides a dataset of favourite numbers, selected from an online poll of over 30,000 people by <NAME>. See Also favourite_numbers and the original dataset. favourite_numbers Favourite Numbers based on an online poll Description A dataset containing the favourite numbers selected by over 30,000 people in an online poll. Usage favourite_numbers Format A data frame with 1123 rows and 4 variables: number the actual number chosen. May be NA in the case of "imaginary" numbers, or infinite values. frequency the number of times this number was chosen. percentage the percentage of user answers a particular number represents. description descriptions of the number’s importance, as provided by <NAME>. Often NA. Source http://pages.bloomsbury.com/favouritenumber Examples head(favourite_numbers)
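A short additional example (not from the package documentation), using only the documented columns to list the most popular choices in base R:

```
library(favnums)
data(favourite_numbers)

# Ten most frequently chosen numbers, using the documented
# "number", "frequency" and "percentage" columns.
top10 <- favourite_numbers[order(-favourite_numbers$frequency), ]
head(top10[, c("number", "frequency", "percentage")], 10)
```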
github.com/batchcorp/plumber
go
Go
README --- ![Plumber logo](https://github.com/batchcorp/plumber/raw/v1.16.0/assets/plumber_logo_full.png) [![Master build status](https://github.com/batchcorp/plumber/workflows/master/badge.svg)](https://github.com/batchcorp/plumber/actions/workflows/master-test.yaml) [![Go Report Card](https://goreportcard.com/badge/github.com/batchcorp/plumber)](https://goreportcard.com/report/github.com/batchcorp/plumber) plumber is a CLI devtool for inspecting, piping, messaging and redirecting data in message systems like Kafka, RabbitMQ, GCP PubSub and [many more](#readme-supported-messaging-systems). [1] The tool enables you to: * Safely view the contents of your data streams * Write plain or encoded data to any system * Route data from one place to another * Decode protobuf/avro/thrift/JSON data in real-time + Support for both Deep and Shallow protobuf envelope types + Support for google.protobuf.Any fields * Relay data to the [Batch platform](https://batch.sh) * Ship change data capture events to the [Batch platform](https://batch.sh) * [Replay events into a message system on your local network](https://docs.batch.sh/what-are/what-are-destinations/plumber-as-a-destination) * And *many* other features (for a full list: `plumber -h`) [1] It's like `curl` for messaging systems. ### Why do you need it? Messaging systems are black boxes - gaining visibility into what is passing through them is an involved process that requires you to write brittle consumer code that you will eventually throw away. `plumber` enables you to stop wasting time writing throw-away code - use it to look into your queues and data streams, use it to connect disparate systems together, or use it for debugging your event-driven systems. ### Demo ![Brief Demo](https://github.com/batchcorp/plumber/raw/v1.16.0/assets/demo.gif) ### Install #### Via brew ``` $ brew tap batchcorp/public $ brew install plumber ``` #### Manually * [macOS](https://github.com/batchcorp/plumber/releases/latest/download/plumber-darwin) * [Linux](https://github.com/batchcorp/plumber/releases/latest/download/plumber-linux) * [Windows](https://github.com/batchcorp/plumber/releases/latest/download/plumber-windows.exe) Plumber is a single binary; to install it, you simply need to download it, give it executable permissions, and call it from your shell. Here's an example set of commands to do this: ``` $ curl -L -o plumber https://github.com/batchcorp/plumber/releases/latest/download/plumber-darwin $ chmod +x plumber $ mv plumber /usr/local/bin/plumber ``` ### Usage #### Write messages ``` ❯ plumber write kafka --topics test --input foo INFO[0000] Successfully wrote message to topic 'test' backend=kafka INFO[0000] Successfully wrote '1' message(s) pkg=plumber ``` #### Read message(s) ``` ❯ plumber read kafka --topics test INFO[0000] Initializing (could take a minute or two) ... backend=kafka --- [Count: 1 Received at: 2021-11-30T12:51:32-08:00] --- +---+---+ | Key | NONE | | topic | test | | Offset | 8 | | Partition | 0 | | Header(s) | NONE | +---+---+ foo ``` NOTE: Add `-f` to perform a continuous read (like `tail -f`) #### Write messages via pipe **Write multiple messages** NOTE: Multiple messages are separated by a newline. 
``` $ cat mydata.txt line1 line2 line3 $ cat mydata.txt | plumber write kafka --topics foo INFO[0000] Successfully wrote message to topic 'foo' pkg=kafka/write.go INFO[0000] Successfully wrote message to topic 'foo' pkg=kafka/write.go INFO[0000] Successfully wrote message to topic 'foo' pkg=kafka/write.go ``` **Write each element of a JSON array as a message** ``` $ cat mydata.json [{"key": "value1"},{"key": "value2"}] $ cat mydata.json | plumber write kafka --topics foo --json-array INFO[0000] Successfully wrote message to topic 'foo' pkg=kafka/write.go INFO[0000] Successfully wrote message to topic 'foo' pkg=kafka/write.go ``` ### Documentation * [docs/examples.md](https://github.com/batchcorp/plumber/blob/master/docs/examples.md) for more usage examples * [docs/env.md](https://github.com/batchcorp/plumber/blob/master/docs/env.md) for a list of supported environment variables * [docs/metrics.md](https://github.com/batchcorp/plumber/blob/master/docs/metrics.md) for information on metrics that plumber exposes * [docs/server.md](https://github.com/batchcorp/plumber/blob/master/docs/server.md) for examples on running plumber in server mode ### Getting Help A full list of available flags can be displayed by using the `--help` flag after different parts of the command: ``` $ plumber --help $ plumber read --help $ plumber read kafka --help ``` ### Features * Encode & decode for multiple formats + Protobuf (Deep and [Shallow envelope](https://www.confluent.io/blog/spring-kafka-protobuf-part-1-event-data-modeling/#shallow-envelope)) + Avro + Thrift + Flatbuffer + GZip + JSON + JSONPB (protobuf serialized as JSON) + Base64 * `--continuous` support (i.e. `tail -f`) * Support for **most** messaging systems * Supports writing via string, file or pipe * Observe, relay and archive messaging data * Single-binary, zero-config, easy-install ### Hmm, what is this Batch thing? We are distributed system enthusiasts who started a company called [Batch](https://batch.sh). Our company focuses on solving data stream observability for complex systems and workflows. Our goal is to allow *everyone* to build asynchronous systems, without the fear of introducing too much complexity. While working on our company, we built a tool for reading and writing messages from our messaging systems and realized that there is a serious lack of tooling in this space. We wanted a Swiss Army knife type of tool for working with messaging systems (we use Kafka and RabbitMQ internally), so we created `plumber`. ### Why the name `plumber`? We consider ourselves "internet plumbers" of sorts - so the name seemed to fit :) ### Supported Messaging Systems * Kafka * RabbitMQ * RabbitMQ Streams * Google Cloud Platform PubSub * MQTT * Amazon Kinesis Streams **(NEW)** * Amazon SQS * Amazon SNS (Publishing) * ActiveMQ (STOMP protocol) * Azure Service Bus * Azure Event Hub * NATS * NATS Streaming (Jetstream) * Redis-PubSub * Redis-Streams * Postgres CDC (Change Data Capture) * MongoDB CDC (Change Data Capture) * Apache Pulsar * NSQ * KubeMQ NOTE: If your messaging tech is not supported - submit an issue and we'll do our best to make it happen! #### Kafka You need to ensure that you are using the same consumer group on all plumber instances. #### RabbitMQ Make sure that all instances of `plumber` are pointed to the same queue. #### Note on boolean flags In order to flip a boolean flag to `false`, prepend `--no` to the flag. i.e. `--queue-declare` is `true` by default. To make it false, use `--no-queue-declare`. 
### Tunnels `plumber` can now act as a replay destination (tunnel). Tunnel mode allows you to run an instance of plumber, on your local network, which will then be available in the Batch platform as a *replay destination*. This mitigates the need to make firewall changes to replay messages from a Batch collection back to your message bus. See <https://docs.batch.sh/what-are/what-are-destinations/plumber-as-a-destination> for full documentation. ### High Performance & High Availability `plumber` comes with a "server" mode which will cause plumber to operate as a highly available cluster. You can read more about "server mode" [here](https://docs.batch.sh/plumber/server-mode). Server mode examples can be found in [docs/server.md](https://github.com/batchcorp/plumber/blob/master/docs/server.md) ### Acknowledgments **Huge** shoutout to [jhump](https://github.com/jhump) for his excellent [protoreflect](https://github.com/jhump/protoreflect) library, without which `plumber` would not be anywhere *near* as easy to implement. *Thank you!* ### Release To push a new plumber release: 1. `git tag v0.18.0 master` 2. `git push origin v0.18.0` 3. Watch the github action 4. New release should be automatically created under <https://github.com/batchcorp/plumber/releases/> 5. Update release to include any relevant info 6. Update [homebrew](https://github.com/batchcorp/homebrew-public/raw/master/plumber.rb) SHA and version references ### Contribute We love contributions! Prior to sending us a PR, open an issue to discuss what you intend to work on. When ready to open a PR - add good tests and let's get this thing merged! For further guidance check out our [contributing guide](https://github.com/batchcorp/plumber/blob/master/CONTRIBUTING.md). Documentation --- There is no documentation for this package.
github.com/aliyun/alibaba-cloud-sdk-go
go
Go
README --- English | [简体中文](https://github.com/aliyun/alibaba-cloud-sdk-go/blob/v1.62.581/README-CN.md) [![](https://aliyunsdk-pages.alicdn.com/icons/AlibabaCloud.svg)](https://www.alibabacloud.com) Alibaba Cloud SDK for Go === [![Go](https://github.com/aliyun/alibaba-cloud-sdk-go/actions/workflows/go.yml/badge.svg)](https://github.com/aliyun/alibaba-cloud-sdk-go/actions/workflows/go.yml) [![codecov](https://codecov.io/gh/aliyun/alibaba-cloud-sdk-go/graph/badge.svg?token=kHbylWc7aV)](https://codecov.io/gh/aliyun/alibaba-cloud-sdk-go) [![FOSSA Status](https://app.fossa.io/api/projects/git%2Bgithub.com%2Faliyun%2Falibaba-cloud-sdk-go.svg?type=shield&issueType=license)](https://app.fossa.io/projects/git%2Bgithub.com%2Faliyun%2Falibaba-cloud-sdk-go?ref=badge_shield&issueType=license) Alibaba Cloud SDK for Go allows you to access Alibaba Cloud services such as Elastic Compute Service (ECS), Server Load Balancer (SLB), and CloudMonitor. You can access Alibaba Cloud services without the need to handle API related tasks, such as signing and constructing your requests. This document introduces how to obtain and call [Alibaba Cloud SDK for Go](https://github.com/aliyun/alibaba-cloud-sdk-go). ### Troubleshoot [Troubleshoot](https://troubleshoot.api.aliyun.com/?source=github_sdk) provides an OpenAPI diagnosis service that helps developers quickly locate problems and offers solutions based on the `RequestID` or `error message`. ### Online Demo The [Alibaba Cloud OpenAPI Developer Portal](https://api.aliyun.com/) provides the ability to call cloud product OpenAPIs online and dynamically generates SDK example code and a quick retrieval interface, which can significantly reduce the difficulty of using the cloud APIs. ### Requirements * Make sure your system meets the [Requirements](https://github.com/aliyun/alibaba-cloud-sdk-go/blob/v1.62.581/docs/0-Requirements-EN.md), such as a Go environment newer than 1.13.x. ### Installation Use `go get` to install the SDK: ``` go get -u github.com/aliyun/alibaba-cloud-sdk-go/sdk ``` ### Quick Examples Before you begin, you need to sign up for an Alibaba Cloud account and retrieve your [Credentials](https://usercenter.console.aliyun.com/#/manage/ak). #### Create Client ``` package main import "github.com/aliyun/alibaba-cloud-sdk-go/sdk" func main() { client, err := sdk.NewClientWithAccessKey("REGION_ID", "ACCESS_KEY_ID", "ACCESS_KEY_SECRET") if err != nil { // Handle exceptions panic(err) } } ``` #### ROA Request ``` package main import "github.com/aliyun/alibaba-cloud-sdk-go/sdk/requests" func main() { request := requests.NewCommonRequest() // Make a common request request.Method = "GET" // Set request method request.Product = "CS" // Specify product request.Domain = "cs.aliyuncs.com" // Location Service will not be enabled if the host is specified. For example, a service with a Certification type of Bearer Token should be specified request.Version = "2015-12-15" // Specify product version request.PathPattern = "/clusters/[ClusterId]" // Specify path rule with ROA-style request.Scheme = "https" // Set request scheme. Default: http request.ApiName = "DescribeCluster" // Specify product interface request.QueryParams["ClusterId"] = "123456" // Assign values to parameters in the path request.QueryParams["RegionId"] = "region_id" // Specify the requested regionId; if not specified, the client regionId is used, then the default regionId request.TransToAcsRequest() // Transform the CommonRequest into an acsRequest, which is used by the client. 
} ``` #### RPC Request ``` package main import "github.com/aliyun/alibaba-cloud-sdk-go/sdk/requests" func main() { request := requests.NewCommonRequest() // Make a common request request.Method = "POST" // Set request method request.Product = "Ecs" // Specify product request.Domain = "ecs.aliyuncs.com" // Location Service will not be enabled if the host is specified. For example, a service with a Certification type of Bearer Token should be specified request.Version = "2014-05-26" // Specify product version request.Scheme = "https" // Set request scheme. Default: http request.ApiName = "CreateInstance" // Specify product interface request.QueryParams["InstanceType"] = "ecs.g5.large" // Assign values to parameters in the path request.QueryParams["RegionId"] = "region_id" // Specify the requested regionId; if not specified, the client regionId is used, then the default regionId request.TransToAcsRequest() // Transform the CommonRequest into an acsRequest, which is used by the client. } ``` ### Documentation * [Requirements](https://github.com/aliyun/alibaba-cloud-sdk-go/blob/v1.62.581/docs/0-Requirements-EN.md) * [Installation](https://github.com/aliyun/alibaba-cloud-sdk-go/blob/v1.62.581/docs/1-Installation-EN.md) * [Client & Credentials](https://github.com/aliyun/alibaba-cloud-sdk-go/blob/v1.62.581/docs/2-Client-EN.md) * [SSL Verify](https://github.com/aliyun/alibaba-cloud-sdk-go/blob/v1.62.581/docs/3-Verify-EN.md) * [Proxy](https://github.com/aliyun/alibaba-cloud-sdk-go/blob/v1.62.581/docs/4-Proxy-EN.md) * [Timeout](https://github.com/aliyun/alibaba-cloud-sdk-go/blob/v1.62.581/docs/5-Timeout-EN.md) * [Debug](https://github.com/aliyun/alibaba-cloud-sdk-go/blob/v1.62.581/docs/6-Debug-EN.md) * [Logger](https://github.com/aliyun/alibaba-cloud-sdk-go/blob/v1.62.581/docs/7-Logger-EN.md) * [Concurrent](https://github.com/aliyun/alibaba-cloud-sdk-go/blob/v1.62.581/docs/8-Concurrent-EN.md) * [Asynchronous Call](https://github.com/aliyun/alibaba-cloud-sdk-go/blob/v1.62.581/docs/9-Asynchronous-EN.md) * [Package Management](https://github.com/aliyun/alibaba-cloud-sdk-go/blob/v1.62.581/docs/10-Package-Management-EN.md) * [Endpoint](https://github.com/aliyun/alibaba-cloud-sdk-go/blob/v1.62.581/docs/11-Endpoint-EN.md) ### Issues [Opening an Issue](https://github.com/aliyun/alibaba-cloud-sdk-go/issues/new). Issues not conforming to the guidelines may be closed immediately. ### Contribution Please make sure to read the [Contributing Guide](https://github.com/aliyun/alibaba-cloud-sdk-go/blob/v1.62.581/CONTRIBUTING.md) before making a pull request. ### References * [Alibaba Cloud Regions & Endpoints](https://developer.aliyun.com/endpoints) * [Alibaba Cloud OpenAPI Developer Portal](https://api.aliyun.com/) * [Go](https://golang.org/dl/) * [Latest Release](https://github.com/aliyun/alibaba-cloud-sdk-go/releases) ### License [![FOSSA Status](https://app.fossa.io/api/projects/git%2Bgithub.com%2Faliyun%2Falibaba-cloud-sdk-go.svg?type=large)](https://app.fossa.io/projects/git%2Bgithub.com%2Faliyun%2Falibaba-cloud-sdk-go?ref=badge_large) Documentation --- ### Overview This file is created for `dep ensure`.
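As a follow-up to the ROA and RPC examples above, here is a minimal sketch of actually dispatching a common request. It assumes the client's `ProcessCommonRequest` method, and uses the ECS `DescribeRegions` API purely for illustration.

```
package main

import (
	"fmt"

	"github.com/aliyun/alibaba-cloud-sdk-go/sdk"
	"github.com/aliyun/alibaba-cloud-sdk-go/sdk/requests"
)

func main() {
	client, err := sdk.NewClientWithAccessKey("REGION_ID", "ACCESS_KEY_ID", "ACCESS_KEY_SECRET")
	if err != nil {
		panic(err)
	}

	request := requests.NewCommonRequest()
	request.Method = "POST"
	request.Product = "Ecs"
	request.Domain = "ecs.aliyuncs.com"
	request.Version = "2014-05-26"
	request.ApiName = "DescribeRegions" // illustrative read-only API
	request.TransToAcsRequest()

	// ProcessCommonRequest signs and sends the request, returning the
	// raw HTTP response body on success.
	response, err := client.ProcessCommonRequest(request)
	if err != nil {
		panic(err)
	}
	fmt.Println(response.GetHttpContentString())
}
```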
anabel
cran
R
Package ‘anabel’ May 11, 2023 Title Analysis of Binding Events Version 3.0.1 Description Free software for fast and easy analysis of 1:1 molecular interaction studies. This package is suitable for high-throughput data analysis. Both the online app and the package are completely open source. You provide a table of sensograms, tell 'anabel' which method to use, and it takes care of all fitting details. The first two releases of 'anabel' were created and implemented as described in (<doi:10.1177/1177932218821383>, <doi:10.1093/database/baz101>). License MIT + file LICENSE Encoding UTF-8 RoxygenNote 7.2.3 VignetteBuilder knitr LazyData true Imports cli (>= 3.4), dplyr (>= 1.0), ggplot2 (>= 3.3), kableExtra (>= 1.3), minpack.lm (>= 1.2), openxlsx (>= 4.2), progress (>= 1.2), purrr (>= 0.3), qpdf, reshape2 (>= 1.4), rlang (>= 1.0), stats (>= 4.0), tidyr (>= 1.2), utils (>= 4.0) Depends R (>= 4.0) Suggests htmltools (>= 0.5), knitr (>= 1.36), rmarkdown (>= 2.17), testthat (>= 3.0.0), withr Config/testthat/edition 3 NeedsCompilation no Author <NAME> [aut, cre] (<https://orcid.org/0000-0002-0431-845X>), <NAME> [aut] (<https://orcid.org/0000-0001-9723-2809>), <NAME> [aut] (<https://orcid.org/0000-0002-0071-9344>) Maintainer <NAME> <<EMAIL>.de> Repository CRAN Date/Publication 2023-05-11 11:50:02 UTC R topics documented: convert_toMolar, MCK_dataset, MCK_dataset_drift, run_anabel, SCA_dataset, SCA_dataset_drift, SCK_dataset, SCK_dataset_decay convert_toMolar Convert a unit to molar Description Convert the value into molar. Usage convert_toMolar(val, unit) Arguments val numeric value of the analyte concentration unit character string indicating the unit from which the analyte concentration will be converted into molar. Details Supported units are: millimolar, micromolar, nanomolar and picomolar. The unit can be given by its full name or by its abbreviation, such as: nanomolar (nm), micromolar (mim), picomolar (pm), or millimolar (mm). The unit in either form is case insensitive. Value The value of the analyte concentration in molar Examples convert_toMolar(120, "nanomolar") convert_toMolar(120, "nm") convert_toMolar(120, "millimolar") convert_toMolar(120, "mm") convert_toMolar(120, "micromolar") convert_toMolar(120, "mim") convert_toMolar(120, "picomolar") convert_toMolar(120, "pm") MCK_dataset Simulated data of binding curves for MCK. Description A dataset containing 5 different binding curves of different analyte concentrations. Ka = 1e+7nM, Kd = 1e-2 Usage data(MCK_dataset) Format A data frame with 403 rows and 6 variables: Time time points of the binding interaction from start to end Conc..50.nM. binding curve generated with analyte concentration = 50nM Conc..16.7.nM. binding curve generated with analyte concentration = 16.7nM Conc..5.56.nM. binding curve generated with analyte concentration = 5.56nM Conc..1.85.nM. binding curve generated with analyte concentration = 1.85nM Conc..6.17e.1.nM. binding curve generated with analyte concentration = 0.617nM Source https://apps.cytivalifesciences.com/spr/ MCK_dataset_drift Simulated data of binding curves for MCK with linear drift. Description A dataset containing 5 different binding curves of different analyte concentrations with induced baseline drift = -0.01. Ka = 1e+7nM, Kd = 1e-2 Usage data(MCK_dataset_drift) Format A data frame with 403 rows and 6 variables: Time time points of the binding interaction from start to end Conc..50.nM. binding curve generated with analyte concentration = 50nM Conc..16.7.nM. 
binding curve generated with analyte concentration = 16.7nM Conc..5.56.nM. binding curve generated with analyte concentration = 5.56nM Conc..1.85.nM. binding curve generated with analyte concentration = 1.85nM Conc..6.17e.1.nM. binding curve generated with analyte concentration = 0.617nM Source https://apps.cytivalifesciences.com/spr/ run_anabel Analysis for 1:1 Biomolecular Interactions Description Analysis for 1:1 biomolecular interactions, using one of single-curve analysis (SCA), single-cycle kinetics (SCK) or multi-cycle kinetics (MCK) Usage run_anabel( input = NA, samples_names_file = NULL, tstart = NA, tend = NA, tass = NA, tdiss = NA, conc = NA, drift = FALSE, decay = FALSE, quiet = TRUE, method = "SCA", outdir = NA, generate_output = "none", generate_Report = FALSE, generate_Plots = FALSE, generate_Tables = FALSE, save_tables_as = "xlsx", debug_mode = FALSE ) Arguments input Data.frame, an excel, or a csv file (full path) - required samples_names_file An optional data.frame, an excel, or a csv file (full path) containing the sample names. If provided, it must have two columns, Name and ID. ID: names of columns in the input file; Name: sample names. tstart Numeric value of time’s starting point (default: minimum time point in the input) tend Numeric value of time’s ending point (default: maximum time point in the input) tass Numeric value of association time - required tdiss Numeric value of dissociation time - required conc Numeric value, the used concentration of the analyte; should be in molar (see convert_toMolar) - required drift Boolean value, to apply drift correction (default: FALSE) decay Boolean value, to apply surface decay correction (default: FALSE) quiet Boolean value, to suppress notifications, messages and warnings (default: TRUE) method a character string indicating which fitting method to be used. One of "SCA", "SCK", or "MCK", case insensitive (default: SCA). outdir Path and name of the output directory in which the results will be saved (default: NA) generate_output a character string indicating what kind of output will be generated. One of "none", "all", or "customized", case insensitive (default: none). If "all" or "customized" is given, outdir is required. If "customized" is given, at least one of generate_Plots, generate_Tables, or/and generate_Report must be set to TRUE generate_Report Boolean value, should anabel generate a summary report of the experiment? (default: FALSE) generate_Plots Boolean value, should anabel generate plots? (default: FALSE). generate_output must be set to "customized" generate_Tables Boolean value, should anabel generate tables? (default: FALSE) save_tables_as a character string indicating the data format to save the tables with; could be "xlsx", "csv", "txt" or "rds", case insensitive (default: xlsx) debug_mode Boolean value, anabel will return additional fitting details for each curve and the estimated response (default: FALSE) Value The default returned value is a list of two data frames: the kinetics table and the fit value of each time point (fit_raw). If debug_mode is set to TRUE, a third data frame will be returned containing the initial values of the parameters and the fitting function. References Determination of rate and equilibrium binding constants for macromolecular interactions by surface plasmon resonance. <NAME>, <NAME>, <NAME>, <NAME>, I Brooks Analytical biochemistry 212, 457-468 (1993) Analyzing a kinetic titration series using affinity biosensors. 
<NAME>, <NAME>, <NAME>, <NAME>, <NAME> Analytical Biochemistry 349, 136–147 (2006) Anabel: an online tool for the real-time kinetic analysis of binding events. <NAME>, <NAME>, <NAME>, <NAME> Bioinformatics and Biology Insights 13, 1-10 (2019) See Also convert_toMolar Examples # To analyse data using the MCK method: run_anabel( input = MCK_dataset, tstart = 1, tass = 21, tdiss = 140, conc = c(3.9E-9, 1.6E-8, 6.2E-8, 2.5E-7, 1.0e-6), method = "MCK" ) SCA_dataset Simulated data for the SCA method. Description A simulated dataset containing interaction information for three binding curves, all generated with concentration 5e-08. Usage data(SCA_dataset) Format A data frame with 453 rows and four variables: Time time points of the binding interaction from start till the experiment’s end Sample.A sample one with Ka = 1e+7nM, Kd = 1e-2 Sample.B sample two with Ka = 1e+6nM, Kd = 5e-2 Sample.C sample three with Ka = 1e+6nM, Kd = 1e-3 Source https://apps.cytivalifesciences.com/spr/ SCA_dataset_drift Simulated data for the SCA method with linear drift. Description A simulated dataset containing interaction information for three binding curves, all generated with concentration 5e-08, baseline drift = -0.019 Usage data(SCA_dataset_drift) Format A data frame with 453 rows and four variables: Time time points of the binding interaction from start till the experiment’s end Sample.A sample one with Ka = 1e+7nM, Kd = 1e-2 Sample.B sample two with Ka = 1e+6nM, Kd = 5e-2 Sample.C sample three with Ka = 1e+6nM, Kd = 1e-3 Source https://apps.cytivalifesciences.com/spr/ SCK_dataset Simulated data of different binding curves for the SCK method. Description A dataset containing one binding curve with a series of 5 titrations (5 injections), as follows: tass: 50, 220, 390, 560, 730; tdiss: 150, 320, 490, 660, 830; conc: 6.17e-10 1.85e-09 5.56e-09 1.67e-08 5.00e-08 M Usage data(SCK_dataset) Format A data frame with 1091 rows and 6 variables: Time time points of the binding interaction from start to end Sample.A sample containing 5 titrations with Ka = 1e+6nM, Kd = 1e-2 Source https://apps.cytivalifesciences.com/spr/ SCK_dataset_decay Simulated data of different binding curves for the SCK method with exponential decay. Description A dataset containing one binding curve with a series of 5 titrations (5 injections), as follows: tass: 50, 220, 390, 560, 730; tdiss: 150, 320, 490, 660, 830; conc: 6.17e-10 1.85e-09 5.56e-09 1.67e-08 5.00e-08 M Usage data(SCK_dataset_decay) Format A data frame with 1091 rows and 6 variables: Time time points of the binding interaction from start to end Sample.A sample containing 5 titrations with Ka = 1e+6nM, Kd = 1e-2 Source https://apps.cytivalifesciences.com/spr/
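As a companion to the MCK example above, a minimal sketch of a single-curve analysis on the bundled SCA_dataset. The analyte concentration (5e-08 M) is documented; the tass/tdiss time points below are illustrative placeholders, not values from the package documentation.

```
library(anabel)
data(SCA_dataset)

# Single-curve analysis of the three bundled curves; the analyte
# concentration (5e-08 M) is documented, while the time points below
# are illustrative placeholders.
run_anabel(
  input = SCA_dataset,
  tass = 50,   # assumed association start
  tdiss = 200, # assumed dissociation start
  conc = 5e-08,
  method = "SCA"
)
```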
wgsl-inline
rust
Rust
Crate wgsl_inline === WGSL Inline --- ![crates.io](https://img.shields.io/crates/v/wgsl-inline.svg) ![docs.rs](https://img.shields.io/docsrs/wgsl-inline) ![crates.io](https://img.shields.io/crates/l/wgsl-inline.svg) WGSL Inline adds a macro, `wgsl!`, which takes WGSL source code and validates it, reporting any errors to the Rust compiler. Example --- In your `Cargo.toml`: ``` wgsl-inline = "0.1" ``` Then in your Rust source: ``` mod my_shader { wgsl_inline::wgsl!{ struct VertexOutput { @builtin(position) position: vec4<f32>, @location(0) frag_uv: vec2<f32>, } @vertex fn main( @location(0) position: vec4<f32>, @location(1) uv: vec2<f32> ) -> VertexOutput { var output: VertexOutput; output.position = position; output.frag_uv = uv; return output; } } } fn main() { // The generated `SOURCE` constant contains the source code, // with the added guarantee that the shader is valid. println!("shader source: {}", my_shader::SOURCE); } ``` Error Checking --- Error scopes are propagated to the token in the macro that caused the error. That is to say, your IDE should be able to tell you exactly which bit of the shader code isn’t valid, without ever leaving Rust! For example, my IDE shows me something like the following: ![Image of a WGSL compile error in an IDE](https://raw.githubusercontent.com/LucentFlux/wgsl-inline/main/docs/images/compile_error.png) Minification --- This crate comes with a “minification” feature flag `minify`. When enabled, all of your included shader source code will be reduced in size at compile time (removing variable names and excess whitespace). This is intended to be used on release builds, stripping debug information to improve shader parsing startup time and decrease read latency. ``` wgsl-inline = { version = "0.1", features = ["minify"] } ``` Macros --- * wgsl
KZPlayground
cocoapods
Objective-C
kzplayground === Introduction --- Welcome to the documentation for **kzplayground**, a powerful and versatile iOS framework for integrating multimedia and interactive components into your applications. This documentation will provide you with detailed information on how to get started, the various features and functionalities offered by kzplayground, as well as examples and code snippets to help you implement it effectively. Installation --- In order to integrate kzplayground into your iOS project, follow these simple steps: 1. Open your project in Xcode. 2. Navigate to your project settings. 3. Click on your project target. 4. Select the “General” tab. 5. Scroll down to the “Frameworks, Libraries, and Embedded Content” section. 6. Click on the “+” button. 7. Search for “kzplayground” and select it from the list. 8. Choose the appropriate version and click “Add”. Getting Started --- Once kzplayground is successfully integrated into your project, you can begin using its features. Follow these steps to get started: 1. Import the kzplayground framework into your source file. 2. Create an instance of the kzplayground class. 3. Initialize the kzplayground object with the necessary configuration options. 4. Start utilizing the various methods and functionalities provided by kzplayground. Features --- kzplayground offers a wide range of features that enhance multimedia integration and interactivity in your iOS applications. Some of the key features include: * Interactive multimedia components * Audio and video playback support * Real-time streaming capabilities * Gestural and touch-based interactions * Content synchronization across devices * Dynamic content generation * User authentication and access control * And much more! Examples --- Here are a few examples to showcase some of the functionalities and capabilities of kzplayground: ### Example 1: Interactive Media Gallery Create an interactive media gallery that allows users to browse and view images, videos, and audio files. Implement features like pinch-to-zoom, swipe gestures, and play/pause buttons for multimedia control. `// Swift code snippet here` ### Example 2: Real-time Content Streaming Stream live content from a remote server, such as a video feed or audio stream, and display it in real-time on your iOS application. Implement features like pause/resume and volume control. `// Swift code snippet here` ### Example 3: Content Synchronization Enable content synchronization across multiple devices using kzplayground. Create an interactive experience where users can collaborate and interact with multimedia elements simultaneously across different devices. `// Swift code snippet here` Conclusion --- Congratulations! You are now equipped with the knowledge and resources to effectively integrate kzplayground into your iOS applications. Explore the extensive capabilities and features offered by kzplayground to create immersive and interactive experiences for your users.
geocoder
readthedoc
Markdown
geocoder 1.17.3 documentation Geocoder: Simple, Consistent === Release v1.17.3. ([Installation](index.html#install)) Simple and consistent geocoding library written in Python. Many online providers such as Google & Bing have geocoding services, but these providers do not include Python libraries, and their JSON responses differ from one another. It can be very difficult sometimes to parse a particular geocoding provider since each one of them has its own JSON schema. Here is a typical example of retrieving a Lat & Lng from Google using Python; things shouldn’t be this hard. ``` >>> import requests >>> url = 'https://maps.googleapis.com/maps/api/geocode/json' >>> params = {'sensor': 'false', 'address': 'Mountain View, CA'} >>> r = requests.get(url, params=params) >>> results = r.json()['results'] >>> location = results[0]['geometry']['location'] >>> location['lat'], location['lng'] (37.3860517, -122.0838511) ``` Now let's use Geocoder to do the same task. ``` >>> import geocoder >>> g = geocoder.google('Mountain View, CA') >>> g.latlng (37.3860517, -122.0838511) ``` Testimonials --- **<NAME>** Geocoder: great geocoding library by @DenisCarriere. **mcbetz** Very good companion for Geocoder. Glad to see Python getting more geo libraries for Non-GIS users. API Documentation --- If you are looking for information on a specific function, class or method, this part of the documentation is for you. ### API Overview #### Installation ##### PyPi Install To install Geocoder, simply: ``` $ pip install geocoder ``` ##### GitHub Install Installing the latest version from GitHub: ``` $ git clone https://github.com/DenisCarriere/geocoder $ cd geocoder $ python setup.py install ``` #### Examples Many properties are available once the geocoder object is created. ##### Forward Geocoding ``` >>> import geocoder >>> g = geocoder.google('Mountain View, CA') >>> g.geojson >>> g.json >>> g.wkt >>> g.osm ... ``` ##### Reverse Geocoding ``` >>> g = geocoder.google([45.15, -75.14], method='reverse') >>> g.city >>> g.state >>> g.state_long >>> g.country >>> g.country_long ... ``` ##### House Addresses ``` >>> g = geocoder.google("453 Booth Street, Ottawa ON") >>> g.housenumber >>> g.postal >>> g.street >>> g.street_long ... ``` ##### IP Addresses ``` >>> import geocoder >>> g = geocoder.ip('199.7.157.0') >>> g = geocoder.ip('me') >>> g.latlng >>> g.city ``` ##### Command Line Interface Basic usage with the CLI ``` $ geocode "Ottawa, ON" --provider bing ``` Saving results into a file ``` $ geocode "Ottawa, ON" >> ottawa.geojson ``` Reverse geocoding with the CLI ``` $ geocode "45.15, -75.14" --provider google --method reverse ``` Using JQ to query out a specific attribute ``` $ geocode "453 Booth Street" -p canadapost --output json | jq .postal ``` ### QGIS Field Calculator Using the QGIS Field Calculator, this will output WKT format. 
#### Output Field * **Name:** wkt * **Type:** Text, unlimited length (text) #### Function Editor ``` import geocoder @qgsfunction(group='Geocoder') def geocode(location, feature, parent): g = geocoder.google(location) return g.wkt ``` #### Expression Find the **geocode** expression in the **Geocoder** function list; the final result will look something like this: ``` geocode("address") ``` Once the wkt field is added, you can then save your document as a CSV format and in the **Layer Options** define the **GEOMETRY** = **AS_WKT**. Providers --- Detailed information about each individual provider within Geocoder. ### ArcGIS The World Geocoding Service finds addresses and places in all supported countries from a single endpoint. The service can find point locations of addresses, business names, and so on. The output points can be visualized on a map, inserted as stops for a route, or loaded as input for a spatial analysis. #### Geocoding ``` >>> import geocoder >>> g = geocoder.arcgis('Redlands, CA') >>> g.json ... ``` ##### Command Line Interface ``` $ geocode 'Redlands, CA' --provider arcgis ``` ##### Parameters * location: Your search location you want geocoded. * method: (default=geocode) Use the following: + geocode ##### References * [ArcGIS Geocode API](https://developers.arcgis.com/rest/geocode/api-reference/geocoding-find.htm) ### Baidu The Baidu Maps Geocoding API is a free, open API with a default quota of one million requests per day. #### Geocoding ``` >>> import geocoder # pip install geocoder >>> g = geocoder.baidu('中国', key='<API KEY>') >>> g.json ... ``` ##### Command Line Interface ``` $ geocode '中国' --provider baidu ``` ##### Environment Variables To make sure your API key is stored safely on your computer, you can define that API key using your system’s environment variables. ``` $ export BAIDU_API_KEY=<Secret API Key> ``` ##### Parameters * location: Your search location you want geocoded. * key: Baidu API key. * method: (default=geocode) Use the following: + geocode ##### References * [API Reference](http://developer.baidu.com/map/index.php?title=webapi/guide/webservice-geocoding) * [Get Baidu key](http://lbsyun.baidu.com/apiconsole/key) ### Bing The Bing™ Maps REST Services Application Programming Interface (API) provides a Representational State Transfer (REST) interface to perform tasks such as creating a static map with pushpins, geocoding an address, retrieving imagery metadata, or creating a route. Using Geocoder you can retrieve Bing’s geocoded data from Bing Maps REST Services. #### Geocoding ``` >>> import geocoder # pip install geocoder >>> g = geocoder.bing('Mountain View, CA', key='<API KEY>') >>> g.json ... ``` #### Reverse Geocoding ``` >>> import geocoder >>> g = geocoder.bing([45.15, -75.14], method='reverse') >>> g.json ... ``` ##### Command Line Interface ``` $ geocode 'Mountain View, CA' --provider bing $ geocode '45.15, -75.14' --provider bing --method reverse ``` ##### Environment Variables To make sure your API key is stored safely on your computer, you can define that API key using your system’s environment variables. 
``` $ export BING_API_KEY=<Secret API Key> ``` ##### Parameters * location: Your search location you want geocoded. * key: use your own API Key from Bing. * method: (default=geocode) Use the following: + geocode + reverse ##### References * [Bing Maps REST Services](http://msdn.microsoft.com/en-us/library/ff701714.aspx) ### CanadaPost The next generation of address finders, AddressComplete uses intelligent, fast searching to improve data accuracy and relevancy. Simply start typing a business name, address or Postal Code and AddressComplete will suggest results as you go. Using Geocoder you can retrieve CanadaPost’s geocoded data from the Address Complete API. #### Geocoding (Postal Code) ``` >>> import geocoder >>> g = geocoder.canadapost('453 Booth Street, Ottawa', key='<API KEY>') >>> g.postal 'K1R 7K9' >>> g.json ... ``` ##### Command Line Interface ``` $ geocode '453 Booth Street, Ottawa' --provider canadapost | jq .postal ``` ##### Environment Variables To make sure your API key is stored safely on your computer, you can define that API key using your system’s environment variables. ``` $ export CANADAPOST_API_KEY=<Secret API Key> ``` ##### Parameters * location: Your search location you want geocoded. * key: (optional) API Key from CanadaPost Address Complete. * language: (default=en) Output language preference. * country: (default=ca) Geofenced query by country. * method: (default=geocode) Use the following: + geocode ##### References * [Address Complete API](https://www.canadapost.ca/pca/) ### Google Geocoding is the process of converting addresses (like “1600 Amphitheatre Parkway, Mountain View, CA”) into geographic coordinates (like latitude 37.423021 and longitude -122.083739), which you can use to place markers or position the map. Using Geocoder you can retrieve Google’s geocoded data from the Google Geocoding API. #### Geocoding ``` >>> import geocoder >>> g = geocoder.google('Mountain View, CA') >>> g.json ... ``` #### Reverse Geocoding ``` >>> import geocoder >>> g = geocoder.google([45.15, -75.14], method='reverse') >>> g.json ... ``` #### Timezone ``` >>> import geocoder >>> g = geocoder.google([45.15, -75.14], method='timezone') >>> g.timeZoneName 'Eastern Daylight Time' >>> g.timeZoneId 'America/Toronto' >>> g.dstOffset 3600 >>> g.rawOffset -18000 ``` #### Component Filtering ``` >>> g = geocoder.google("Santa Cruz", components="country:ES") ``` Read more at Google’s Geocoding API: <https://developers.google.com/maps/documentation/geocoding/intro#ComponentFiltering> #### Elevation ``` >>> import geocoder >>> g = geocoder.google([45.15, -75.14], method='elevation') >>> g.meters 71.0 >>> g.feet 232.9 >>> g.resolution 38.17580795288086 ``` ##### Command Line Interface ``` $ geocode 'Mountain View, CA' --provider google $ geocode '45.15, -75.14' --provider google --method reverse $ geocode '45.15, -75.14' --provider google --method timezone $ geocode '45.15, -75.14' --provider google --method elevation ``` ##### Environment Variables To make sure your API key is stored safely on your computer, you can define that API key using your system’s environment variables. 
``` $ export GOOGLE_API_KEY=<Secret API Key> $ export GOOGLE_CLIENT=<Secret Client> $ export GOOGLE_CLIENT_SECRET=<Secret Client Secret> ``` ##### Parameters * location: Your search location you want geocoded. * key: Your Google developers free key. * language: 2-letter code of preferred language of returned address elements. * client: Google for Work client ID. Use with client_secret. Cannot use with key parameter * client_secret: Google for Work client secret. Use with client. * method: (default=geocode) Use the following: + geocode + reverse + timezone + elevation ##### References * [Google Geocoding API](https://developers.google.com/maps/documentation/geocoding/) ### Mapbox The Mapbox Geocoding API lets you convert location text into geographic coordinates (1600 Pennsylvania Ave NW → -77.0366,38.8971). #### Geocoding ``` >>> import geocoder >>> g = geocoder.mapbox('San Francisco, CA', access_token='<TOKEN>') >>> g.json ... ``` #### Reverse Geocoding ``` >>> import geocoder >>> latlng = [45.3, -105.1] >>> g = geocoder.mapbox(latlng, method='reverse') >>> g.json ... ``` #### Geocoding with Proximity Request feature data that best matches the input and is biased to the given {latitude} and {longitude} coordinates. In the example below, a query of “200 Queen Street” returns a subset of all relevant addresses in the world. By adding the proximity option, this subset can be biased towards a given area, returning a more relevant set of results. ``` >>> import geocoder >>> latlng = [45.3, -66.1] >>> g = geocoder.mapbox("200 Queen Street", proximity=latlng) >>> g.address "200 Queen St, Saint John, E2L 2X1, New Brunswick, Canada" >>> g = geocoder.mapbox("200 Queen Street") >>> g.address "200 Queen St W, Toronto, M5T 1T9, Ontario, Canada" ... ``` ##### Command Line Interface ``` $ geocode 'San Francisco, CA' --provider mapbox --out geojson $ geocode '45.15, -75.14' --provider mapbox --method reverse ``` ##### Environment Variables To make sure your API key is stored safely on your computer, you can define that API key using your system’s environment variables. ``` $ export MAPBOX_ACCESS_TOKEN=<Secret Access Token> ``` ##### Parameters * location: Your search location you want geocoded. * proximity: Search nearby [lat, lng]. * access_token: Use your own access token from Mapbox. * country: Filtering by country code {cc} ISO 3166 alpha 2. * method: (default=geocode) Use the following: + geocode + reverse ##### References * [Mapbox Geocoding API](https://www.mapbox.com/developers/api/geocoding/) * [Get Mapbox Access Token](https://www.mapbox.com/account) ### MapQuest The geocoding service enables you to take an address and get the associated latitude and longitude. You can also use any latitude and longitude pair and get the associated address. Three types of geocoding are offered: address, reverse, and batch. Using Geocoder you can retrieve MapQuest’s geocoded data from the Geocoding Service. #### Geocoding ``` >>> import geocoder >>> g = geocoder.mapquest('San Francisco, CA', key='<API KEY>') >>> g.json ... ``` #### Reverse Geocoding ``` >>> import geocoder >>> g = geocoder.mapquest([45.15, -75.14], method='reverse', key='<API KEY>') >>> g.json ... 
``` ##### Command Line Interface ``` $ geocode 'San Francisco, CA' --provider mapquest --out geojson ``` ##### Environment Variables To make sure your API key is stored safely on your computer, you can define that API key using your system’s environment variables. ``` $ export MAPQUEST_API_KEY=<Secret API Key> ``` ##### Parameters * location: Your search location you want geocoded. * method: (default=geocode) Use the following: + geocode ##### References * [Mapquest Geocoding Service](http://www.mapquestapi.com/geocoding/) * [Get Free API Key](https://developer.mapquest.com/plan_purchase/steps/business_edition/business_edition_free) ### MaxMind MaxMind’s GeoIP2 products enable you to identify the location, organization, connection speed, and user type of your Internet visitors. The GeoIP2 databases are among the most popular and accurate IP geolocation databases available. Using Geocoder you can retrieve MaxMind’s geocoded data from MaxMind’s GeoIP2. #### Geocoding (IP Address) ``` >>> import geocoder >>> g = geocoder.maxmind('199.7.157.0') >>> g.latlng [45.413140, -75.656703] >>> g.city 'Toronto' >>> g.json ... ``` #### Geocode your own IP To retrieve your own IP address, simply have ‘’ or ‘me’ as the input. ``` >>> import geocoder >>> g = geocoder.maxmind('me') >>> g.latlng [45.413140, -75.656703] >>> g.ip '199.7.157.0' >>> g.json ... ``` ##### Command Line Interface ``` $ geocode '8.8.8.8' --provider maxmind | jq . ``` ##### Parameters * location: Your search IP Address you want geocoded. * location: (optional) ‘me’ will return your current IP address’s location. * method: (default=geocode) Use the following: + geocode ##### References * [MaxMind’s GeoIP2](https://www.maxmind.com/en/geolocation_landing) ### Opencage OpenCage Geocoder: simple, easy, and open geocoding for the entire world. Our API combines multiple geocoding systems in the background, each optimized for different parts of the world and types of requests. We aggregate the best results from open data sources and algorithms so you don’t have to. Using Geocoder you can retrieve OpenCage’s geocoded data from OpenCage Geocoding Services. #### Geocoding ``` >>> import geocoder >>> g = geocoder.opencage('San Francisco, CA', key='<API Key>') >>> g.json ... ``` #### Reverse Geocoding ``` >>> import geocoder >>> g = geocoder.opencage([45.15, -75.14], method='reverse') >>> g.json ... ``` ##### Command Line Interface ``` $ geocode 'San Francisco, CA' --provider opencage --out geojson --key '<API Key>' | jq . ``` ##### Environment Variables To make sure your API key is stored safely on your computer, you can define that API key using your system’s environment variables. ``` $ export OPENCAGE_API_KEY=<Secret API Key> ``` ##### Parameters * location: Your search location you want geocoded. * key: (optional) use your own API Key from OpenCage. 
* method: (default=geocode) Use the following: + geocode ##### References * [OpenCage Geocoding Services](http://geocoder.opencagedata.com/api.html) ### OpenStreetMap Nominatim (from the Latin, ‘by name’) is a tool to search OSM data by name and address and to generate synthetic addresses of OSM points (reverse geocoding). Using Geocoder you can retrieve OSM’s geocoded data from Nominatim. #### Geocoding ``` >>> import geocoder >>> g = geocoder.osm('New York city') >>> g.json ... ``` #### Nominatim Server Setting up your own offline Nominatim server is possible, using Ubuntu 14.04 as your OS and following the [Nominatim Install](http://wiki.openstreetmap.org/wiki/Nominatim/Installation) instructions. This enables you to request as much geocoding as your little heart desires! ``` >>> url = 'http://localhost/nominatim/' >>> url = 'localhost' >>> g = geocoder.osm("New York City", url=url) >>> g.json ... ``` #### OSM Addresses The [addr tag](http://wiki.openstreetmap.org/wiki/Key:addr) is the prefix for several `addr:*` keys to describe addresses. This format is meant to be saved as a CSV and imported into JOSM. ``` >>> g = geocoder.osm('11 Wall Street, New York') >>> g.osm { "x": -74.010865, "y": 40.7071407, "addr:country": "United States of America", "addr:state": "New York", "addr:housenumber": "11", "addr:postal": "10005", "addr:city": "NYC", "addr:street": "Wall Street" } ``` ##### Command Line Interface ``` $ geocode 'New York city' --provider osm --out geojson | jq . $ geocode 'New York city' -p osm -o osm $ geocode 'New York city' -p osm --url localhost ``` ##### Parameters * location: Your search location you want geocoded. * url: Custom OSM Server (ex: localhost) * method: (default=geocode) Use the following: + geocode ##### References * [Nominatim](http://wiki.openstreetmap.org/wiki/Nominatim) * [Nominatim Install](http://wiki.openstreetmap.org/wiki/Nominatim/Installation) * [addr tag](http://wiki.openstreetmap.org/wiki/Key:addr) ### FreeGeoIP.net freegeoip.net provides a public HTTP API for software developers to search the geolocation of IP addresses. It uses a database of IP addresses that are associated to cities along with other relevant information like time zone, latitude and longitude. You’re allowed up to 10,000 queries per hour by default. Once this limit is reached, all of your requests will result in HTTP 403, forbidden, until your quota is cleared. #### Geocoding (IP Address) ``` >>> import geocoder >>> g = geocoder.freegeoip('99.240.181.199') >>> g.json ... ``` ##### Command Line Interface ``` $ geocode '99.240.181.199' --provider freegeoip ``` ##### Parameters * location: Your search location you want geocoded. * method: (default=geocode) Use the following: + geocode ##### References * [API Reference](http://freegeoip.net/) ### GeocodeFarm Geocode.Farm is one of the few providers that provide this highly specialized service for free. We also have affordable paid plans, of course, but our free services are of the same quality and provide the same results. The major difference between our affordable paid plans and our free API service is the limitations. 
On one of our affordable paid plans, your limit is set based on the plan you signed up for, starting at 25,000 query requests per day (API calls). On our free API offering, you are limited to 250 query requests per day (API calls). #### Geocoding[¶](#geocoding) ``` >>> import geocoder >>> g = geocoder.geocodefarm('Mountain View, CA') >>> g.json ... ``` #### Reverse Geocoding[¶](#reverse-geocoding) ``` >>> import geocoder >>> g = geocoder.geocodefarm([45.15, -75.14], method='reverse') >>> g.json ... ``` ##### Command Line Interface[¶](#command-line-interface) ``` $ geocode 'Mountain View, CA' --provider geocodefarm $ geocode '45.15, -75.14' --provider geocodefarm --method reverse ``` ##### Environment Variables[¶](#environment-variables) To make sure your API key is stored safely on your computer, you can define that API key using your system's environment variables. ``` $ export GEOCODEFARM_API_KEY=<Secret API Key> ``` ##### Parameters[¶](#parameters) * location: The string to search for. Usually a street address. If reverse, then it should be a latitude/longitude. * key: (optional) API Key. Only required for paid users. * lang: (optional) 2-digit language code to return results in. Currently only "en" (English) or "de" (German) are supported. * country: (optional) The country to return results in. Used for biasing purposes and may not fully filter results to this specific country. * method: (default=geocode) Use the following: + geocode + reverse ##### References[¶](#references) * [GeocodeFarm API Documentation](https://geocode.farm/geocoding/free-api-documentation/) ### Geocoder.ca[¶](#geocoder-ca) Geocoder.ca - A Canadian and US location geocoder. Using Geocoder you can retrieve Geolytica's geocoded data from Geocoder.ca. #### Geocoding[¶](#geocoding) ``` >>> import geocoder >>> g = geocoder.geolytica('Ottawa, ON') >>> g.json ... ``` ##### Command Line Interface[¶](#command-line-interface) ``` $ geocode 'Ottawa, ON' --provider geolytica ``` ##### Parameters[¶](#parameters) * location: Your search location you want geocoded. * auth: (optional) The authentication code for unthrottled service (premium API). * strictmode: Optionally you can prevent geocoder from making guesses on your input. * strict: Optional parameter for enabling strict parsing of free form location input. * method: (default=geocode) Use the following: + geocode ##### References[¶](#references) * [API Reference](http://geocoder.ca/?api=1) ### GeoOttawa[¶](#geoottawa) This data was collected in the field using GPS software on handheld computers. Not all information has been verified for accuracy and therefore should only be used in an advisory capacity. Forestry Services reserves the right to revise the data pursuant to further inspection/review. If you find any errors or omissions, please report them to 3-1-1. #### Geocoding[¶](#geocoding) ``` >>> import geocoder >>> g = geocoder.ottawa('453 Booth Street') >>> g.json ... ``` ##### Command Line Interface[¶](#command-line-interface) ``` $ geocode '453 Booth Street' --provider ottawa ``` ##### Parameters[¶](#parameters) * location: Your search location you want geocoded. * method: (default=geocode) Use the following: + geocode ##### References[¶](#references) * [GeoOttawa Map](http://maps.ottawa.ca/geoottawa/) ### HERE[¶](#here) Send a request to the geocode endpoint to find an address using a combination of country, state, county, city, postal code, district, street and house number.
Using Geocoder you can retrieve geocoded data from the HERE Geocoder REST API. #### Geocoding[¶](#geocoding) ``` >>> import geocoder >>> g = geocoder.here('Espoo, Finland') >>> g.json ... ``` #### Reverse Geocoding[¶](#reverse-geocoding) ``` >>> import geocoder >>> g = geocoder.here([45.15, -75.14], method='reverse') >>> g.json ... ``` ##### Using API Key[¶](#using-api-key) If you want to use your own app_id & app_code, you must register an app at the [HERE Developer](https://developer.here.com/geocoder) site. ``` >>> g = geocoder.here('Espoo, Finland', app_id='<YOUR APP ID>', app_code='<YOUR APP CODE>') ``` ##### Command Line Interface[¶](#command-line-interface) ``` $ geocode 'Espoo, Finland' --provider here $ geocode '45.15, -75.14' --provider here --method reverse ``` ##### Environment Variables[¶](#environment-variables) To make sure your credentials are stored safely on your computer, you can define them using your system's environment variables. ``` $ export APP_ID=<Secret APP ID> $ export APP_CODE=<Secret APP Code> ``` ##### Parameters[¶](#parameters) * location: Your search location you want geocoded. * app_code: (optional) use your own Application Code from HERE. * app_id: (optional) use your own Application ID from HERE. * method: (default=geocode) Use the following: + geocode + reverse ##### References[¶](#references) * [HERE Geocoder REST API](https://developer.here.com/rest-apis/documentation/geocoder) ### IP Info.io[¶](#ip-info-io) Use the IPInfo.io IP lookup API to quickly and simply integrate IP geolocation into your script or website. Save yourself the hassle of setting up local GeoIP libraries and having to remember to regularly update the data. #### Geocoding (IP Address)[¶](#geocoding-ip-address) ``` >>> import geocoder >>> g = geocoder.ipinfo('199.7.157.0') >>> g.latlng [45.413140, -75.656703] >>> g.city 'Toronto' >>> g.json ... ``` #### Geocode your own IP[¶](#geocode-your-own-ip) To retrieve your own IP address, simply pass '' or 'me' as the input. ``` >>> import geocoder >>> g = geocoder.ipinfo('me') >>> g.latlng [45.413140, -75.656703] >>> g.ip '199.7.157.0' >>> g.json ... ``` ##### Command Line Interface[¶](#command-line-interface) ``` $ geocode '199.7.157.0' --provider ipinfo | jq . ``` ##### Parameters[¶](#parameters) * location: Your search location you want geocoded. * location: (optional) '' will return your current IP address's location. * method: (default=geocode) Use the following: + geocode ##### References[¶](#references) * [IpinfoIo](https://ipinfo.io) ### Tamu[¶](#tamu) The Texas A&M Geoservices Geocoding API provides output including latitude and longitude and numerous census data values. An API key linked to an account with Texas A&M is required. Tamu's API differs from the other geocoders in this package in that it requires the street address, city, state, and US zipcode to be passed in separately, rather than as a single string. Because of this requirement, the "location", "city", "state", and "zipcode" parameters are all required when using the Tamu provider. The "location" parameter should contain only the street address of the location. #### Geocoding[¶](#geocoding) ``` >>> import geocoder >>> g = geocoder.tamu( '595 Market St', city='San Francisco', state='California', zipcode='94105', key='demo') >>> g.json ... 
``` ##### Command Line Interface[¶](#command-line-interface) ``` $ geocode '595 Market St' --provider tamu --city 'San Francisco' --state CA --zipcode 94105 --key <Secret API Key> ``` ##### Environment Variables[¶](#environment-variables) To make sure your API key is stored safely on your computer, you can define that API key using your system's environment variables. ``` $ export TAMU_API_KEY=<Secret API Key> ``` ##### Parameters[¶](#parameters) * location: The street address of the location you want geocoded. * city: The city of the location to geocode. * state: The state of the location to geocode. * zipcode: The zipcode of the location to geocode. * key: use your own API Key from Tamu. * method: (default=geocode) Use the following: + geocode ##### Census Output Fields[¶](#census-output-fields) Note: "FIPS" stands for "Federal Information Processing Standards" * census_block: Census Block value for location * census_tract: Census Tract value for location * census_county_fips: Census County FIPS value * census_cbsa_fips: Census Core Base Statistical Area FIPS value * census_mcd_fips: Census Minor Civil Division FIPS value * census_msa_fips: Census Metropolitan Statistical Area FIPS value * census_place_fips: Census Place FIPS value * census_state_fips: Census State FIPS value * census_year: Census Year from which these values originated ##### References[¶](#references) * [Tamu Geocoding API](http://geoservices.tamu.edu/Services/Geocode/WebService/) ### TomTom[¶](#tomtom) The Geocoding API gives developers access to TomTom's first-class geocoding service. Developers may call this service through either a single or batch geocoding request. This service supports global coverage, with house number level matching in over 50 countries, and address point matching where available. Using Geocoder you can retrieve TomTom's geocoded data from the Geocoding API. #### Geocoding[¶](#geocoding) ``` >>> import geocoder >>> g = geocoder.tomtom('San Francisco, CA', key='<API KEY>') >>> g.json ... ``` ##### Command Line Interface[¶](#command-line-interface) ``` $ geocode 'San Francisco, CA' --provider tomtom --out geojson ``` ##### Environment Variables[¶](#environment-variables) To make sure your API key is stored safely on your computer, you can define that API key using your system's environment variables. ``` $ export TOMTOM_API_KEY=<Secret API Key> ``` ##### Parameters[¶](#parameters) * location: Your search location you want geocoded. * key: use your own API Key from TomTom. * method: (default=geocode) Use the following: + geocode ##### References[¶](#references) * [TomTom Geocoding API](http://developer.tomtom.com/products/geocoding_api) ### What3Words[¶](#what3words) What3Words is a global grid of 57 trillion 3m x 3m squares. Each square has a unique 3-word address that people can find and communicate quickly, easily, and without ambiguity. **Addressing the world** Everyone and everywhere now has an address #### Geocoding (3 Words)[¶](#geocoding-3-words) ``` >>> import geocoder >>> g = geocoder.w3w('embedded.fizzled.trial') >>> g.json ... ``` #### Reverse Geocoding[¶](#reverse-geocoding) ``` >>> import geocoder >>> g = geocoder.w3w([45.15, -75.14], method='reverse') >>> g.json ... 
``` ##### Command Line Interface[¶](#command-line-interface) ``` $ geocode 'embedded.fizzled.trial' --provider w3w $ geocode '45.15, -75.14' --provider w3w --method reverse ``` ##### Environment Variables[¶](#environment-variables) For safe storage of your API key on your computer, you can define that API key using your system's environment variables. ``` $ export W3W_API_KEY=<Secret API Key> ``` ##### Parameters[¶](#parameters) * location: Your search location you want geocoded. * key: use your own API Key from What3Words. * method: (default=geocode) Use the following: + geocode + reverse ##### References[¶](#references) * [API Reference](http://developer.what3words.com/) * [Get W3W key](http://developer.what3words.com/api-register/) ### Yahoo[¶](#yahoo) Yahoo PlaceFinder is a geocoding Web service that helps developers make their applications location-aware by converting street addresses or place names into geographic coordinates (and vice versa). Using Geocoder you can retrieve Yahoo's geocoded data from Yahoo BOSS Geo Services. #### Geocoding[¶](#geocoding) ``` >>> import geocoder >>> g = geocoder.yahoo('San Francisco, CA') >>> g.json ... ``` ##### Command Line Interface[¶](#command-line-interface) ``` $ geocode 'San Francisco, CA' --provider yahoo --out geojson ``` ##### Parameters[¶](#parameters) * location: Your search location you want geocoded. * method: (default=geocode) Use the following: + geocode ##### References[¶](#references) * [Yahoo BOSS Geo Services](https://developer.yahoo.com/boss/geo/) ### Yandex[¶](#yandex) Yandex (Russian: Яндекс) is a Russian Internet company which operates the largest search engine in Russia, with about 60% market share in that country. The Yandex home page has been rated as the most popular website in Russia. #### Geocoding[¶](#geocoding) ``` >>> import geocoder >>> g = geocoder.yandex('Moscow Russia') >>> g.json ... ``` #### Reverse Geocoding[¶](#reverse-geocoding) ``` >>> import geocoder >>> g = geocoder.yandex([55.95, 37.96], method='reverse') >>> g.json ... ``` ##### Command Line Interface[¶](#command-line-interface) ``` $ geocode 'Moscow Russia' --provider yandex --out geojson $ geocode '55.95, 37.96' --provider yandex --method reverse ``` ##### Parameters[¶](#parameters) * location: Your search location you want geocoded. * lang: Choose one of the following languages: > + **ru-RU** — Russian (by default) > + **uk-UA** — Ukrainian > + **be-BY** — Belarusian > + **en-US** — American English > + **en-BR** — British English > + **tr-TR** — Turkish (only for maps of Turkey) > * kind: Type of toponym (only for reverse geocoding): > + **house** — house or building > + **street** — street > + **metro** — subway station > + **district** — city district > + **locality** — locality (city, town, village, etc.) > * method: (default=geocode) Use the following: > + geocode > + reverse ##### References[¶](#references) * [Yandex API Reference](http://api.yandex.com/maps/doc/geocoder/desc/concepts/input_params.xml) ### TGOS[¶](#tgos) TGOS Map is the official map service of Taiwan. #### Geocoding[¶](#geocoding) ``` >>> import geocoder # pip install geocoder >>> g = geocoder.tgos('台北市內湖區內湖路一段735號', key='<API KEY>') >>> g.json ... ``` ##### Command Line Interface[¶](#command-line-interface) ``` $ geocode '台北市內湖區內湖路一段735號' --provider tgos ``` ##### Parameters[¶](#parameters) * location: Your search location you want geocoded. * key: Use your own API Key from TGOS.
* method: (default=geocode) Use the following: + geocode * language: (default=taiwan) Use the following: + taiwan + english + chinese ##### References[¶](#references) * [TGOS Maps API](http://api.tgos.nat.gov.tw/TGOS_MAP_API/Web/Default.aspx) Contributor Guide[¶](#contributor-guide) --- If you want to contribute to the project, this part of the documentation is for you. ### Authors[¶](#authors) #### Lead Developer[¶](#lead-developer) * [<NAME>](https://twitter.com/DenisCarriere) - Creator of Python's Geocoder #### Contributors[¶](#contributors) A big thanks to all the people who help contribute: * [<NAME>](https://github.com/virus-warnning) - Implemented TGOS provider * [<NAME>](https://twitter.com/KevinBrolly) - Implemented GeocodeFarm provider * [<NAME>](https://github.com/ac6y) - Implemented Tamu provider * [<NAME>](https://github.com/Chartres) - Added Google for Work. * [<NAME>](https://github.com/dunice-vadimh) - Added IPInfo provider. * [<NAME>](https://github.com/yedpodtrzitko) - Cleaned up code & Added Six * [<NAME>](https://twitter.com/ThomasG77) - Wrote an article about [Geocoder vs. Geopy](http://webgeodatavore.com/python-geocoders-clients-comparison.html) * [<NAME>](https://github.com/max-arnold) - Submitted Github Issue * [<NAME>](https://twitter.com/zxiiro) - Cleaned up code & Unit Testing * [<NAME>](https://twitter.com/myusuf3) - Promoted by [Pycoders Weekly](https://twitter.com/pycoders), [Issue #155 Nimoy](http://us4.campaign-archive2.com/?u=9735795484d2e4c204da82a29&id=2776ce7284) * [<NAME>](http://alexpilon.ca) - Cleaned up code * [<NAME>](https://twitter.com/philiphubs) - Provided HERE improvements & documentation * [<NAME>](https://twitter.com/themiurgo) - Improved code quality and introduced Rate Limits * [<NAME>](https://github.com/alexanderlukanin13) - Improved Python 3 compatibility * [flebel](https://github.com/flebel) - Submitted Github Issues * [patrickyan](https://github.com/patrickyan) - Submitted Github Issues * [esy](https://github.com/lambda-conspiracy) - Submitted Github Issues * <NAME> (Сергей Грабалин) - Fixed Python2 Unicode Issues Geocoder is a simple and consistent geocoding library written in Python. Dealing with multiple different geocoding providers such as Google, Bing, OSM & many more has never been easier. ### Support If you are having issues, we would love to hear from you. Just [hit me up](mailto:<EMAIL>). You can alternatively raise an issue [here](http://www.github.com/DenisCarriere/geocoder/issues/) on GitHub. ### Stay Informed [Follow @DenisCarriere](https://twitter.com/DenisCarriere)
django-safedelete 0.4 documentation [django-safedelete](index.html#document-index) --- Django safedelete[¶](#django-safedelete) === What is it?[¶](#what-is-it) --- For various reasons, you may want to avoid deleting objects from your database. This Django application provides an abstract model that allows you to transparently retrieve or delete your objects, without having them deleted from your database. You can choose what happens when you delete an object: * it can be masked from your database (soft delete, the default behavior) * it can be masked from your database and mask any dependent models (cascading soft delete) * it can be normally deleted (hard delete) * it can be hard-deleted, but if its deletion would delete other objects, it will only be masked * it can never be deleted or masked from your database (no delete, use with caution) Example[¶](#example) --- ``` # imports from django.db import models from safedelete.models import SafeDeleteModel from safedelete.models import HARD_DELETE_NOCASCADE # Models # We create a new model, with the given policy: objects will be hard-deleted, or soft deleted if other objects would have been deleted too. class Article(SafeDeleteModel): _safedelete_policy = HARD_DELETE_NOCASCADE name = models.CharField(max_length=100) class Order(SafeDeleteModel): _safedelete_policy = HARD_DELETE_NOCASCADE name = models.CharField(max_length=100) articles = models.ManyToManyField(Article) # Example of use >>> article1 = Article(name='article1') >>> article1.save() >>> article2 = Article(name='article2') >>> article2.save() >>> order = Order(name='order') >>> order.save() >>> order.articles.add(article1) # This article will be masked, but not deleted from the database as it is still referenced in an order. >>> article1.delete() # This article will be deleted from the database. >>> article2.delete() ``` Compatibilities[¶](#compatibilities) --- * Branch 0.2.x is compatible with django >= 1.2 * Branch 0.3.x is compatible with django >= 1.4 * Branch 0.4.x is compatible with django >= 1.8 * Branch 0.5.x is compatible with django >= 1.11 * Branch 1.0.x, 1.1.x and 1.2.x are compatible with django >= 2.2 * Branch 1.3.x is compatible with django >= 3.2 and Python >= 3.7 The current branch (1.3.x) is tested with: * Django 3.2 using python 3.7 to 3.10. * Django 4.0 using python 3.8 to 3.10. * Django 4.1 using python 3.8 to 3.10. * Django 4.2 using python 3.8 to 3.11. Installation[¶](#installation) --- Installing from PyPI (using pip): ``` pip install django-safedelete ``` Installing from GitHub: ``` pip install -e git://github.com/makinacorpus/django-safedelete.git#egg=django-safedelete ``` Add `safedelete` to your `INSTALLED_APPS`: ``` INSTALLED_APPS = [ 'safedelete', [...] ] ``` The application doesn't have any special requirements. Configuration[¶](#configuration) --- In the main Django settings you can activate the boolean variable `SAFE_DELETE_INTERPRET_UNDELETED_OBJECTS_AS_CREATED`. If you do this, the `update_or_create()` function from Django's standard manager class will return `True` for the `created` variable if the object was soft-deleted and is now "revived". By default, the field that indicates a database entry is soft-deleted is `deleted`; however, you can override the field name using the `SAFE_DELETE_FIELD_NAME` setting.
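For concreteness, here is a minimal sketch of the two settings discussed above in a project's `settings.py`; the field name `removed_at` is a hypothetical choice, not a default:

```python
# settings.py -- a sketch of the two safedelete settings discussed above

# make update_or_create() report created=True when it revives
# a soft-deleted object instead of creating a new row
SAFE_DELETE_INTERPRET_UNDELETED_OBJECTS_AS_CREATED = True

# store the soft-delete marker in a field named "removed_at"
# instead of the default "deleted" (hypothetical field name)
SAFE_DELETE_FIELD_NAME = 'removed_at'
```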
Documentation[¶](#documentation) === Model[¶](#model) --- ### Built-in model[¶](#module-safedelete.models) *class* `safedelete.models.``SafeDeleteModel`(**args*, ***kwargs*)[[source]](_modules/safedelete/models.html#SafeDeleteModel)[¶](#safedelete.models.SafeDeleteModel) Abstract safedelete-ready model. Note To create your safedelete-ready models, you have to make them inherit from this model. | Attribute deleted: | | --- | | | DateTimeField set to the moment the object was deleted. Is set to `None` if the object has not been deleted. | | Attribute deleted_by_cascade: | | | BooleanField set to True whenever the object is deleted due to a cascade operation triggered by the delete method of a parent model. The default value is False. Later, if the parent model calls for a cascading undelete, only child objects that were also deleted by a cascading operation (deleted_by_cascade equal to True) will be restored, i.e. objects that were already deleted before their parent's deletion stay deleted when the same parent object is restored by the undelete method. If this behavior isn't desired, a class that inherits from SafeDeleteModel can override this attribute by setting it to None: the overriding model class won't have a `deleted_by_cascade` field and won't be restored by a cascading undelete even if it was deleted by a cascade operation. ``` >>> class MyModel(SafeDeleteModel): ... deleted_by_cascade = None ... my_field = models.TextField() ``` | | Attribute _safedelete_policy: | | | define what happens when you delete an object. It can be one of `HARD_DELETE`, `SOFT_DELETE`, `SOFT_DELETE_CASCADE`, `NO_DELETE` and `HARD_DELETE_NOCASCADE`. Defaults to `SOFT_DELETE`. ``` >>> class MyModel(SafeDeleteModel): ... _safedelete_policy = SOFT_DELETE ... my_field = models.TextField() ... >>> # Now you have your model (with its ``deleted`` field, and custom manager and delete method) ``` | | Attribute objects: | | | The [`safedelete.managers.SafeDeleteManager`](index.html#safedelete.managers.SafeDeleteManager) returns the non-deleted models. | | Attribute all_objects: | | | The [`safedelete.managers.SafeDeleteAllManager`](index.html#safedelete.managers.SafeDeleteAllManager) returns all the models (non-deleted and soft-deleted). | | Attribute deleted_objects: | | | The [`safedelete.managers.SafeDeleteDeletedManager`](index.html#safedelete.managers.SafeDeleteDeletedManager) returns the soft-deleted models. | `save`(*keep_deleted=False*, ***kwargs*)[[source]](_modules/safedelete/models.html#SafeDeleteModel.save)[¶](#safedelete.models.SafeDeleteModel.save) Save an object, un-deleting it if it was deleted. Args: keep_deleted: Do not undelete the model if soft-deleted. (default: {False}) kwargs: Passed onto [`save()`](#safedelete.models.SafeDeleteModel.save). Note Undeletes soft-deleted models by default. `undelete`(*force_policy: Optional[int] = None*, ***kwargs*) → Tuple[int, Dict[str, int]][[source]](_modules/safedelete/models.html#SafeDeleteModel.undelete)[¶](#safedelete.models.SafeDeleteModel.undelete) Undelete a soft-deleted model. Args: force_policy: Force a specific undelete policy. (default: {None}) kwargs: Passed onto [`save()`](#safedelete.models.SafeDeleteModel.save). Note Will raise an `AssertionError` if the model was not soft-deleted. *classmethod* `has_unique_fields`() → bool[[source]](_modules/safedelete/models.html#SafeDeleteModel.has_unique_fields)[¶](#safedelete.models.SafeDeleteModel.has_unique_fields) Checks if one of the fields of this model has a unique constraint set (unique=True).
It also checks if the model has sets of field names that, taken together, must be unique. Args: model: Model instance to check *class* `safedelete.models.``SafeDeleteMixin`(**args*, ***kwargs*)[[source]](_modules/safedelete/models.html#SafeDeleteMixin)[¶](#safedelete.models.SafeDeleteMixin) `SafeDeleteModel` was previously named `SafeDeleteMixin`. Deprecated since version 0.4.0: Use [`SafeDeleteModel`](#safedelete.models.SafeDeleteModel) instead. ### Policies[¶](#policies) You can change the policy of your model by setting its `_safedelete_policy` attribute. The different policies are: `safedelete.models.``HARD_DELETE`[¶](#safedelete.models.HARD_DELETE) This policy will: * Hard delete objects from the database if you call the `delete()` method. > There is no difference with « normal » models, but you can still manually mask them from the database, for example by using `obj.delete(force_policy=SOFT_DELETE)`. `safedelete.models.``SOFT_DELETE`[¶](#safedelete.models.SOFT_DELETE) This policy will: * Automatically mask objects (instead of deleting them) when you call the delete() method. They will NOT be masked in cascade. `safedelete.models.``SOFT_DELETE_CASCADE`[¶](#safedelete.models.SOFT_DELETE_CASCADE) This policy will: * Automatically mask objects (instead of deleting them), together with all related objects, when you call the delete() method. They will be masked in cascade. `safedelete.models.``HARD_DELETE_NOCASCADE`[¶](#safedelete.models.HARD_DELETE_NOCASCADE) This policy will: * Delete the object from the database if no objects depend on it (i.e. no objects would have been deleted in cascade). * Mask the object if it would have deleted other objects with it. `safedelete.models.``NO_DELETE`[¶](#safedelete.models.NO_DELETE) This policy will: * Keep the objects from being masked or deleted from your database. The only way of removing objects will be by using raw SQL. ### Policies Delete Logic Customization[¶](#policies-delete-logic-customization) Each of the policies has an overwritable function in case you need to customize a particular policy's delete logic. The functions per policy are as follows: | Policy | Overwritable Function | | --- | --- | | SOFT_DELETE | soft_delete_policy_action | | HARD_DELETE | hard_delete_policy_action | | HARD_DELETE_NOCASCADE | hard_delete_cascade_policy_action | | SOFT_DELETE_CASCADE | soft_delete_cascade_policy_action | Example: To add custom logic before or after the execution of the original delete logic of a model with the policy SOFT_DELETE, you can overwrite the `soft_delete_policy_action` function as such: ``` def soft_delete_policy_action(self, **kwargs): # Insert here custom pre delete logic delete_response = super().soft_delete_policy_action(**kwargs) # Insert here custom post delete logic return delete_response ``` ### Fields uniqueness[¶](#fields-uniqueness) Because unique constraints are set at the database level, setting unique=True on a field will also check uniqueness against soft deleted objects. This can lead to confusion, as the soft deleted objects are not visible to the user.
This can be solved by setting a partial unique constraint that will only check uniqueness on non-deleted objects: ``` from django.db.models import Q, UniqueConstraint class Post(SafeDeleteModel): name = models.CharField(max_length=100) class Meta: constraints = [ UniqueConstraint( fields=['name'], condition=Q(deleted__isnull=True), name='unique_active_name' ), ] ``` Managers[¶](#managers) --- ### Built-in managers[¶](#module-safedelete.managers) *class* `safedelete.managers.``SafeDeleteManager`(*queryset_class: Optional[Type[safedelete.queryset.SafeDeleteQueryset]] = None*)[[source]](_modules/safedelete/managers.html#SafeDeleteManager)[¶](#safedelete.managers.SafeDeleteManager) Default manager for the SafeDeleteModel. If _safedelete_visibility == DELETED_VISIBLE_BY_PK, the manager can return deleted objects if they are accessed by primary key. | Attribute _safedelete_visibility: | | --- | | | define what happens when you query masked objects. It can be one of `DELETED_INVISIBLE` and `DELETED_VISIBLE_BY_PK`. Defaults to `DELETED_INVISIBLE`. ``` >>> from safedelete.models import SafeDeleteModel >>> from safedelete.managers import SafeDeleteManager >>> class MyModelManager(SafeDeleteManager): ... _safedelete_visibility = DELETED_VISIBLE_BY_PK ... >>> class MyModel(SafeDeleteModel): ... _safedelete_policy = SOFT_DELETE ... my_field = models.TextField() ... objects = MyModelManager() ... >>> ``` | | Attribute _queryset_class: | | | define which queryset class should be used. This attribute allows adding custom filters for both deleted and not-deleted objects. It is `SafeDeleteQueryset` by default. Custom queryset classes should inherit from `SafeDeleteQueryset`. | `get_queryset`()[[source]](_modules/safedelete/managers.html#SafeDeleteManager.get_queryset)[¶](#safedelete.managers.SafeDeleteManager.get_queryset) Return a new QuerySet object. Subclasses can override this method to customize the behavior of the Manager. `all_with_deleted`() → django.db.models.query.QuerySet[[source]](_modules/safedelete/managers.html#SafeDeleteManager.all_with_deleted)[¶](#safedelete.managers.SafeDeleteManager.all_with_deleted) Show all models including the soft deleted models. Note This is useful for related managers as those don't have access to `all_objects`. `deleted_only`() → django.db.models.query.QuerySet[[source]](_modules/safedelete/managers.html#SafeDeleteManager.deleted_only)[¶](#safedelete.managers.SafeDeleteManager.deleted_only) Only show the soft deleted models. Note This is useful for related managers as those don't have access to `deleted_objects`. `all`(***kwargs*) → django.db.models.query.QuerySet[[source]](_modules/safedelete/managers.html#SafeDeleteManager.all)[¶](#safedelete.managers.SafeDeleteManager.all) Pass kwargs to `SafeDeleteQuerySet.all()`. Args: force_visibility: Show deleted models. (default: {None}) Note The `force_visibility` argument is meant for related managers when no other managers like `all_objects` or `deleted_objects` are available. `update_or_create`(*defaults=None*, ***kwargs*) → Tuple[django.db.models.base.Model, bool][[source]](_modules/safedelete/managers.html#SafeDeleteManager.update_or_create)[¶](#safedelete.managers.SafeDeleteManager.update_or_create) See Django's `update_or_create()`. Change from the regular Django function: the regular update_or_create() fails on a soft-deleted, existing record with a unique constraint on a non-id field. If the object is soft-deleted, we don't update-or-create it but reset the deleted field to None. So the object is visible again, like a create in any other case.
Attention: If the object is "revived" from a soft-deleted state, the created return value will still be False, because the object is technically not created, unless you set SAFE_DELETE_INTERPRET_UNDELETED_OBJECTS_AS_CREATED = True in the Django settings. Args: defaults: Dict with defaults to update/create model instance with kwargs: Attributes to lookup model instance with *static* `get_soft_delete_policies`()[[source]](_modules/safedelete/managers.html#SafeDeleteManager.get_soft_delete_policies)[¶](#safedelete.managers.SafeDeleteManager.get_soft_delete_policies) Returns all states which stand for some kind of soft-delete *class* `safedelete.managers.``SafeDeleteAllManager`(*queryset_class: Optional[Type[safedelete.queryset.SafeDeleteQueryset]] = None*)[[source]](_modules/safedelete/managers.html#SafeDeleteAllManager)[¶](#safedelete.managers.SafeDeleteAllManager) SafeDeleteManager with `_safedelete_visibility` set to `DELETED_VISIBLE`. Note This is used in `safedelete.models.SafeDeleteModel.all_objects`. *class* `safedelete.managers.``SafeDeleteDeletedManager`(*queryset_class: Optional[Type[safedelete.queryset.SafeDeleteQueryset]] = None*)[[source]](_modules/safedelete/managers.html#SafeDeleteDeletedManager)[¶](#safedelete.managers.SafeDeleteDeletedManager) SafeDeleteManager with `_safedelete_visibility` set to `DELETED_ONLY_VISIBLE`. Note This is used in `safedelete.models.SafeDeleteModel.deleted_objects`. ### Visibility[¶](#visibility) A custom manager (the [`SafeDeleteManager`](#safedelete.managers.SafeDeleteManager) documented above) is used to determine which objects should be included in the querysets. If you want to change which objects are "masked", you can set the `_safedelete_visibility` attribute of the manager to one of the following: `safedelete.managers.``DELETED_INVISIBLE`[¶](#safedelete.managers.DELETED_INVISIBLE) This is the default visibility. The objects marked as deleted will be visible in one case: if you access them directly using a OneToOne or a ForeignKey relation. For example, if you have an article with a masked author, you can still access the author using `article.author`. If the article is masked, you are not able to access it using a reverse relationship: `author.article_set` will not contain the masked article.
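As a rough illustration of the behaviour just described (a sketch only: it assumes hypothetical `Author` and `Article` models, where `Article` has a ForeignKey to `Author` and both inherit from `SafeDeleteModel` with the `SOFT_DELETE` policy):

```python
# hypothetical models: Article has a ForeignKey to Author,
# both are SafeDeleteModel subclasses with the SOFT_DELETE policy
author = Author.objects.create(name='jane')
article = Article.objects.create(name='a1', author=author)

article.delete()  # soft delete: the article is only masked

# the masked article no longer shows up in the reverse relationship
assert article not in author.article_set.all()

# but a masked object reached directly through the ForeignKey stays visible
author.delete()
assert Article.all_objects.get(pk=article.pk).author == author
```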
`safedelete.managers.``DELETED_VISIBLE_BY_FIELD`[¶](#safedelete.managers.DELETED_VISIBLE_BY_FIELD) This policy is like [`DELETED_INVISIBLE`](#safedelete.managers.DELETED_INVISIBLE), except that you can still access a deleted object if you call the `get()` or `filter()` function, passing it the default `pk` field parameter. The field is configurable through the `_safedelete_visibility_field` attribute of the manager, so deleted objects are still available if you access them directly by this field. QuerySet[¶](#queryset) --- ### Built-in QuerySet[¶](#module-safedelete.queryset) *class* `safedelete.queryset.``SafeDeleteQueryset`(*model: Optional[Type[django.db.models.base.Model]] = None*, *query: Optional[safedelete.query.SafeDeleteQuery] = None*, *using: Optional[str] = None*, *hints: Optional[Dict[str*, *django.db.models.base.Model]] = None*)[[source]](_modules/safedelete/queryset.html#SafeDeleteQueryset)[¶](#safedelete.queryset.SafeDeleteQueryset) Default queryset for the SafeDeleteManager. Takes care of "lazily evaluating" safedelete QuerySets. QuerySets passed within the `SafeDeleteQueryset` will have all of the models available. The deleted policy is evaluated at the very end of the chain, when the QuerySet itself is evaluated. *classmethod* `as_manager`()[[source]](_modules/safedelete/queryset.html#SafeDeleteQueryset.as_manager)[¶](#safedelete.queryset.SafeDeleteQueryset.as_manager) Override as_manager behavior to ensure we create a SafeDeleteManager. `delete`(*force_policy: Optional[int] = None*) → Tuple[int, Dict[str, int]][[source]](_modules/safedelete/queryset.html#SafeDeleteQueryset.delete)[¶](#safedelete.queryset.SafeDeleteQueryset.delete) Overrides bulk delete behaviour. Note The current implementation loses performance on bulk deletes in order to safely delete objects according to the deletion policies set. See also `safedelete.models.SafeDeleteModel.delete()` `undelete`(*force_policy: Optional[int] = None*) → Tuple[int, Dict[str, int]][[source]](_modules/safedelete/queryset.html#SafeDeleteQueryset.undelete)[¶](#safedelete.queryset.SafeDeleteQueryset.undelete) Undelete all soft deleted models. Note The current implementation loses performance on bulk undeletes in order to call the pre/post-save signals. See also [`safedelete.models.SafeDeleteModel.undelete()`](index.html#safedelete.models.SafeDeleteModel.undelete) `all`(*force_visibility=None*) → _QS[[source]](_modules/safedelete/queryset.html#SafeDeleteQueryset.all)[¶](#safedelete.queryset.SafeDeleteQueryset.all) Override so related managers can also see the deleted models. A model's m2m field does not easily have access to all_objects, and so setting force_visibility to True is a way of getting all of the models. It is not recommended to use force_visibility outside of related models because it will create a new queryset. Args: force_visibility: Force a deletion visibility. (default: {None}) `filter`(**args*, ***kwargs*)[[source]](_modules/safedelete/queryset.html#SafeDeleteQueryset.filter)[¶](#safedelete.queryset.SafeDeleteQueryset.filter) Return a new QuerySet instance with the args ANDed to the existing set. Signals[¶](#module-safedelete.signals) --- ### Signals[¶](#id1) There are three signals available. Please refer to the [Django signals](https://docs.djangoproject.com/en/dev/topics/signals/) documentation on how to use them. `safedelete.signals.``pre_softdelete`[¶](#safedelete.signals.safedelete.signals.pre_softdelete) Sent before an object is soft deleted.
`safedelete.signals.``post_softdelete`[¶](#safedelete.signals.safedelete.signals.post_softdelete) Sent after an object has been soft deleted. `safedelete.signals.``post_undelete`[¶](#safedelete.signals.safedelete.signals.post_undelete) Sent after a deleted object is restored. Handling administration[¶](#handling-administration) --- ### Model admin[¶](#module-safedelete.admin) Deleted objects will also be hidden in the admin site by default. A `ModelAdmin` abstract class is provided to give access to deleted objects. An undelete action is provided to undelete objects in bulk. The `deleted` attribute is also excluded from editing by default. You can use the `highlight_deleted` method to show deleted objects in red in the admin listing. You also have the option of using `highlight_deleted_field`, which is similar to `highlight_deleted` but allows you to specify a field for sorting and representation. Whereas `highlight_deleted` uses your object's `__str__` function to represent the object, `highlight_deleted_field` uses the value from your object's specified field. To use `highlight_deleted_field`, add "highlight_deleted_field" to your `list_display` (as a string, as seen in the example below), and set field_to_highlight = "desired_field_name" (also seen below). Then you should also set its short description (again, see below). *class* `safedelete.admin.``SafeDeleteAdmin`(*model*, *admin_site*)[[source]](_modules/safedelete/admin.html#SafeDeleteAdmin)[¶](#safedelete.admin.SafeDeleteAdmin) An abstract ModelAdmin which will include deleted objects in its listing. | Example: | ``` >>> from safedelete.admin import SafeDeleteAdmin, SafeDeleteAdminFilter, highlight_deleted >>> class ContactAdmin(SafeDeleteAdmin): ... list_display = (highlight_deleted, "highlight_deleted_field", "first_name", "last_name", "email") + SafeDeleteAdmin.list_display ... list_filter = ("last_name", SafeDeleteAdminFilter,) + SafeDeleteAdmin.list_filter ... ... field_to_highlight = "id" ... ... ContactAdmin.highlight_deleted_field.short_description = ContactAdmin.field_to_highlight ``` |
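To actually use an admin class like the one above, register it as usual; a minimal sketch, assuming the `Contact` model implied by the example:

```python
from django.contrib import admin

# Contact is the (assumed) SafeDeleteModel behind the ContactAdmin example above
admin.site.register(Contact, ContactAdmin)
```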
icrawler 0.6.6 documentation [icrawler](index.html#document-index) --- Welcome to icrawler[¶](#welcome-to-icrawler) === icrawler[¶](#icrawler) === Introduction[¶](#introduction) --- Documentation: <http://icrawler.readthedocs.io/>. Try it with `pip install icrawler` or `conda install -c hellock icrawler`. This package is a mini framework of web crawlers. With its modular design, it is easy to use and extend. It supports media data like images and videos very well, and can also be applied to texts and other types of files. Scrapy is heavy and powerful, while icrawler is tiny and flexible. With this package, you can write a multi-threaded crawler easily by focusing on the contents you want to crawl, keeping away from troublesome problems like exception handling, thread scheduling and communication. It also provides built-in crawlers for popular image sites like **Flickr** and search engines such as **Google**, **Bing** and **Baidu**. (Thanks to all the contributors; pull requests are always welcome!) Requirements[¶](#requirements) --- Python 3.5+ (recommended). Examples[¶](#examples) --- Using built-in crawlers is very simple. A minimal example is shown as follows. ``` from icrawler.builtin import GoogleImageCrawler google_crawler = GoogleImageCrawler(storage={'root_dir': 'your_image_dir'}) google_crawler.crawl(keyword='cat', max_num=100) ``` You can also configure the number of threads and apply advanced search options. (Note: compatible with 0.6.0 and later versions) ``` from icrawler.builtin import GoogleImageCrawler google_crawler = GoogleImageCrawler( feeder_threads=1, parser_threads=2, downloader_threads=4, storage={'root_dir': 'your_image_dir'}) filters = dict( size='large', color='orange', license='commercial,modify', date=((2017, 1, 1), (2017, 11, 30))) google_crawler.crawl(keyword='cat', filters=filters, max_num=1000, file_idx_offset=0) ``` For more advanced usage of the built-in crawlers, please refer to the [documentation](http://icrawler.readthedocs.io/en/latest/builtin.html). Writing your own crawlers with this framework is also convenient, see the [tutorials](http://icrawler.readthedocs.io/en/latest/extend.html). Architecture[¶](#architecture) --- A crawler consists of 3 main components (Feeder, Parser and Downloader), which are connected with each other by FIFO queues. The workflow is as follows. * `url_queue` stores the url of pages which may contain images * `task_queue` stores the image url as well as any meta data you like, each element in the queue is a dictionary and must contain the field `img_url` * Feeder puts page urls to `url_queue` * Parser requests and parses the page, then extracts the image urls and puts them into `task_queue` * Downloader gets tasks from `task_queue` and requests the images, then saves them in the given path. Feeder, parser and downloader are all thread pools, so you can specify the number of threads they use. Documentation index[¶](#documentation-index) === Installation[¶](#installation) --- The quick way (with pip): ``` pip install icrawler ``` or (with conda) ``` conda install -c hellock icrawler ``` You can also manually install it by ``` git clone git@github.com:hellock/icrawler.git cd icrawler python setup.py install ``` If you fail to install it on Linux, it is probably caused by *lxml*. See [here](http://lxml.de/installation.html#requirements) for solutions. Built-in crawlers[¶](#built-in-crawlers) --- This framework contains 6 built-in image crawlers.
* Google * Bing * Baidu * Flickr * General greedy crawl (crawl all the images from a website) * UrlList (crawl all images given a url list) ### Search engine crawlers[¶](#search-engine-crawlers) The search engine crawlers (Google, Bing, Baidu) have universal APIs. Here is an example of how to use the built-in crawlers. ``` from icrawler.builtin import BaiduImageCrawler, BingImageCrawler, GoogleImageCrawler google_crawler = GoogleImageCrawler( feeder_threads=1, parser_threads=1, downloader_threads=4, storage={'root_dir': 'your_image_dir'}) filters = dict( size='large', color='orange', license='commercial,modify', date=((2017, 1, 1), (2017, 11, 30))) google_crawler.crawl(keyword='cat', filters=filters, offset=0, max_num=1000, min_size=(200,200), max_size=None, file_idx_offset=0) bing_crawler = BingImageCrawler(downloader_threads=4, storage={'root_dir': 'your_image_dir'}) bing_crawler.crawl(keyword='cat', filters=None, offset=0, max_num=1000) baidu_crawler = BaiduImageCrawler(storage={'root_dir': 'your_image_dir'}) baidu_crawler.crawl(keyword='cat', offset=0, max_num=1000, min_size=(200,200), max_size=None) ``` The filter options provided by Google, Bing and Baidu are different. Supported filter options and possible values are listed below. GoogleImageCrawler: * `type` – "photo", "face", "clipart", "linedrawing", "animated". * `color` – "color", "blackandwhite", "transparent", "red", "orange", "yellow", "green", "teal", "blue", "purple", "pink", "white", "gray", "black", "brown". * `size` – "large", "medium", "icon", or larger than a given size (e.g. ">640x480"), or exactly a given size ("=1024x768"). * `license` – "noncommercial" (labeled for noncommercial reuse), "commercial" (labeled for reuse), "noncommercial,modify" (labeled for noncommercial reuse with modification), "commercial,modify" (labeled for reuse with modification). * `date` – "pastday", "pastweek" or a tuple of dates, e.g. `((2016, 1, 1), (2017, 1, 1))` or `((2016, 1, 1), None)`. BingImageCrawler: * `type` – "photo", "clipart", "linedrawing", "transparent", "animated". * `color` – "color", "blackandwhite", "red", "orange", "yellow", "green", "teal", "blue", "purple", "pink", "white", "gray", "black", "brown" * `size` – "large", "medium", "small" or larger than a given size (e.g. ">640x480"). * `license` – "creativecommons", "publicdomain", "noncommercial", "commercial", "noncommercial,modify", "commercial,modify". * `layout` – "square", "wide", "tall". * `people` – "face", "portrait". * `date` – "pastday", "pastweek", "pastmonth", "pastyear". BaiduImageCrawler: * `type`: "portrait", "face", "clipart", "linedrawing", "animated", "static" * `color`: "red", "orange", "yellow", "green", "purple", "pink", "teal", "blue", "brown", "white", "black", "blackandwhite". When using `GoogleImageCrawler`, language can be specified via the argument `language`, e.g., `google_crawler.crawl(keyword='cat', language="us")`. Note Tips: Search engines will limit the number of returned images, even when we use a browser to view the result page. The limit is usually 1000 for many search engines such as Google and Bing. To crawl more than 1000 images with a single keyword, we can specify different date ranges.
``` google_crawler.crawl( keyword='cat', filters={'date': ((2016, 1, 1), (2016, 6, 30))}, max_num=1000, file_idx_offset=0) google_crawler.crawl( keyword='cat', filters={'date': ((2016, 6, 30), (2016, 12, 31))}, max_num=1000, file_idx_offset='auto') # set `file_idx_offset` to "auto" so that filenames can be consecutive numbers (e.g., 1001 ~ 2000) ``` ### Flickr crawler[¶](#flickr-crawler) ``` from datetime import date from icrawler.builtin import FlickrImageCrawler flickr_crawler = FlickrImageCrawler('your_apikey', storage={'root_dir': 'your_image_dir'}) flickr_crawler.crawl(max_num=1000, tags='child,baby', group_id='68012010@N00', min_upload_date=date(2015, 5, 1)) ``` Supported optional searching arguments are listed in <https://www.flickr.com/services/api/flickr.photos.search.html>. Here are some examples. * `user_id` – The NSID of the user whose photos to search. * `tags` – A comma-delimited list of tags. * `tag_mode` – Either "any" for an OR combination of tags, or "all" for an AND combination. * `text` – A free text search. Photos whose title, description or tags contain the text will be returned. * `min_upload_date` – Minimum upload date. The date can be in the form of a `datetime.date` object, a unix timestamp or a string. * `max_upload_date` – Maximum upload date. Same form as `min_upload_date`. * `group_id` – The id of a group whose pool to search. * `extras` – A comma-delimited list of extra information to fetch for each returned record. See [here](https://www.flickr.com/services/api/flickr.photos.search.html) for more details. * `per_page` – Number of photos to return per page. Some advanced searching arguments, which are not updated in the [Flickr API](https://www.flickr.com/services/api/flickr.photos.search.html), are also supported. Valid arguments and values are shown as follows. * `color_codes` – A comma-delimited list of color codes, which filters the results by your chosen color(s). Please see any Flickr search page for the corresponding relations between the colors and the codes. * `styles` – A comma-delimited list of styles, including `blackandwhite`, `depthoffield`, `minimalism` and `pattern`. * `orientation` – A comma-delimited list of image orientations. It can be `landscape`, `portrait`, `square` and `panorama`. The default includes all of them. Another parameter, `size_preference`, is available for the Flickr crawler; it defines the preferred order of image sizes. Valid values are shown as follows. * original * large 2048: 2048 on longest side† * large 1600: 1600 on longest side† * large: 1024 on longest side* * medium 800: 800 on longest side† * medium 640: 640 on longest side * medium: 500 on longest side * small 320: 320 on longest side * small: 240 on longest side * thumbnail: 100 on longest side * large square: 150x150 * square: 75x75 `size_preference` can be either a list or a string; if not specified, all sizes are acceptable and larger sizes are preferred over smaller ones. Note * Before May 25th 2010 large photos only exist for very large original images. † Medium 800, large 1600, and large 2048 photos only exist after March 1st 2012. ### Greedy crawler[¶](#greedy-crawler) If you just want to crawl all the images from some website, then `GreedyImageCrawler` may be helpful. ``` from icrawler.builtin import GreedyImageCrawler greedy_crawler = GreedyImageCrawler(storage={'root_dir': 'your_image_dir'}) greedy_crawler.crawl(domains='http://www.bbc.com/news', max_num=0, min_size=None, max_size=None) ``` The argument `domains` can be either a URL string or a list, as in the sketch below.
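As a small follow-up, a sketch passing a list of domains together with a size filter; the second domain and the thread count are illustrative assumptions:

```python
from icrawler.builtin import GreedyImageCrawler

greedy_crawler = GreedyImageCrawler(parser_threads=4,
                                    storage={'root_dir': 'your_image_dir'})
# a list works as well as a single url string; images smaller
# than 128x128 are skipped (the second domain is a placeholder)
greedy_crawler.crawl(domains=['http://www.bbc.com/news', 'http://example.com'],
                     max_num=100, min_size=(128, 128), max_size=None)
```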
### URL list crawler[¶](#url-list-crawler) If you already have an image url list and want to download all the images using multiple threads, then `UrlListCrawler` may be helpful. ``` from icrawler.builtin import UrlListCrawler urllist_crawler = UrlListCrawler(downloader_threads=4, storage={'root_dir': 'your_image_dir'}) urllist_crawler.crawl('url_list.txt') ``` You can see the complete example in *test.py*; to run it ``` python test.py [options] ``` `options` can be `google`, `bing`, `baidu`, `flickr`, `greedy`, `urllist` or `all`, using `all` by default if no arguments are specified. Note that you have to provide your flickr apikey if you want to test FlickrCrawler. Extend and write your own[¶](#extend-and-write-your-own) --- It is easy to extend `icrawler` and use it to crawl other websites. The simplest way is to override some methods of the Feeder, Parser and Downloader classes. 1. **Feeder** The method you need to override is ``` feeder.feed(self, **kwargs) ``` If you want to offer the start urls at one time, for example from '<http://example.com/page_url/1>' up to '<http://example.com/page_url/10>' ``` from icrawler import Feeder class MyFeeder(Feeder): def feed(self): for i in range(10): url = 'http://example.com/page_url/{}'.format(i + 1) self.output(url) ``` 2. **Parser** The method you need to override is ``` parser.parse(self, response, **kwargs) ``` `response` is the page content of the url from `url_queue`; what you need to do is parse the page, extract file urls, and then put them into `task_queue`. The Beautiful Soup package is recommended for parsing HTML pages. Taking `GoogleParser` for example, ``` import json from bs4 import BeautifulSoup from icrawler import Parser class GoogleParser(Parser): def parse(self, response): soup = BeautifulSoup(response.content, 'lxml') image_divs = soup.find_all('div', class_='rg_di rg_el ivg-i') for div in image_divs: meta = json.loads(div.text) if 'ou' in meta: yield dict(file_url=meta['ou']) ``` 3. **Downloader** If you just want to change the filename of downloaded images, you can override the method ``` downloader.get_filename(self, task, default_ext) ``` The default names of downloaded files are increasing numbers, from 000001 to 999999. Here is an example of using other filename formats instead of numbers as filenames. ``` import base64 from icrawler import ImageDownloader from icrawler.builtin import GoogleImageCrawler from six.moves.urllib.parse import urlparse class PrefixNameDownloader(ImageDownloader): def get_filename(self, task, default_ext): filename = super(PrefixNameDownloader, self).get_filename( task, default_ext) return 'prefix_' + filename class Base64NameDownloader(ImageDownloader): def get_filename(self, task, default_ext): url_path = urlparse(task['file_url'])[2] if '.' in url_path: extension = url_path.split('.')[-1] if extension.lower() not in [ 'jpg', 'jpeg', 'png', 'bmp', 'tiff', 'gif', 'ppm', 'pgm' ]: extension = default_ext else: extension = default_ext # works for python 3 filename = base64.b64encode(url_path.encode()).decode() return '{}.{}'.format(filename, extension) google_crawler = GoogleImageCrawler( downloader_cls=PrefixNameDownloader, # downloader_cls=Base64NameDownloader, downloader_threads=4, storage={'root_dir': 'images/google'}) google_crawler.crawl('tesla', max_num=10) ``` If you want to process meta data, for example to save some annotations of the images, you can override the method ``` downloader.process_meta(self, task) ``` Note that your parser needs to put meta data as well as file urls into `task_queue`.
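A hedged sketch of such an override, assuming the parser yields extra annotation fields (for example a hypothetical `caption`) alongside `file_url`:

```python
import json

from icrawler import ImageDownloader

class AnnotationDownloader(ImageDownloader):
    def process_meta(self, task):
        # keep everything the parser put into the task except the
        # url itself, and append it as one JSON line per image
        annotation = {k: v for k, v in task.items() if k != 'file_url'}
        with open('annotations.jsonl', 'a') as f:
            f.write(json.dumps(annotation) + '\n')
```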
If you want to do more with the downloader, you can also override the method ``` downloader.download(self, task, default_ext, timeout=5, max_retry=3, overwrite=False, **kwargs) ``` You can retrieve tasks from `task_queue` and then do whatever you want with them. 4. **Crawler** You can either use the base class `Crawler` or inherit from it. Its two main APIs are ``` crawler.__init__(self, feeder_cls=Feeder, parser_cls=Parser, downloader_cls=Downloader, feeder_threads=1, parser_threads=1, downloader_threads=1, storage={'backend': 'FileSystem', 'root_dir': 'images'}, log_level=logging.INFO) ``` and ``` crawler.crawl(self, feeder_kwargs={}, parser_kwargs={}, downloader_kwargs={}) ``` So you can use your crawler like this ``` crawler = Crawler(feeder_cls=MyFeeder, parser_cls=MyParser, downloader_cls=ImageDownloader, downloader_threads=4, storage={'backend': 'FileSystem', 'root_dir': 'images'}) crawler.crawl(feeder_kwargs=dict(arg1='blabla', arg2=0), downloader_kwargs=dict(max_num=1000, min_size=None)) ``` Or define a class to avoid using complex and ugly dictionaries as arguments. ``` class MyCrawler(Crawler): def __init__(self, *args, **kwargs): super(MyCrawler, self).__init__( feeder_cls=MyFeeder, parser_cls=MyParser, downloader_cls=ImageDownloader, *args, **kwargs) def crawl(self, arg1, arg2, max_num=1000, min_size=None, max_size=None, file_idx_offset=0): feeder_kwargs = dict(arg1=arg1, arg2=arg2) downloader_kwargs = dict(max_num=max_num, min_size=min_size, max_size=max_size, file_idx_offset=file_idx_offset) super(MyCrawler, self).crawl(feeder_kwargs=feeder_kwargs, downloader_kwargs=downloader_kwargs) crawler = MyCrawler(downloader_threads=4, storage={'backend': 'FileSystem', 'root_dir': 'images'}) crawler.crawl(arg1='blabla', arg2=0, max_num=1000, max_size=(1000,800)) ``` How to use proxies[¶](#how-to-use-proxies) --- A powerful `ProxyPool` class is provided to handle the proxies. You will need to override the `Crawler.set_proxy_pool()` method to use it. If you just need a few (for example fewer than 30) proxies, you can override it like the following. ``` def set_proxy_pool(self): self.proxy_pool = ProxyPool() self.proxy_pool.default_scan(region='overseas', expected_num=10, out_file='proxies.json') ``` Then it will scan 10 valid overseas (out of mainland China) proxies and automatically use these proxies to request pages and images. If you have special requirements on proxies, you can use ProxyScanner and write your own scan functions to satisfy your demands. ``` def set_proxy_pool(self): self.proxy_pool = ProxyPool() proxy_scanner = ProxyScanner() proxy_scanner.register_func(proxy_scanner.scan_file, {'src_file': 'proxy_overseas.json'}) proxy_scanner.register_func(your_own_scan_func, {'arg1': '', 'arg2': ''}) self.proxy_pool.scan(proxy_scanner, expected_num=10, out_file='proxies.json') ``` Every time a new request is made, a proxy is selected from the pool. Each proxy has a weight from 0.0 to 1.0; if a proxy has a greater weight, it has a greater chance of being selected for a request. The weight is increased or decreased automatically according to its rate of successful connections.
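Putting the pieces together, a sketch (not from the original docs) of a built-in crawler wired to a scanned proxy pool via the `set_proxy_pool()` override described above:

```python
from icrawler.builtin import GoogleImageCrawler
from icrawler.utils import ProxyPool

class ProxiedGoogleImageCrawler(GoogleImageCrawler):
    def set_proxy_pool(self, pool=None):
        # scan ten usable overseas proxies once and cache them on disk;
        # page and image requests are then routed through the pool
        self.proxy_pool = ProxyPool()
        self.proxy_pool.default_scan(region='overseas', expected_num=10,
                                     out_file='proxies.json')

crawler = ProxiedGoogleImageCrawler(storage={'root_dir': 'your_image_dir'})
crawler.crawl(keyword='cat', max_num=50)
```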
API reference[¶](#api-reference) --- ### crawler[¶](#module-icrawler.crawler) Crawler base class *class* `icrawler.crawler.``Crawler`(*feeder_cls=<class 'icrawler.feeder.Feeder'>*, *parser_cls=<class 'icrawler.parser.Parser'>*, *downloader_cls=<class 'icrawler.downloader.Downloader'>*, *feeder_threads=1*, *parser_threads=1*, *downloader_threads=1*, *storage={'backend': 'FileSystem'*, *'root_dir': 'images'}*, *log_level=20*, *extra_feeder_args=None*, *extra_parser_args=None*, *extra_downloader_args=None*)[[source]](_modules/icrawler/crawler.html#Crawler)[¶](#icrawler.crawler.Crawler) Base class for crawlers `session`[¶](#icrawler.crawler.Crawler.session) A Session object. | Type: | [Session](index.html#icrawler.utils.Session) | `feeder`[¶](#icrawler.crawler.Crawler.feeder) A Feeder object. | Type: | [Feeder](index.html#icrawler.feeder.Feeder) | `parser`[¶](#icrawler.crawler.Crawler.parser) A Parser object. | Type: | [Parser](index.html#icrawler.parser.Parser) | `downloader`[¶](#icrawler.crawler.Crawler.downloader) A Downloader object. | Type: | [Downloader](index.html#icrawler.downloader.Downloader) | `signal`[¶](#icrawler.crawler.Crawler.signal) A Signal object shared by all components, used for communication among threads | Type: | [Signal](index.html#icrawler.utils.Signal) | `logger`[¶](#icrawler.crawler.Crawler.logger) A Logger object used for logging | Type: | Logger | `crawl`(*feeder_kwargs=None*, *parser_kwargs=None*, *downloader_kwargs=None*)[[source]](_modules/icrawler/crawler.html#Crawler.crawl)[¶](#icrawler.crawler.Crawler.crawl) Start crawling This method will start the feeder, parser and downloader and wait until all threads exit. | Parameters: | * **feeder_kwargs** (*dict**,* *optional*) – Arguments to be passed to `feeder.start()` * **parser_kwargs** (*dict**,* *optional*) – Arguments to be passed to `parser.start()` * **downloader_kwargs** (*dict**,* *optional*) – Arguments to be passed to `downloader.start()` | `init_signal`()[[source]](_modules/icrawler/crawler.html#Crawler.init_signal)[¶](#icrawler.crawler.Crawler.init_signal) Init signals. 3 signals are added: `feeder_exited`, `parser_exited` and `reach_max_num`. `set_logger`(*log_level=20*)[[source]](_modules/icrawler/crawler.html#Crawler.set_logger)[¶](#icrawler.crawler.Crawler.set_logger) Configure the logger with log_level. `set_proxy_pool`(*pool=None*)[[source]](_modules/icrawler/crawler.html#Crawler.set_proxy_pool)[¶](#icrawler.crawler.Crawler.set_proxy_pool) Construct a proxy pool By default no proxy is used. | Parameters: | **pool** ([*ProxyPool*](index.html#icrawler.utils.ProxyPool)*,* *optional*) – a `ProxyPool` object | `set_session`(*headers=None*)[[source]](_modules/icrawler/crawler.html#Crawler.set_session)[¶](#icrawler.crawler.Crawler.set_session) Init session with default or custom headers | Parameters: | **headers** – A dict of headers (default None, thus using the default header to init the session) | `set_storage`(*storage*)[[source]](_modules/icrawler/crawler.html#Crawler.set_storage)[¶](#icrawler.crawler.Crawler.set_storage) Set storage backend for downloader For the full list of supported storage backends, please see `storage`.
| Parameters: | **storage** (*dict* *or* [*BaseStorage*](index.html#icrawler.storage.BaseStorage)) – storage backend configuration or instance |
### feeder[¶](#module-icrawler.feeder)
*class* `icrawler.feeder.``Feeder`(*thread_num*, *signal*, *session*)[[source]](_modules/icrawler/feeder.html#Feeder)[¶](#icrawler.feeder.Feeder)
Bases: `icrawler.utils.thread_pool.ThreadPool`
Base class for feeders. A thread pool of feeder threads, in charge of feeding urls to parsers.
`thread_num`[¶](#icrawler.feeder.Feeder.thread_num) An integer indicating the number of threads. | Type: | int |
`global_signal`[¶](#icrawler.feeder.Feeder.global_signal) A `Signal` object for communication among all threads. | Type: | [Signal](index.html#icrawler.utils.Signal) |
`out_queue`[¶](#icrawler.feeder.Feeder.out_queue) A queue connected with parsers’ inputs, storing page urls. | Type: | Queue |
`session`[¶](#icrawler.feeder.Feeder.session) A session object. | Type: | [Session](index.html#icrawler.utils.Session) |
`logger`[¶](#icrawler.feeder.Feeder.logger) A logging.Logger object used for logging. | Type: | Logger |
`workers`[¶](#icrawler.feeder.Feeder.workers) A list storing all the threading.Thread objects of the feeder. | Type: | list |
`lock`[¶](#icrawler.feeder.Feeder.lock) A `Lock` instance shared by all feeder threads. | Type: | Lock |
`feed`(***kwargs*)[[source]](_modules/icrawler/feeder.html#Feeder.feed)[¶](#icrawler.feeder.Feeder.feed)
Feed urls. This method should be implemented by users.
`worker_exec`(***kwargs*)[[source]](_modules/icrawler/feeder.html#Feeder.worker_exec)[¶](#icrawler.feeder.Feeder.worker_exec)
Target function of workers
*class* `icrawler.feeder.``SimpleSEFeeder`(*thread_num*, *signal*, *session*)[[source]](_modules/icrawler/feeder.html#SimpleSEFeeder)[¶](#icrawler.feeder.SimpleSEFeeder)
Bases: [`icrawler.feeder.Feeder`](#icrawler.feeder.Feeder)
Simple search-engine-like feeder
`feed`(*url_template*, *keyword*, *offset*, *max_num*, *page_step*)[[source]](_modules/icrawler/feeder.html#SimpleSEFeeder.feed)[¶](#icrawler.feeder.SimpleSEFeeder.feed)
Feed urls once
| Parameters: | * **url_template** – A URL template string, with parameters replaced by “{}”. * **keyword** – A string indicating the search keyword. * **offset** – An integer indicating the starting index. * **max_num** – An integer indicating the max number of images to be crawled. * **page_step** – An integer added to offset after each iteration. |
*class* `icrawler.feeder.``UrlListFeeder`(*thread_num*, *signal*, *session*)[[source]](_modules/icrawler/feeder.html#UrlListFeeder)[¶](#icrawler.feeder.UrlListFeeder)
Bases: [`icrawler.feeder.Feeder`](#icrawler.feeder.Feeder)
Url list feeder which feeds a list of urls
`feed`(*url_list*, *offset=0*, *max_num=0*)[[source]](_modules/icrawler/feeder.html#UrlListFeeder.feed)[¶](#icrawler.feeder.UrlListFeeder.feed)
Feed urls. This method should be implemented by users.
### parser[¶](#module-icrawler.parser)
*class* `icrawler.parser.``Parser`(*thread_num*, *signal*, *session*)[[source]](_modules/icrawler/parser.html#Parser)[¶](#icrawler.parser.Parser)
Bases: `icrawler.utils.thread_pool.ThreadPool`
Base class for parsers. A thread pool of parser threads, in charge of downloading and parsing pages, extracting file urls and putting them into the input queue of the downloader.
`global_signal`[¶](#icrawler.parser.Parser.global_signal) A Signal object for cross-module communication.
`session`[¶](#icrawler.parser.Parser.session) A requests.Session object.
`logger`[¶](#icrawler.parser.Parser.logger) A logging.Logger object used for logging.
`threads`[¶](#icrawler.parser.Parser.threads) A list storing all the threading.Thread objects of the parser.
`thread_num`[¶](#icrawler.parser.Parser.thread_num) An integer indicating the number of threads.
`lock`[¶](#icrawler.parser.Parser.lock) A threading.Lock object.
`parse`(*response*, ***kwargs*)[[source]](_modules/icrawler/parser.html#Parser.parse)[¶](#icrawler.parser.Parser.parse)
Parse a page and extract image urls, then put them into the task_queue.
This method should be overridden by users.
| Example: | ``` >>> task = {} >>> self.output(task) ``` |
`worker_exec`(*queue_timeout=2*, *req_timeout=5*, *max_retry=3*, ***kwargs*)[[source]](_modules/icrawler/parser.html#Parser.worker_exec)[¶](#icrawler.parser.Parser.worker_exec)
Target method of workers.
First download the page, then call the [`parse()`](#icrawler.parser.Parser.parse) method.
A parser thread will exit in either of the following cases:
1. All feeder threads have exited and the `url_queue` is empty.
2. The number of downloaded images has reached the required number.
| Parameters: | * **queue_timeout** (*int*) – Timeout of getting urls from `url_queue`. * **req_timeout** (*int*) – Timeout of making requests for downloading pages. * **max_retry** (*int*) – Max retry times if the request fails. * ****kwargs** – Arguments to be passed to the [`parse()`](#icrawler.parser.Parser.parse) method. |
### downloader[¶](#module-icrawler.downloader)
*class* `icrawler.downloader.``Downloader`(*thread_num*, *signal*, *session*, *storage*)[[source]](_modules/icrawler/downloader.html#Downloader)[¶](#icrawler.downloader.Downloader)
Bases: `icrawler.utils.thread_pool.ThreadPool`
Base class for downloaders. A thread pool of downloader threads, in charge of downloading files and saving them to the corresponding paths.
`task_queue`[¶](#icrawler.downloader.Downloader.task_queue) A queue storing image downloading tasks, connecting `Parser` and [`Downloader`](#icrawler.downloader.Downloader). | Type: | [CachedQueue](index.html#icrawler.utils.CachedQueue) |
`signal`[¶](#icrawler.downloader.Downloader.signal) A Signal object shared by all components. | Type: | [Signal](index.html#icrawler.utils.Signal) |
`session`[¶](#icrawler.downloader.Downloader.session) A session object. | Type: | [Session](index.html#icrawler.utils.Session) |
`logger`[¶](#icrawler.downloader.Downloader.logger) A logging.Logger object used for logging.
`workers`[¶](#icrawler.downloader.Downloader.workers) A list of downloader threads. | Type: | list |
`thread_num`[¶](#icrawler.downloader.Downloader.thread_num) The number of downloader threads. | Type: | int |
`lock`[¶](#icrawler.downloader.Downloader.lock) A threading.Lock object. | Type: | Lock |
`storage`[¶](#icrawler.downloader.Downloader.storage) The storage backend. | Type: | [BaseStorage](index.html#icrawler.storage.BaseStorage) |
`clear_status`()[[source]](_modules/icrawler/downloader.html#Downloader.clear_status)[¶](#icrawler.downloader.Downloader.clear_status)
Reset fetched_num to 0.
`download`(*task*, *default_ext*, *timeout=5*, *max_retry=3*, *overwrite=False*, ***kwargs*)[[source]](_modules/icrawler/downloader.html#Downloader.download)[¶](#icrawler.downloader.Downloader.download)
Download the image and save it to the corresponding path.
| Parameters: | * **task** (*dict*) – The task dict got from `task_queue`. * **timeout** (*int*) – Timeout of making requests for downloading images. * **max_retry** (*int*) – The max retry times if the request fails.
* ****kwargs** – reserved arguments for overriding. |
`get_filename`(*task*, *default_ext*)[[source]](_modules/icrawler/downloader.html#Downloader.get_filename)[¶](#icrawler.downloader.Downloader.get_filename)
Set the path where the image will be saved.
The default strategy is to use an increasing 6-digit number as the filename. You can override this method if you want to set custom naming rules. The file extension is kept if it can be obtained from the url, otherwise `default_ext` is used as the extension.
| Parameters: | **task** (*dict*) – The task dict got from `task_queue`. |
Output: Filename with extension.
`process_meta`(*task*)[[source]](_modules/icrawler/downloader.html#Downloader.process_meta)[¶](#icrawler.downloader.Downloader.process_meta)
Process some meta data of the images.
This method should be overridden by users if they want to do more than just download the image, such as saving annotations.
| Parameters: | **task** (*dict*) – The task dict got from task_queue. This method will make use of fields other than `file_url` in the dict. |
`reach_max_num`()[[source]](_modules/icrawler/downloader.html#Downloader.reach_max_num)[¶](#icrawler.downloader.Downloader.reach_max_num)
Check if the number of downloaded images has reached the max num.
| Returns: | whether the number of downloaded images has reached the max num. |
| Return type: | bool |
`set_file_idx_offset`(*file_idx_offset=0*)[[source]](_modules/icrawler/downloader.html#Downloader.set_file_idx_offset)[¶](#icrawler.downloader.Downloader.set_file_idx_offset)
Set the offset of the file index.
| Parameters: | **file_idx_offset** – It can be either an integer or ‘auto’. If set to an integer, the filename will start from `file_idx_offset` + 1. If set to `'auto'`, the filename will start from the existing max file index plus 1. |
`worker_exec`(*max_num*, *default_ext=''*, *queue_timeout=5*, *req_timeout=5*, ***kwargs*)[[source]](_modules/icrawler/downloader.html#Downloader.worker_exec)[¶](#icrawler.downloader.Downloader.worker_exec)
Target method of workers.
Get tasks from `task_queue`, then download files and process meta data. A downloader thread will exit in either of the following cases:
1. All parser threads have exited and the task_queue is empty.
2. The number of downloaded images has reached the required number (max_num).
| Parameters: | * **queue_timeout** (*int*) – Timeout of getting tasks from `task_queue`. * **req_timeout** (*int*) – Timeout of making requests for downloading pages. * ****kwargs** – Arguments passed to the [`download()`](#icrawler.downloader.Downloader.download) method. |
*class* `icrawler.downloader.``ImageDownloader`(*thread_num*, *signal*, *session*, *storage*)[[source]](_modules/icrawler/downloader.html#ImageDownloader)[¶](#icrawler.downloader.ImageDownloader)
Bases: [`icrawler.downloader.Downloader`](#icrawler.downloader.Downloader)
Downloader specialized for images.
`get_filename`(*task*, *default_ext*)[[source]](_modules/icrawler/downloader.html#ImageDownloader.get_filename)[¶](#icrawler.downloader.ImageDownloader.get_filename)
Set the path where the image will be saved.
The default strategy is to use an increasing 6-digit number as the filename. You can override this method if you want to set custom naming rules. The file extension is kept if it can be obtained from the url, otherwise `default_ext` is used as the extension.
| Parameters: | **task** (*dict*) – The task dict got from `task_queue`. |
Output: Filename with extension.
`keep_file`(*task*, *response*, *min_size=None*, *max_size=None*)[[source]](_modules/icrawler/downloader.html#ImageDownloader.keep_file)[¶](#icrawler.downloader.ImageDownloader.keep_file)
Decide whether to keep the image.
Compares the image size with `min_size` and `max_size` to decide.
| Parameters: | * **response** (*Response*) – response of requests. * **min_size** (*tuple* *or* *None*) – minimum size of required images. * **max_size** (*tuple* *or* *None*) – maximum size of required images. |
| Returns: | whether to keep the image. |
| Return type: | bool |
`worker_exec`(*max_num*, *default_ext='jpg'*, *queue_timeout=5*, *req_timeout=5*, ***kwargs*)[[source]](_modules/icrawler/downloader.html#ImageDownloader.worker_exec)[¶](#icrawler.downloader.ImageDownloader.worker_exec)
Target method of workers.
Get tasks from `task_queue`, then download files and process meta data. A downloader thread will exit in either of the following cases:
1. All parser threads have exited and the task_queue is empty.
2. The number of downloaded images has reached the required number (max_num).
| Parameters: | * **queue_timeout** (*int*) – Timeout of getting tasks from `task_queue`. * **req_timeout** (*int*) – Timeout of making requests for downloading pages. * ****kwargs** – Arguments passed to the `download()` method. |
### storage[¶](#module-icrawler.storage)
*class* `icrawler.storage.``BaseStorage`[[source]](_modules/icrawler/storage/base.html#BaseStorage)[¶](#icrawler.storage.BaseStorage)
Bases: `object`
Base class of backend storage
`exists`(*id*)[[source]](_modules/icrawler/storage/base.html#BaseStorage.exists)[¶](#icrawler.storage.BaseStorage.exists)
Check the existence of some data
| Parameters: | **id** (*str*) – unique id of the data in the storage |
| Returns: | whether the data exists |
| Return type: | bool |
`max_file_idx`()[[source]](_modules/icrawler/storage/base.html#BaseStorage.max_file_idx)[¶](#icrawler.storage.BaseStorage.max_file_idx)
Get the max existing file index
| Returns: | the max index |
| Return type: | int |
`write`(*id*, *data*)[[source]](_modules/icrawler/storage/base.html#BaseStorage.write)[¶](#icrawler.storage.BaseStorage.write)
Abstract interface for writing data
| Parameters: | * **id** (*str*) – unique id of the data in the storage. * **data** (*bytes* *or* *str*) – data to be stored. |
*class* `icrawler.storage.``FileSystem`(*root_dir*)[[source]](_modules/icrawler/storage/filesystem.html#FileSystem)[¶](#icrawler.storage.FileSystem)
Bases: `icrawler.storage.base.BaseStorage`
Use the filesystem as the storage backend. The id is the filename and data is stored as text or binary files.
`exists`(*id*)[[source]](_modules/icrawler/storage/filesystem.html#FileSystem.exists)[¶](#icrawler.storage.FileSystem.exists)
Check the existence of some data
| Parameters: | **id** (*str*) – unique id of the data in the storage |
| Returns: | whether the data exists |
| Return type: | bool |
`max_file_idx`()[[source]](_modules/icrawler/storage/filesystem.html#FileSystem.max_file_idx)[¶](#icrawler.storage.FileSystem.max_file_idx)
Get the max existing file index
| Returns: | the max index |
| Return type: | int |
`write`(*id*, *data*)[[source]](_modules/icrawler/storage/filesystem.html#FileSystem.write)[¶](#icrawler.storage.FileSystem.write)
Abstract interface for writing data
| Parameters: | * **id** (*str*) – unique id of the data in the storage. * **data** (*bytes* *or* *str*) – data to be stored.
| *class* `icrawler.storage.``GoogleStorage`(*root_dir*)[[source]](_modules/icrawler/storage/google_storage.html#GoogleStorage)[¶](#icrawler.storage.GoogleStorage)
Bases: `icrawler.storage.base.BaseStorage`
Google Storage backend. The id is the filename and data is stored as text or binary files. The root_dir is the bucket address such as gs://<your_bucket>/<your_directory>.
`exists`(*id*)[[source]](_modules/icrawler/storage/google_storage.html#GoogleStorage.exists)[¶](#icrawler.storage.GoogleStorage.exists)
Check the existence of some data
| Parameters: | **id** (*str*) – unique id of the data in the storage |
| Returns: | whether the data exists |
| Return type: | bool |
`max_file_idx`()[[source]](_modules/icrawler/storage/google_storage.html#GoogleStorage.max_file_idx)[¶](#icrawler.storage.GoogleStorage.max_file_idx)
Get the max existing file index
| Returns: | the max index |
| Return type: | int |
`write`(*id*, *data*)[[source]](_modules/icrawler/storage/google_storage.html#GoogleStorage.write)[¶](#icrawler.storage.GoogleStorage.write)
Abstract interface for writing data
| Parameters: | * **id** (*str*) – unique id of the data in the storage. * **data** (*bytes* *or* *str*) – data to be stored. |
### utils[¶](#module-icrawler.utils)
*class* `icrawler.utils.``CachedQueue`(**args*, ***kwargs*)[[source]](_modules/icrawler/utils/cached_queue.html#CachedQueue)[¶](#icrawler.utils.CachedQueue)
Bases: `Queue.Queue`, `object`
Queue with cache. This queue is used in [`ThreadPool`](#icrawler.utils.ThreadPool); it enables the parser and downloader to check whether a page url or a task has been seen or processed before.
`_cache`[¶](#icrawler.utils.CachedQueue._cache) The cache; elements are stored as its keys. | Type: | OrderedDict |
`cache_capacity`[¶](#icrawler.utils.CachedQueue.cache_capacity) Maximum size of the cache. | Type: | int |
`is_duplicated`(*item*)[[source]](_modules/icrawler/utils/cached_queue.html#CachedQueue.is_duplicated)[¶](#icrawler.utils.CachedQueue.is_duplicated)
Check whether the item is already in the cache.
If the item has not been seen before, it is hashed and put into the cache; otherwise the item is reported as duplicated. When the cache size exceeds capacity, the earliest items in the cache are discarded.
| Parameters: | **item** (*object*) – The item to be checked and stored in the cache. It must be immutable or a list/dict. |
| Returns: | Whether the item is already in the cache. |
| Return type: | bool |
`put`(*item*, *block=True*, *timeout=None*, *dup_callback=None*)[[source]](_modules/icrawler/utils/cached_queue.html#CachedQueue.put)[¶](#icrawler.utils.CachedQueue.put)
Put an item into the queue if it is not duplicated.
`put_nowait`(*item*, *dup_callback=None*)[[source]](_modules/icrawler/utils/cached_queue.html#CachedQueue.put_nowait)[¶](#icrawler.utils.CachedQueue.put_nowait)
Put an item into the queue without blocking.
Only enqueue the item if a free slot is immediately available. Otherwise raise the Full exception.
*class* `icrawler.utils.``Proxy`(*addr=None*, *protocol='http'*, *weight=1.0*, *last_checked=None*)[[source]](_modules/icrawler/utils/proxy_pool.html#Proxy)[¶](#icrawler.utils.Proxy)
Bases: `object`
Proxy class
`addr`[¶](#icrawler.utils.Proxy.addr) A string with IP and port, for example ‘123.123.123.123:8080’ | Type: | str |
`protocol`[¶](#icrawler.utils.Proxy.protocol) ‘http’ or ‘https’ | Type: | str |
`weight`[¶](#icrawler.utils.Proxy.weight) A floating-point number indicating the probability of being selected; the weight is based on the connection time and stability | Type: | float |
`last_checked`[¶](#icrawler.utils.Proxy.last_checked) A UNIX timestamp indicating when the proxy was last checked | Type: | time |
`format`()[[source]](_modules/icrawler/utils/proxy_pool.html#Proxy.format)[¶](#icrawler.utils.Proxy.format)
Return the proxy in a format compatible with requests.Session parameters
| Returns: | A dict like {‘http’: ‘123.123.123.123:8080’} |
| Return type: | dict |
`to_dict`()[[source]](_modules/icrawler/utils/proxy_pool.html#Proxy.to_dict)[¶](#icrawler.utils.Proxy.to_dict)
Convert detailed proxy info into a dict
| Returns: | A dict with four keys: `addr`, `protocol`, `weight` and `last_checked` |
| Return type: | dict |
*class* `icrawler.utils.``ProxyPool`(*filename=None*)[[source]](_modules/icrawler/utils/proxy_pool.html#ProxyPool)[¶](#icrawler.utils.ProxyPool)
Bases: `object`
Proxy pool class
ProxyPool provides friendly APIs to manage proxies.
`idx`[¶](#icrawler.utils.ProxyPool.idx) Index for the http proxy list and the https proxy list. | Type: | dict |
`test_url`[¶](#icrawler.utils.ProxyPool.test_url) A dict containing two urls; when testing if a proxy is valid, test_url[‘http’] and test_url[‘https’] will be used according to the protocol. | Type: | dict |
`proxies`[¶](#icrawler.utils.ProxyPool.proxies) All the http and https proxies. | Type: | dict |
`addr_list`[¶](#icrawler.utils.ProxyPool.addr_list) Addresses of proxies. | Type: | dict |
`dec_ratio`[¶](#icrawler.utils.ProxyPool.dec_ratio) When decreasing the weight of some proxy, its weight is multiplied by dec_ratio. | Type: | float |
`inc_ratio`[¶](#icrawler.utils.ProxyPool.inc_ratio) Similar to dec_ratio but used for increasing weights; defaults to the reciprocal of dec_ratio. | Type: | float |
`weight_thr`[¶](#icrawler.utils.ProxyPool.weight_thr) The minimum weight of a valid proxy; if the weight of a proxy is lower than weight_thr, it will be removed. | Type: | float |
`logger`[¶](#icrawler.utils.ProxyPool.logger) A logging.Logger object used for logging. | Type: | Logger |
`add_proxy`(*proxy*)[[source]](_modules/icrawler/utils/proxy_pool.html#ProxyPool.add_proxy)[¶](#icrawler.utils.ProxyPool.add_proxy)
Add a valid proxy into the pool
You must call the add_proxy method to add a proxy into the pool instead of directly operating on the proxies variable.
`decrease_weight`(*proxy*)[[source]](_modules/icrawler/utils/proxy_pool.html#ProxyPool.decrease_weight)[¶](#icrawler.utils.ProxyPool.decrease_weight)
Decrease the weight of a proxy by multiplying it by dec_ratio
`default_scan`(*region='mainland'*, *expected_num=20*, *val_thr_num=4*, *queue_timeout=3*, *val_timeout=5*, *out_file='proxies.json'*, *src_files=None*)[[source]](_modules/icrawler/utils/proxy_pool.html#ProxyPool.default_scan)[¶](#icrawler.utils.ProxyPool.default_scan)
Default scan method, to simplify the usage of the scan method. It will register the following scan functions:
1. scan_file
2. scan_cnproxy (if region is mainland)
3. scan_free_proxy_list (if region is overseas)
4. scan_ip84
5. scan_mimiip
After scanning, all the proxy info will be saved in out_file.
| Parameters: | * **region** – Either ‘mainland’ or ‘overseas’ * **expected_num** – An integer indicating the expected number of proxies; if this argument is set too large, the scanning process may take a long time to finish. * **val_thr_num** – Number of threads used for validating proxies. * **queue_timeout** – An integer indicating the timeout for getting a candidate proxy from the queue. * **val_timeout** – An integer indicating the timeout when connecting to the test url using a candidate proxy. * **out_file** – the filename of the output file saving all the proxy info * **src_files** – A list of filenames to scan |
`get_next`(*protocol='http'*, *format=False*, *policy='loop'*)[[source]](_modules/icrawler/utils/proxy_pool.html#ProxyPool.get_next)[¶](#icrawler.utils.ProxyPool.get_next)
Get the next proxy
| Parameters: | * **protocol** (*str*) – ‘http’ or ‘https’. (default ‘http’) * **format** (*bool*) – Whether to format the proxy. (default False) * **policy** (*str*) – Either ‘loop’ or ‘random’, indicating the policy of getting the next proxy. If set to ‘loop’, proxies are returned in turn; otherwise a proxy is returned at random. |
| Returns: | If format is true, the formatted proxy compatible with requests.Session parameters is returned; otherwise a Proxy object. |
| Return type: | [Proxy](index.html#icrawler.utils.Proxy) or dict |
`increase_weight`(*proxy*)[[source]](_modules/icrawler/utils/proxy_pool.html#ProxyPool.increase_weight)[¶](#icrawler.utils.ProxyPool.increase_weight)
Increase the weight of a proxy by multiplying it by inc_ratio
`is_valid`(*addr*, *protocol='http'*, *timeout=5*)[[source]](_modules/icrawler/utils/proxy_pool.html#ProxyPool.is_valid)[¶](#icrawler.utils.ProxyPool.is_valid)
Check if a proxy is valid
| Parameters: | * **addr** – A string in the form of ‘ip:port’ * **protocol** – Either ‘http’ or ‘https’; different test urls will be used according to the protocol. * **timeout** – An integer indicating the timeout of connecting to the test url. |
| Returns: | If the proxy is valid, returns {‘valid’: True, ‘response_time’: xx}, otherwise returns {‘valid’: False, ‘msg’: ‘xxxxxx’}. |
| Return type: | dict |
`load`(*filename*)[[source]](_modules/icrawler/utils/proxy_pool.html#ProxyPool.load)[¶](#icrawler.utils.ProxyPool.load)
Load proxies from a file
`proxy_num`(*protocol=None*)[[source]](_modules/icrawler/utils/proxy_pool.html#ProxyPool.proxy_num)[¶](#icrawler.utils.ProxyPool.proxy_num)
Get the number of proxies in the pool
| Parameters: | **protocol** (*str**,* *optional*) – ‘http’ or ‘https’ or None. (default None) |
| Returns: | If protocol is None, return the total number of proxies; otherwise, return the number of proxies of the corresponding protocol. |
`remove_proxy`(*proxy*)[[source]](_modules/icrawler/utils/proxy_pool.html#ProxyPool.remove_proxy)[¶](#icrawler.utils.ProxyPool.remove_proxy)
Remove a proxy from the pool
`save`(*filename*)[[source]](_modules/icrawler/utils/proxy_pool.html#ProxyPool.save)[¶](#icrawler.utils.ProxyPool.save)
Save proxies to a file
`scan`(*proxy_scanner*, *expected_num=20*, *val_thr_num=4*, *queue_timeout=3*, *val_timeout=5*, *out_file='proxies.json'*)[[source]](_modules/icrawler/utils/proxy_pool.html#ProxyPool.scan)[¶](#icrawler.utils.ProxyPool.scan)
Scan and validate proxies
First the scan method of proxy_scanner is called, then multiple threads are used to validate the candidates.
| Parameters: | * **proxy_scanner** – A ProxyScanner object.
* **expected_num** – Max number of valid proxies to be scanned. * **val_thr_num** – Number of threads used for validating proxies. * **queue_timeout** – Timeout for getting a proxy from the queue. * **val_timeout** – An integer passed to is_valid as the timeout argument. * **out_file** – A string or None. If not None, the proxies will be saved into out_file. |
`validate`(*proxy_scanner*, *expected_num=20*, *queue_timeout=3*, *val_timeout=5*)[[source]](_modules/icrawler/utils/proxy_pool.html#ProxyPool.validate)[¶](#icrawler.utils.ProxyPool.validate)
Target function of validation threads
| Parameters: | * **proxy_scanner** – A ProxyScanner object. * **expected_num** – Max number of valid proxies to be scanned. * **queue_timeout** – Timeout for getting a proxy from the queue. * **val_timeout** – An integer passed to is_valid as the timeout argument. |
*class* `icrawler.utils.``ProxyScanner`[[source]](_modules/icrawler/utils/proxy_pool.html#ProxyScanner)[¶](#icrawler.utils.ProxyScanner)
Proxy scanner class
ProxyScanner focuses on scanning proxy lists from different sources.
`proxy_queue`[¶](#icrawler.utils.ProxyScanner.proxy_queue) The queue for storing proxies.
`scan_funcs`[¶](#icrawler.utils.ProxyScanner.scan_funcs) Names of the functions to be used in the scan method.
`scan_kwargs`[¶](#icrawler.utils.ProxyScanner.scan_kwargs) Arguments of the scan functions.
`scan_threads`[¶](#icrawler.utils.ProxyScanner.scan_threads) A list of threading.Thread objects.
`logger`[¶](#icrawler.utils.ProxyScanner.logger) A logging.Logger object used for logging.
`is_scanning`()[[source]](_modules/icrawler/utils/proxy_pool.html#ProxyScanner.is_scanning)[¶](#icrawler.utils.ProxyScanner.is_scanning)
Return whether at least one scanning thread is alive
`register_func`(*func_name*, *func_kwargs*)[[source]](_modules/icrawler/utils/proxy_pool.html#ProxyScanner.register_func)[¶](#icrawler.utils.ProxyScanner.register_func)
Register a scan function
| Parameters: | * **func_name** – The function name of a scan function. * **func_kwargs** – A dict containing arguments of the scan function. |
`scan`()[[source]](_modules/icrawler/utils/proxy_pool.html#ProxyScanner.scan)[¶](#icrawler.utils.ProxyScanner.scan)
Start a thread for each registered scan function to scan proxy lists
`scan_cnproxy`()[[source]](_modules/icrawler/utils/proxy_pool.html#ProxyScanner.scan_cnproxy)[¶](#icrawler.utils.ProxyScanner.scan_cnproxy)
Scan candidate (mainland) proxies from <http://cn-proxy.com>
`scan_file`(*src_file*)[[source]](_modules/icrawler/utils/proxy_pool.html#ProxyScanner.scan_file)[¶](#icrawler.utils.ProxyScanner.scan_file)
Scan candidate proxies from an existing file
`scan_free_proxy_list`()[[source]](_modules/icrawler/utils/proxy_pool.html#ProxyScanner.scan_free_proxy_list)[¶](#icrawler.utils.ProxyScanner.scan_free_proxy_list)
Scan candidate (overseas) proxies from <http://free-proxy-list.net>
`scan_ip84`(*region='mainland'*, *page=1*)[[source]](_modules/icrawler/utils/proxy_pool.html#ProxyScanner.scan_ip84)[¶](#icrawler.utils.ProxyScanner.scan_ip84)
Scan candidate proxies from <http://ip84.com>
| Parameters: | * **region** – Either ‘mainland’ or ‘overseas’. * **page** – An integer indicating how many pages to scan. |
`scan_mimiip`(*region='mainland'*, *page=1*)[[source]](_modules/icrawler/utils/proxy_pool.html#ProxyScanner.scan_mimiip)[¶](#icrawler.utils.ProxyScanner.scan_mimiip)
Scan candidate proxies from <http://mimiip.com>
| Parameters: | * **region** – Either ‘mainland’ or ‘overseas’. * **page** – An integer indicating how many pages to scan.
| *class* `icrawler.utils.``Session`(*proxy_pool*)[[source]](_modules/icrawler/utils/session.html#Session)[¶](#icrawler.utils.Session)
Bases: `requests.sessions.Session`
`get`(*url*, ***kwargs*)[[source]](_modules/icrawler/utils/session.html#Session.get)[¶](#icrawler.utils.Session.get)
Sends a GET request. Returns a `Response` object.
| Parameters: | * **url** – URL for the new `Request` object. * ****kwargs** – Optional arguments that `request` takes. |
| Return type: | requests.Response |
`post`(*url*, *data=None*, *json=None*, ***kwargs*)[[source]](_modules/icrawler/utils/session.html#Session.post)[¶](#icrawler.utils.Session.post)
Sends a POST request. Returns a `Response` object.
| Parameters: | * **url** – URL for the new `Request` object. * **data** – (optional) Dictionary, list of tuples, bytes, or file-like object to send in the body of the `Request`. * **json** – (optional) json to send in the body of the `Request`. * ****kwargs** – Optional arguments that `request` takes. |
| Return type: | requests.Response |
*class* `icrawler.utils.``Signal`[[source]](_modules/icrawler/utils/signal.html#Signal)[¶](#icrawler.utils.Signal)
Bases: `object`
Signal class
Provides interfaces for setting and getting some globally shared variables (signals).
`signals`[¶](#icrawler.utils.Signal.signals) A dict of all signal names and values.
`init_status`[¶](#icrawler.utils.Signal.init_status) The initial values of all signals.
`get`(*name*)[[source]](_modules/icrawler/utils/signal.html#Signal.get)[¶](#icrawler.utils.Signal.get)
Get a signal value by its name.
| Parameters: | **name** – a string indicating the signal name. |
| Returns: | Value of the signal, or None if the name is invalid. |
`names`()[[source]](_modules/icrawler/utils/signal.html#Signal.names)[¶](#icrawler.utils.Signal.names)
Return all the signal names
`reset`()[[source]](_modules/icrawler/utils/signal.html#Signal.reset)[¶](#icrawler.utils.Signal.reset)
Reset signals to their initial values
`set`(***signals*)[[source]](_modules/icrawler/utils/signal.html#Signal.set)[¶](#icrawler.utils.Signal.set)
Set signals.
| Parameters: | **signals** – A dict (key-value pairs) of all signals. For example {‘signal1’: True, ‘signal2’: 10} |
*class* `icrawler.utils.``ThreadPool`(*thread_num*, *in_queue=None*, *out_queue=None*, *name=None*)[[source]](_modules/icrawler/utils/thread_pool.html#ThreadPool)[¶](#icrawler.utils.ThreadPool)
Bases: `object`
Simple implementation of a thread pool
This is the base class of `Feeder`, `Parser` and `Downloader`; it incorporates two FIFO queues and a number of “workers”, namely threads. All threads share the two queues. After each thread starts, it watches the `in_queue`; once the queue is not empty, it gets a task from the queue, processes it as wanted, and then puts the output into `out_queue`.
Note
This class is not designed as a generic thread pool, but works specifically for crawler components.
`name`[¶](#icrawler.utils.ThreadPool.name) thread pool name. | Type: | str |
`thread_num`[¶](#icrawler.utils.ThreadPool.thread_num) number of available threads. | Type: | int |
`in_queue`[¶](#icrawler.utils.ThreadPool.in_queue) input queue of tasks. | Type: | Queue |
`out_queue`[¶](#icrawler.utils.ThreadPool.out_queue) output queue of finished tasks. | Type: | Queue |
`workers`[¶](#icrawler.utils.ThreadPool.workers) a list of working threads. | Type: | list |
`lock`[¶](#icrawler.utils.ThreadPool.lock) thread lock. | Type: | Lock |
`logger`[¶](#icrawler.utils.ThreadPool.logger) standard python logger.
| Type: | Logger | `connect`(*component*)[[source]](_modules/icrawler/utils/thread_pool.html#ThreadPool.connect)[¶](#icrawler.utils.ThreadPool.connect) Connect two ThreadPools. The `in_queue` of the second pool will be set as the `out_queue` of the current pool, thus all the output will be input to the second pool. | Parameters: | **component** ([*ThreadPool*](index.html#icrawler.utils.ThreadPool)) – the ThreadPool to be connected. | | Returns: | the modified second ThreadPool. | | Return type: | [ThreadPool](index.html#icrawler.utils.ThreadPool) | Release notes[¶](#release-notes) --- ### 0.6.1 (2018-05-25)[¶](#section-1) * **New**: Add an option to skip downloading when the file already exists. ### 0.6.0 (2018-03-17)[¶](#section-2) * **New**: Make the api of search engine crawlers (GoogleImageCrawler, BingImageCrawler, BaiduImageCrawler) universal, add the argument `filters` and remove arguments `img_type`, `img_color`, `date_min`, etc. * **New**: Add more search options (type, color, size, layout, date, people, license) for Bing (Thanks [@kirtanp](https://github.com/kirtanp)). * **New**: Add more search options (type, color, size) for Baidu. * **Fix**: Fix the json parsing error of `BaiduImageCrawler` when some invalid escaped characters exist.
cdcmd
ctan
TeX
# The cdcmd package

<NAME> (Longaster)
Email: <EMAIL>
Released October 12, 2021, version v1.0

###### Abstract

cdcmd is a package that allows you to define 'polymorphic' commands. Like the styledcmd package, it lets you define \protected commands, but cdcmd can define expandable conditional commands as well.

## 1 Main Interface

\newcondition{⟨identifier⟩}{⟨ids⟩}
\setcondition{⟨identifier=ids list⟩}
\clearcondition[⟨identifier(s)⟩]

\newcondition defines a new ⟨identifier⟩ and its ⟨ids⟩. The leading and trailing spaces in ⟨identifier⟩ will be removed. \setcondition sets the ⟨ids⟩ of ⟨identifier⟩ locally; the un-+ version will clear the ⟨ids⟩ formerly set. Both ⟨identifier⟩ and ⟨id⟩ cannot be *. \clearcondition will clear the ids of the given ⟨identifier(s)⟩ locally; the default value is *, that is, clear all.

\conditionif * [⟨identifier=ids list⟩] {⟨true⟩} {⟨false⟩}
\conditioncmd * [⟨identifier=ids list⟩] {⟨material⟩}

When ⟨identifier=ids list⟩ makes a true condition, ⟨true⟩/⟨material⟩ is left in the input stream; ⟨false⟩ is left when the condition is false. The starred version means all, the unstarred version any. See below for more details. The e-versions of \conditionif and \conditioncmd are expandable (f-expandable); \conditionif and \conditioncmd themselves are \protected. The default value of ⟨identifier=ids list⟩ is *, which will leave ⟨true⟩/⟨material⟩ in the input stream.

4. ⟨identifier=ids list⟩ is a defined ⟨identifier⟩, and *all of* the ⟨ids⟩ have been set, such as paper=b5 or paper={a5,b5}. Any id that has not been set for a defined ⟨identifier⟩ will evaluate to false, except *, because the identifier never has that id defined, even when ⟨ids⟩ is empty (defined=);
5. *All* items in ⟨identifier=ids list⟩ match any of the statements listed above, such as paper={a5,b5},defined.

\newconditioncommand * ⟨function⟩ [⟨arg nums⟩] [⟨default⟩] {⟨code⟩}
\newconditioncommand * ⟨function⟩ [⟨arg nums⟩] {⟨code⟩}

Those commands are just like \newcommand, \renewcommand, etc. They will define commands used like \foo+{⟨identifier=ids list⟩}⟨args⟩. The optional argument cannot contain \par. The e-version commands define expandable commands, and cannot set a default value. However you can use the xparse-like commands illustrated below, which can set default values. The unstarred version is \long, just like LaTeX's. The new ⟨function⟩ will take one optional argument, +, which acts just like the * in \conditionif, etc., and one mandatory argument ⟨identifier=ids list⟩. After absorbing these two arguments, it absorbs arguments according to the given ⟨arg nums⟩, or uses ⟨default⟩, if given.

\NewConditionCommand ⟨function⟩ {⟨arg spec⟩} {⟨code⟩}

\RenewConditionCommand, \ProvideConditionCommand and \DeclareConditionCommand behave analogously. The ⟨arg spec⟩ must follow the rules of the xparse package. The new ⟨function⟩ will take one optional argument, +, which acts just like the * in \conditionif, etc., and one mandatory argument ⟨identifier=ids list⟩. After absorbing these two arguments, it absorbs arguments according to the given ⟨arg spec⟩.
## 2 Examples \newcondition{defined}{} \newcondition{paper}{a4,a5,b5} \setcondition{paper={a5,b5}} \conditionif [*]{t}{f}: t \conditionif [defined]{t}{f}: f \conditionif [defined=]{t}{f}: t \conditionif [defined=*]{t}{f}: t \conditionif [defined=a]{t}{f}: f {conditioncaseTF!{ {paper=a3}{a3} {paper=a4}{a4} {paper,defined}{pd} }{true}{false} a3true \newconditioncommand\longprotectedcdcmd{longprotectedcdcmd} \newconditioncommand\longprotectedcdcmdi[1]{longprotectedcdcmdi<#1>} \newconditioncommand\longprotectedcdcmdio[1][DFT]{longprotectedcdcmdio<#1>} \newconditioncommand*\shortprotectedcdcmd{shortprotectedcdcmd} \newconditioncommand*\shortprotectedcdcmdi[1]{shortprotectedcdcmdi<#1>} \newconditioncommand*\shortprotectedcdcmdio[1][DFT]{shortprotectedcdcmdio<#1>} \section{paper={a4,a5}} \longprotectedcdcmd{*} \longprotectedcdcmdi{*}{1\par arg} \longprotectedcdcmdio{*} \longprotectedcdcmdio{*}[1opt] \longprotectedcdcmdio{paper=a4}[1opt a4] \longprotectedcdcmdio+{paper={a4,a7}}[1opt a4a7] \shortprotectedcdcmd{*} \shortprotectedcdcmdi{*}{1\par arg} \shortprotectedcdcmdio{*} \shortprotectedcdcmdio{*}[1opt] \shortprotectedcdcmdio{paper=a4}[1opt a4] \shortprotectedcdcmdio+{paper={a4,a7}}[1opt a4a7] longprotectedcdcmd longprotectedcdcmdi<1 \arg> longprotectedcdcmdio<DFT> longprotectedcdcmdio<1opt> longprotectedcdcmdio<1opt a4> shortprotectedcdcmdio<1arg> shortprotectedcdcmdio<DFT> shortprotectedcdcmdio<DFT> shortprotectedcdcmdio<1opt> shortprotectedcdcmdio<1opt a4## 3 For package authors The meaning should be obvious. The meaning should be obvious. The meaning should be obvious. ## 4 Implementation ``` 1(*package) 2(@e=cdcmd) 3\strconst:Nm\c_cdcmd_all_str{*} 4\clist_new:N\g_cdcmd_clist 5\bool_new:N\l_cdcmd_clear_set_bool 6\msg_new:nnn{cdcmd}{condition-exist} 7{The-condition-'#1'-you-try-to-new-already-exists.} 8\msg_new:nnn{cdcmd}{condition-not-exist} 9{The-condition-'#1'-not-exists.} 10\msg_new:nnn{cdcmd}{condition-id-not-exist} 11{The-id-'#2'of-condition-'#1'-not-exists.} 12\cdcmd_if_exist:nTF 13\cdcmd_id_exist:nTF 14\gr_new_conditional:Npnn\cdcmd_id_exist:n#1{p,T,F,TF} 15\gr_return_true:}{\prg_return_false:} 16} ``` Condition \(\cdcmd_id_if\exists:nTF\)ID\(\langle id\rangle\) of condition \(\langle indentifier\rangle\) if exist. ``` 1\prg_new_conditional:Npnn\cdcmd_cd_id_if_exist:nn#1#2{T,F,TF} 18{ 19\clist_if_in:cnTF{c_cdcmd_condition@#1_clist}{#2} 20{\prg_return_true:}{\prg_return_false:} 21} ``` (_End definition for \cdcmd_id_if\exists:nTF\). This function is documented on page??.) ``` 1\cdcmd_cd_id_if_exist:nnTFID\(\langle id\rangle\) of condition \(\langle indentifier\rangle\) if exist. 17\prg_new_conditional:Npnn\cdcmd_cd_id_if_exist:nn#1#2{T,F,TF} 18{ 19\clist_if_in:cnTF{c_cdcmd_condition@#1_clist}{#2} 20{\prg_return_true:}{\prg_return_false:} 21} ``` (_End definition for \cdcmd_id_if\exists:nTF\). This function is documented on page??.) __ccdm_clist_if_in_p:Nn__ccdm_clist_if_in_p:NV__ccdm_clist_if_in_p:Nw__ccdm_clist_if_in_p:cN__ccdm_clist_if_in_p:cN__ccdm_clist_if_in_p:cN__ccdm_clist_if_in_p:cN__ccdm_clist_if_in_p:cN__ccdm_clist_if_in_p:cN__ccdm_clist_if_in_p:n__ccdm_clist_if_in_p:cN__ccdm_clist_if_in_p:nV__ccdm_clist_if_in_p:n } 268 \IfBooleanTF {#1} 269 { \ccdcmd_all_if:nTF } 270 { \ccdcmd_any_if:nTF } 271 {#2} {#3} {#4} 272 } 273 \NewDocumentCommand \conditioncmd { s O{*} +m } 274 { 275 \IfBooleanTF {#1} 276 { \ccdcmd_all_if:nTF } 277 { \ccdcmd_any_if:nTF } 278 {#2} {#3} { 279 } 280 \NewDocumentCommand \conditioncase { s t! 
+m } 281 { 282 \IfBooleanTF {#2} 283 { 284 \IfBooleanTF {#1} 285 \cdcmd_all_case_false:n {#3} } 286 { \ccdcmd_any_case_false:n {#3} } 287 } 288 { 289 \IfBooleanTF {#1} 290 { \ccdcmd_all_case_true:n {#3} } 291 { \ccdcmd_any_case_true:n {#3} } 292 } 293 } 294 \NewDocumentCommand \conditioncaseTF { s t! +m } 295 { 296 \IfBooleanTF {#2} 297 { 298 \IfBooleanTF {#1} 299 \ccdcmd_all_case_false:nTF {#3} } 300 { \ccdcmd_any_case_false:nTF {#3} } 301 } 302 { 303 \IfBooleanTF {#1} 304 { \ccdcmd_all_case_true:nTF {#3} } 305 { \ccdcmd_any_case_true:nTF {#3} } 306 } 307 } Define new xparse like conditional command. 308 \str_const:Nn \c_cdcmd_pair_u_str { cdcmd@u6 } 309 \str_const:Nn \c_cdcmd_pair_u_str { cdcmd@n6 } 310 \cs_new_nopar:Npn _cdcmd_cs_pair_u:N #1 311 { \c_cdcmd_pair_u_str \cs_to_str:N #1 } 312 \cs_new_nopar:Npn _cdcmd_cs_pair_n:N #1 313 { \c_cdcmd_pair_n_str \cs_to_str:N #1 } 314 \cs_new:Npn _cdcmd_arg_spec_from_num:nn #1#2 315 { 316 \if_case:w 0#1 \exp_stop_f: 317 \or: #2%or: #2%or: #2%2#2%2 \or: #2%2#2%2#2%2%2 \or: #2%2#2%2#2#2%2%2 \or: #2%2#2%2#2%2#2%2%2 \f: 318 \or: #2%2#2%2#2%2# } } {IfNoValueTF{##4} { use:c { __cdcmd_ #1 _cdcmd_p_l_num:Nnn } ##2 {##3} {#5} { use:c { __cdcmd_ #1 _cdcmd_o_num:Nnnn } ##2 {##3} {#4} {#5} {+m} } } } } } *\list_map_function:nN { new, renew, declare } _cdcmd_new_cdcmd_cmd_ne_aux:n *NewDocumentCommand \proideconditioncommand { s m 0{0} o +m } * { \cs_if_free:NT #2 { \IfBooleanTF{#1} { \IfNoValueTF{#4} { \newconditioncommand * #2 [#3] {#5} } { \newconditioncommand * #2 [#3] {#4} {#5} } } } } * \IfNoValueTF{#4} { \newconditioncommand #2 [#3] {#5} } { \newconditioncommand #2 [#3] [#4] {#5} } } } } * \int_step_inline:nnnn { 7 } { 1 } { 12 } { _cdcmd_new_cdcmd_cmd_no:xxx { \seq_item:Nn \c_cdcmd_CMD_no_seq {#1} } { \seq_item:Nn \c_cdcmd_cmd_no_seq {#1} } { \seq_item:Nn \c_cdcmd_Cmd_no_seq {#1} } * \cs_new_protected:Npn _cdcmd_new_cdcmd_cmd_e_no_aux:n #1 * { \exp_args:Nc \NewDocumentCommand { #1 conditioncommand } { s m 0{0} +m } * \IfBooleanTF{##1} { \use:c { __cdcmd_ #1 _cdcmd_np_nl_num:Nnn } ##2 {##3} {#44} } { \use:c { __cdcmd_ #1 _cdcmd_np_l_num:Nnn } ##2 {##3} {#44} } * \list_map_function:nN { new, renew, declare } _cdcmd_new_cdcmd_end_e_no_aux:n *NewDocumentCommand \proideconditioncommand { s m 0{0} +m } * \cs_if_free:NT #2 * { \IfBooleanTF{#1} { \newconditioncommand * #2 [#3] {#4} } * \newconditioncommand #2 [#2] {#4} } * \
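To complement the extracted listings above, here is a minimal end-to-end usage sketch built only from the interface of Section 1 and the examples of Section 2; the document body text is illustrative only:

```
\documentclass{article}
\usepackage{cdcmd}

% Define a condition named 'paper' with three possible ids.
\newcondition{paper}{a4,a5,b5}

\begin{document}

% Set which ids currently hold (locally).
\setcondition{paper={a5,b5}}

% Leaves 'small paper' in the input stream: the unstarred version
% is 'any', and the id a5 has been set. The starred version
% \conditionif* would require all listed ids to be set.
\conditionif[paper={a4,a5}]{small paper}{other paper}

% Leave material in the input stream only if the condition holds.
\conditioncmd[paper=b5]{This only appears when b5 is set.}

\end{document}
```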
smoothROCtime
cran
R
Package ‘smoothROCtime’
October 14, 2022
Type Package
Title Smooth Time-Dependent ROC Curve Estimation
Version 0.1.0
Author <NAME> <<EMAIL>>
Maintainer <NAME> <<EMAIL>>
Imports ks
Suggests KMsurv, lattice, survival
Description Computes smooth estimations for the Cumulative/Dynamic and Incident/Dynamic ROC curves, in presence of right censorship, based on the bivariate kernel density estimation of the joint distribution function of the Marker and Time-to-event variables.
License GPL
LazyData TRUE
RoxygenNote 6.0.1
NeedsCompilation no
Repository CRAN
Date/Publication 2018-11-14 10:40:03 UTC

R topics documented:
smoothROCtime-package
funcen
plot.sROCt
stRoc

smoothROCtime-package Smooth Time-Dependent ROC Curve Estimation

Description
Computes smooth estimations for the Cumulative/Dynamic and Incident/Dynamic ROC curves, in presence of right censorship, based on the bivariate kernel density estimation of the joint distribution function of the Marker and Time-to-event variables.

Details
• funcen: Bivariate kernel density estimation of the joint density function of the (marker, time-to-event) variable.
• stRoc: Smooth estimations for Cumulative/Dynamic and Incident/Dynamic ROC curves.
• plot.sROCt: Plots of Cumulative/Dynamic and Incident/Dynamic ROC curve estimations.

Author(s)
<NAME> <<EMAIL>>
Maintainer: <NAME> <<EMAIL>>

References
<NAME> and <NAME>. Smooth time-dependent receiver operating characteristic curve estimators. Statistical Methods in Medical Research, 27(3):651-674, 2018. https://doi.org/10.1177/0962280217740786.
<NAME>, <NAME>, and <NAME>. Cumulative/dynamic ROC curve estimation. Journal of Statistical Computation and Simulation, 86(17):3582-3594, 2016. https://doi.org/10.1080/00949655.2016.1175442.
<NAME>. Bandwidth matrices for multivariate kernel density estimation. Ph.D. Thesis, University of Western Australia, 2004.
<NAME>, <NAME>, and <NAME>. A simple method to estimate the time-dependent receiver operating characteristic curve and the area under the curve with right censored data. Statistical Methods in Medical Research, 27(8), 2016. https://doi.org/10.1177/0962280216680239.

See Also
CRAN package ks is used in this package.

funcen Bivariate kernel density estimation under random censoring

Description
Computes the kernel density estimation of the bivariate vector (marker, time-to-event), with the time-to-event variable subject to right censorship, according to the procedure described in https://doi.org/10.1177/0962280217740786.

Usage
funcen(data, H, bw, adj, ...)

Arguments
data matrix with three columns: time-to-event, censoring status (0=censored/1=uncensored) and marker.
H 2x2 bandwidth matrix when it is specified in an explicit way.
bw method for computing the bandwidth matrix. Most of the methods included in the kde function can be used: Hpi, Hpi.diag, Hlscv, Hlscv.diag, Hbcv, Hbcv.diag, Hscv, Hscv.diag, Hucv and Hucv.diag. Other considered methods are naive.pdf (diag(N^-1/5, N^-1/5)^2) and naive.cdf (diag(N^-1/3, N^-1/3)^2), where N is the sample size.
adj adjustment parameter for calculating the bandwidth matrix. Default value 1.
... kde function arguments can also be used for specifying the way in which the kernel density function estimation should be computed.

Details
The matrix of bandwidths can be defined by using H=matrix() or automatically selected by the method indicated in bw. Given the matrix of bandwidths, H, the argument adj modifies it and the final computed matrix is adj^2 H.
If H is missing, the naive.pdf method is used for obtaining the kernel density estimation.
Function funcen generates, from the original set of data, a collection of pseudodata through an iterative weights allocation process, with two main goals: keep the information from the censored observations represented in the sample, and prepare the data so they can be used as incoming parameters in the kde function included in the ks package. A weighted kernel density estimation is therefore finally computed.
There should be at least two uncensored observations for computing the density estimation.
Omitted parameters are considered to be the default ones in the kde function.

Value
An object of class kde is returned. It is a list where the most relevant values are:
x matrix containing the pseudodata values. It has two columns: marker and time-to-event.
eval.points list of points where the bivariate kernel estimation is calculated.
estimate values of the density estimation.
H bandwidth matrix.
names variable names.
w weights calculated by the function and allocated to the pseudodata.

References
<NAME> and <NAME>. Smooth time-dependent receiver operating characteristic curve estimators. Statistical Methods in Medical Research, 27(3):651-674, 2018. https://doi.org/10.1177/0962280217740786.
<NAME>. Bandwidth matrices for multivariate kernel density estimation. Ph.D. Thesis, University of Western Australia, 2004. http://www.mvstat.net/tduong.

Examples
library(smoothROCtime)
require(KMsurv)
require(lattice)
data(kidtran)
# Preparing data: a logarithmic transformation of the time-to-event variable is made
DT <- cbind(log(kidtran$time),kidtran$delta,kidtran$age)
n <- length(log(kidtran$time))
# Direct definition of the bandwidth matrix
H <- diag((c(sd(kidtran$age),sd(log(kidtran$time)))*n^(-0.2))^2)
# Kernel density function estimation
density <- funcen(data=DT,H=H)
# Plot graphics
wireframe(density$estimate, row.values=density$eval.points[[1]],
          column.values=density$eval.points[[2]], zlab="density")
contour(x=density$eval.points[[1]], y=density$eval.points[[2]],
        z=density$estimate, ylim=c(6,10))

plot.sROCt Plots of time-dependent ROC curve estimations

Description
Plots of both Cumulative/Dynamic and Incident/Dynamic ROC curve estimations, provided by the function stRoc.

Usage
## S3 method for class 'sROCt'
plot(x, tcr, xlab, ylab, type = "l", lwd = 5, ...)

Arguments
x object of class sROCt generated with the stRoc function and containing the estimations of the time-dependent ROC curves for one single point or a vector of points.
tcr type of time-dependent ROC curve estimation that will be plotted:
• “C” for Cumulative/Dynamic,
• “I” for Incident/Dynamic,
• “B” for Both time-dependent ROC curve estimations.
xlab a title for the x axis. The default value is "False - Positive Rate".
ylab a title for the y axis. The default value is "True - Positive Rate".
type what type of plot is going to be drawn. The default value is "l" and a line will be plotted.
lwd line width. As a default value "5" is taken.
... plot function arguments can also be used for customizing the plot.

Details
Parameter tcr is mandatory, with no default value. If "B" is indicated and the sROCt object placed as the x parameter contains only one type of time-dependent ROC curve estimation, an error message will be returned. Another error message will appear in case of placing either "C" or "I" when the sROCt object does not contain the suitable ROC curve estimation.
When one single type of ROC curve estimation is chosen, one graphic will be drawn for each point of time in the sROCt object, resulting in as many independent plots as there are points of time. Graphic parameters like axis labels or line width will be the same for all the plots.
In case of choosing both time-dependent ROC curve estimations, they will be plotted in a single graphic for each point of time in the sROCt object. As before, we will have as many independent plots as points of time and the graphic parameters will be the same in all plots.

Examples
library(smoothROCtime)
require(survival)
# Monoclonal Gammopathy of Undetermined Significance dataset
data(mgus)
# Time-to-event
time <- ifelse(is.na(mgus$pctime), mgus$futime, mgus$pctime)
# Status
status <- ifelse(is.na(mgus$pctime), 0, 1)
# Preparing data
DT <- as.data.frame(cbind(log(time), status, mgus$alb))
colnames(DT) <- c("futime", "pcm", "alb")
dta <- na.omit(cbind(DT$futime, DT$pcm, -DT$alb))
# Point of Time
t10 <- log(10*365.25) # ten years in logarithm scale
# Cumulative/Dynamic and Incident/Dynamic ROC curve estimations at t=10 years
rcu <- stRoc(data=dta, t=t10, tcr="B", meth = "1", verbose=TRUE)
# Plots of both ROC curve estimations
plot(rcu, tcr="B", frame=FALSE)

stRoc Smooth Time-dependent ROC curve estimations

Description
Provides smooth estimations of Cumulative/Dynamic (C/D) and Incident/Dynamic (I/D) ROC curves in the presence of right censorship, and the corresponding Areas Under the Curves (AUCs), at a single point of time or a vector of points.
• The function implements two different procedures to obtain smooth estimations of the C/D ROC curve. Both are based on the kernel density estimation of the joint distribution function of the marker and time-to-event variables, provided by the funcen function. The first method, to which we will refer as the smooth method, is carried out according to the methodology proposed in https://doi.org/10.1177/0962280217740786. The second one uses this estimation of the joint density function of the variables marker and time-to-event for computing the weights or probabilities allocated to censored observations (undefined individuals) in https://doi.org/10.1080/00949655.2016.1175442 and https://doi.org/10.1177/0962280216680239. It will be referred to as the p-kernel method.
• In the case of the I/D ROC curve, a smooth approximation procedure (smooth method) is computed, based as well on the kernel density estimation of the joint distribution function of the marker and time-to-event variables proposed in https://doi.org/10.1177/0962280217740786.

Usage
stRoc(data, t, H, bw, adj, tcr, meth, ...)

Arguments
data matrix of data values with three columns: time-to-event, censoring status (0=censored/1=uncensored) and marker.
t point of time or vector of points where the time-dependent ROC curve is estimated.
H 2x2 bandwidth matrix.
bw procedure for computing the bandwidth matrix. Most of the methods included in the kde function can be used: Hpi, Hpi.diag, Hlscv, Hlscv.diag, Hbcv, Hbcv.diag, Hscv, Hscv.diag, Hucv and Hucv.diag. Other considered methods are naive.pdf (diag(N^-1/5, N^-1/5)^2) and naive.cdf (diag(N^-1/3, N^-1/3)^2), where N is the sample size.
adj adjustment parameter for calculating the bandwidth matrix. Default value 1.
tcr type of time-dependent ROC curve that will be estimated:
• “C” for Cumulative/Dynamic,
• “I” for Incident/Dynamic,
• “B” for Both time-dependent ROC curve estimations.
meth method for computing the estimation of the C/D ROC curve. The suitable values are:
• “1” for the smooth method,
• “2” for the p-kernel method.
As default value the smooth method is taken.
... kde function arguments can be used for estimating the bivariate kernel density function.

Details
Function funcen is called from each execution of function stRoc, in order to compute the kernel density estimation of the joint distribution of the (Marker, Time-to-event) variable; therefore, the input parameters of funcen are input parameters of stRoc as well, and the same considerations apply.
The matrix of bandwidths can be defined by using H=matrix() or automatically selected by the method indicated in bw. Given the matrix of bandwidths, H, the argument adj modifies it and the final matrix is adj^2 H. If H is missing, the naive.pdf method is used.
If tcr is missing, the C/D ROC curve estimation will be computed with the method indicated in meth. If no value has been placed in meth, the smooth method will be used. The I/D ROC curve estimation will always be computed with the smooth method.

Value
An object of class sROCt is returned. It is a list with the following values:
th considered thresholds for the marker.
FP false-positive rate calculated at each point in th.
TP true-positive rate estimated at each point in th.
p points where the time-dependent ROC curve is evaluated.
R time-dependent ROC curve values computed at p.
t time/s at which each time-dependent ROC curve estimation is computed. Each point of time will appear as many times as the length of the vector of points p.
auc area under the corresponding time-dependent ROC curve estimation. As in the previous case, each value appears as many times as the length of the vector of points p.
tcr type of time-dependent ROC curve estimation computed,
• “C” - Cumulative/Dynamic.
• “I” - Incident/Dynamic.
For each computed time-dependent ROC curve estimation this value is repeated as many times as the length of p.
Pi probabilities calculated for the individuals in the sample if the p-kernel method has been used for the estimation of the C/D ROC curve. This element is a matrix with the following columns:
• time - single point of time at which the estimation of the C/D ROC curve has been computed.
• obvt - observed times for the individuals in the sample.
• p - estimations of the probabilities computed and allocated to each subject.

References
<NAME> and <NAME>. Smooth time-dependent receiver operating characteristic curve estimators. Statistical Methods in Medical Research, 27(3):651-674, 2018. https://doi.org/10.1177/0962280217740786.
<NAME>, <NAME>, and <NAME>. Cumulative/dynamic ROC curve estimation. Journal of Statistical Computation and Simulation, 86(17):3582-3594, 2016. https://doi.org/10.1080/00949655.2016.1175442.
<NAME>, <NAME>, and <NAME>. A simple method to estimate the time-dependent receiver operating characteristic curve and the area under the curve with right censored data. Statistical Methods in Medical Research, 27(8), 2016. https://doi.org/10.1177/0962280216680239.
<NAME>. Bandwidth matrices for multivariate kernel density estimation. Ph.D. Thesis, University of Western Australia, 2004. http://www.mvstat.net/tduong.
Examples
library(smoothROCtime)
require(KMsurv)
data(kidtran)
# Preparing data: a logarithmic transformation of the time-to-event variable is made
DT <- cbind(log(kidtran$time),kidtran$delta,kidtran$age)
# Point of Time
t5 <- log(5*365.25) # five years in logarithm scale
# Cumulative/Dynamic ROC curve estimation with the p-kernel method
rcd <- stRoc(data=DT, t=t5, bw="Hpi", tcr="C", meth="2")
# Plot graphic
plot(rcd$p, rcd$R, type="l", lwd=5, main="C/D ROC", xlab="FPR", ylab="TPR")
lines(c(0,1),c(0,1),lty=2,col="gray")
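As a follow-up, the same estimation can also be drawn with the package's own S3 plot method for class 'sROCt' (documented above) instead of building the plot by hand; the graphical arguments shown here are illustrative:

```r
# Plot the C/D ROC curve estimation via plot.sROCt; tcr is mandatory
# and must match the type of estimation stored in the object.
plot(rcd, tcr = "C", lwd = 2, main = "C/D ROC at t = 5 years")
```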
snek
hex
Erlang
Snek === A framework for defining Battlesnake-compatible rulesets and board positions. This top-level module is just a namespace. Check the submodules for all the interesting bits. Snek.Board === A struct for representing a board position. This may be used to keep track of state in a game, each turn of the game producing the next board position. [Link to this section](#summary) Summary === [Types](#types) --- [spawn\_result()](#t:spawn_result/0) When spawning, `{:ok, board}` if there is space available, `{:error, :occupied}` otherwise. [t()](#t:t/0) A board position. [Functions](#functions) --- [adjascent\_neighbors(board, origin)](#adjascent_neighbors/2) Returns a list of neighboring points adjascent to a point of origin. [alive\_snakes\_remaining(board)](#alive_snakes_remaining/1) Returns the number of snakes on the board who are still alive (not eliminated). [all\_even\_points(board)](#all_even_points/1) Returns a list of all even points on the board, alternating like a checkerboard. [all\_points(board)](#all_points/1) Returns a list of all points on the board. [any\_points\_occupied?(board, points)](#any_points_occupied?/2) Returns true if and only if any of the given points on the board are occupied. [center\_point(board)](#center_point/1) Returns the point at the center of the board. [diagonal\_neighbors(board, origin)](#diagonal_neighbors/2) Returns a list of neighboring points diagonal to a point of origin. [empty?(board)](#empty?/1) Returns true if and only if this board is empty, otherwise false. [maybe\_eliminate\_snake(board, snake, snakes\_by\_length\_descending)](#maybe_eliminate_snake/3) Eliminate this snake if it has moved out of bounds, collided with itself, collided with another snake body, or lost in a head-to-head collision. [maybe\_eliminate\_snakes(board)](#maybe_eliminate_snakes/1) Eliminate snakes who have moved out of bounds, collided with themselves, collided with other snake bodies, or lost in a head-to-head collision. [maybe\_feed\_snakes(board)](#maybe_feed_snakes/1) Feed snakes who eat an apple. [move\_snake(board, snake\_id, direction)](#move_snake/3) Moves a snake on the board according to its move for this turn. [move\_snakes(board, snake\_moves)](#move_snakes/2) Moves each snake on the board according to their respective moves for this turn. [new(size)](#new/1) Returns a new empty board of a given size. [occupied?(board, point)](#occupied?/2) Returns true if and only if the given point on the board is occupied, otherwise false. [occupied\_by\_apple?(board, point)](#occupied_by_apple?/2) Returns true if and only if the given point on the board is occupied by an apple, otherwise false. [occupied\_by\_snake?(board, point)](#occupied_by_snake?/2) Returns true if and only if the given point on the board is occupied by a snake's body part, otherwise false. [occupied\_points(board)](#occupied_points/1) Returns a list of all occupied points on the board. [out\_of\_bounds?(board, point)](#out_of_bounds?/2) Returns true if and only if this point is outside of the board's boundaries, in other words the opposite of [`within_bounds?/2`](#within_bounds?/2). [reduce\_snake\_healths(board)](#reduce_snake_healths/1) Reduce the health of each snake by one point. [snake\_collides\_with\_other\_snake?(snake\_a, snake\_b)](#snake_collides_with_other_snake?/2) Returns true if and only if `snake_a`'s head is in collision with any of `snake_b`'s body parts, excluding `snake_b`'s head. Otherwise, returns false. 
* [snake_loses_head_to_head_collision?(snake_a, snake_b)](#snake_loses_head_to_head_collision?/2): Returns true if and only if there is a head-to-head collision between `snake_a` and `snake_b` and `snake_a`'s body length is shorter or equal to `snake_b`'s body length, thereby causing `snake_a` to lose the head-to-head.
* [snake_out_of_bounds?(board, snake)](#snake_out_of_bounds?/2): Returns true if and only if this snake has some body part outside of the board's boundaries.
* [spawn_apple(board, point)](#spawn_apple/2): Spawns an apple at the specified point on the board.
* [spawn_apple_at_center(board)](#spawn_apple_at_center/1): Spawns an apple in the center of the board.
* [spawn_apple_unchecked(board, point)](#spawn_apple_unchecked/2): Spawns an apple at the specified point on the board.
* [spawn_apples(board, points)](#spawn_apples/2): Spawns apples at each of the specified points on the board.
* [spawn_snake(board, id, head, length \\ 3, health \\ 100)](#spawn_snake/5): Spawns a snake at the specified point on the board.
* [spawn_snake_at_center(board, id, length \\ 3, health \\ 100)](#spawn_snake_at_center/4): Spawns a snake in the center of the board.
* [spawn_snakes(board, ids_and_heads, length \\ 3, health \\ 100)](#spawn_snakes/4): Spawns multiple snakes, each at a specified point on the board.
* [unoccupied_adjascent_neighbors(board, origin)](#unoccupied_adjascent_neighbors/2): Returns a list of unoccupied neighboring points adjascent to a point of origin.
* [unoccupied_diagonal_neighbors(board, origin)](#unoccupied_diagonal_neighbors/2): Returns a list of unoccupied neighboring points diagonal to a point of origin.
* [unoccupied_points(board)](#unoccupied_points/1): Returns a list of all unoccupied points on the board.
* [within_bounds?(board, point)](#within_bounds?/2): Returns true if and only if this point is within the board's boundaries, otherwise false.

Snek.Board.Point
===

A struct for representing points on a board's grid.

Summary
===

Types
---

* [direction()](#t:direction/0): A direction from a point toward its adjascent or diagonal neighbor.
* [t()](#t:t/0): A point on a board.
* [x()](#t:x/0): A point's X coordinate.
* [y()](#t:y/0): A point's Y coordinate.

Functions
---

* [adjascent_neighbors(origin)](#adjascent_neighbors/1): Returns a list of neighboring points adjascent to a point of origin.
* [diagonal_neighbors(origin)](#diagonal_neighbors/1): Returns a list of neighboring points diagonal to a point of origin.
* [difference(point1, point2)](#difference/2): Returns the difference between two points, which could be used to find a vector between points, such as when using the neck and head of a snake to determine the point continuing in the last moved direction.
* [even?(point)](#even?/1): Returns true if and only if this point falls on an even square of a board, alternating like a checkerboard.
* [manhattan_distance(point_a, point_b)](#manhattan_distance/2): Returns the Manhattan distance between two points.
* [new(x, y)](#new/2): Returns a new point at the given X and Y coordinates.
* [rotate_clockwise(point)](#rotate_clockwise/1): Rotates a point 90 degrees clockwise.
* [rotate_counterclockwise(point)](#rotate_counterclockwise/1): Rotates a point 90 degrees counter-clockwise.
* [step(origin, direction)](#step/2): Returns the point that is one step toward a given direction from a point of origin.
* [sum(point1, point2)](#sum/2): Returns the sum of two points, which could be used to apply a vector point to a fixed point, such as when using the neck and head of a snake to determine the point continuing in the last moved direction.
* [zero?(point)](#zero?/1): Returns true if and only if both X and Y are zero, which could be used to determine if a point is a null vector.

Snek.Board.Size
===

A struct representing the size of a game board.

A board is always rectangular (or square), and is represented by a width and a height.

Arbitrary board sizes may be created with [`new/2`](#new/2). There are some helper functions for some suggested board sizes, including [`small/0`](#small/0), [`medium/0`](#medium/0), and [`large/0`](#large/0). These suggestions are based on the default board sizes in Battlesnake.

Summary
===

Types
---

* [t()](#t:t/0)

Functions
---

* [large()](#large/0): Return a large (19x19) board size.
* [medium()](#medium/0): Return a medium (11x11) board size.
* [new(width, height)](#new/2): Returns a board size of the specified width and height.
* [small()](#small/0): Return a small (7x7) board size.

Snek.Board.Snake
===

Represents a snake on a board.

You may also refer to it as a "snake on a plane", as the joke goes in the Battlesnake community. 😎

Summary
===

Types
---

* [id()](#t:id/0): A unique ID to differentiate between snakes on a board.
* [snake_move()](#t:snake_move/0): A valid direction for a snake to move according to the game rules.
* [state()](#t:state/0): Whether a snake is currently alive, or has been eliminated.
* [t()](#t:t/0): A snake on a board.

Functions
---

* [alive?(snake)](#alive?/1): Returns true if and only if the snake is alive (not eliminated).
* [eliminated?(snake)](#eliminated?/1): Returns true if and only if the snake is eliminated.
* [feed(snake, new_health)](#feed/2): Feed a snake and grow its tail.
* [grow(snake)](#grow/1): Grow a snake's tail.
* [head(snake)](#head/1): Returns the head of a snake.
* [hurt(snake)](#hurt/1): Decrements the snake's health by 1 point.
* [move(snake, direction)](#move/2): Moves the snake one space in a given direction.
* [step(snake, direction)](#step/2): Returns the point that is one step toward a given direction from this snake's perspective.

Snek.Ruleset behaviour
===

A behaviour module for implementing variations of game rules.

Implementations define how a game plays out from start to finish, by dynamically specifying:

1. [`init/2`](#c:init/2): The initial board position
2. `c:next/2`: Each next turn's board position after moves are applied
3. [`done?/1`](#c:done?/1): When the game is considered over

Summary
===

Types
---

* [valid_move()](#t:valid_move/0): Valid moves for a snake to play.

Callbacks
---

* [done?(board)](#c:done?/1): Decide whether the game is over at this board position.
* [init(board_size, snake_ids)](#c:init/2): Decide the initial board position for a new game.
* [next(board, snake_moves, apple_spawn_chance)](#c:next/3): Apply moves and decide the next turn's board position.
Snek.Ruleset.Solo
===

The solo ruleset, based on the official Battlesnake solo rules.

Solo rules are the same as [`Snek.Ruleset.Standard`](Snek.Ruleset.Standard.html), except that standard games end when there are fewer than 2 snakes remaining, whereas solo games only end after the last remaining snake is eliminated.

Effort is made to keep this implementation compatible with Battlesnake's official rules, so that it may be used for simulating game turns. If there is a mistake either in the implementation or the tests/specification, please report it as a bug.

Summary
===

Functions
---

* [done?(board)](#done?/1): Callback implementation for [`Snek.Ruleset.done?/1`](Snek.Ruleset.html#c:done?/1).
* [init(board_size, snake_ids)](#init/2): Callback implementation for [`Snek.Ruleset.init/2`](Snek.Ruleset.html#c:init/2).
* [next(board, snake_moves, apple_spawn_chance \\ 0.15)](#next/3): Callback implementation for [`Snek.Ruleset.next/3`](Snek.Ruleset.html#c:next/3).

Snek.Ruleset.Standard
===

The standard ruleset, based on the official Battlesnake rules.

Effort is made to keep this implementation compatible with Battlesnake's official rules, so that it may be used for simulating game turns. If there is a mistake either in the implementation or the tests/specification, please report it as a bug.

Summary
===

Functions
---

* [done?(board)](#done?/1): Callback implementation for [`Snek.Ruleset.done?/1`](Snek.Ruleset.html#c:done?/1).
* [init(board_size, snake_ids)](#init/2): Callback implementation for [`Snek.Ruleset.init/2`](Snek.Ruleset.html#c:init/2).
* [next(board, snake_moves, apple_spawn_chance \\ 0.15)](#next/3): Callback implementation for [`Snek.Ruleset.next/3`](Snek.Ruleset.html#c:next/3).
recase
hex
Erlang
Recase
===

Recase allows you to convert a string from any case to any case.

This module contains the public interface.

Summary
===

Functions
---

* [to_camel(value)](#to_camel/1): Converts string to camelCase.
* [to_constant(value)](#to_constant/1): Converts string to CONSTANT_CASE.
* [to_dot(value)](#to_dot/1): Converts string to dot.case.
* [to_header(value)](#to_header/1): Converts string to Header-Case.
* [to_kebab(value)](#to_kebab/1): Converts string to kebab-case.
* [to_name(value)](#to_name/1): Converts string to Name Case.
* [to_pascal(value)](#to_pascal/1): Converts string to PascalCase (aka UpperCase).
* [to_path(value)](#to_path/1)
* [to_path(value, separator)](#to_path/2): Converts string to path/case.
* [to_sentence(value)](#to_sentence/1): Converts string to Sentence case.
* [to_snake(value)](#to_snake/1): Converts string to snake_case.
* [to_title(value)](#to_title/1): Converts string to Title Case.
* [underscore(value)](#underscore/1): See [`Recase.to_snake/1`](#to_snake/1).

Recase.CamelCase
===

Module to convert strings to `camelCase`.

This module should not be used directly.

Examples
---

```
iex> Recase.to_camel "foo_barBaz-λambdaΛambda-привет-Мир"
"fooBarBazΛambdaΛambdaПриветМир"
```

Read about `camelCase` here: <https://en.wikipedia.org/wiki/Camel_case>

Summary
===

Functions
---

* [convert(value)](#convert/1)

Recase.ConstantCase
===

Module to convert strings to `CONSTANT_CASE`.

This module should not be used directly.

Examples
---

```
iex> Recase.to_constant "foo_barBaz-λambdaΛambda-привет-Мир"
"FOO_BAR_BAZ_ΛAMBDA_ΛAMBDA_ПРИВЕТ_МИР"
```

Constant case is the same as `snake_case`, but uppercased.

Summary
===

Functions
---

* [convert(value)](#convert/1)

Recase.DotCase
===

Module to convert strings to `dot.case`.

This module should not be used directly.

Examples
---

```
iex> Recase.to_dot "foo_barBaz-λambdaΛambda-привет-Мир"
"foo.bar.baz.λambda.λambda.привет.мир"
```

`DotCase` is the same as `KebabCase` and `SnakeCase`, but uses `.` as a separator.

Summary
===

Functions
---

* [convert(value)](#convert/1)

Recase.Enumerable
===

Helper module to convert enumerable keys recursively.

Summary
===

Functions
---

* [atomize_keys(enumerable)](#atomize_keys/1): Invokes fun for each key of the enumerable and casts keys to atoms.
* [atomize_keys(enumerable, fun)](#atomize_keys/2)
* [convert_keys(enumerable)](#convert_keys/1): Invokes fun for each key of the enumerable.
* [convert_keys(enumerable, fun)](#convert_keys/2)
* [stringify_keys(enumerable)](#stringify_keys/1)
* [stringify_keys(enumerable, fun)](#stringify_keys/2)

Recase.Generic
===

Generic module to split and join strings back, or convert strings to atoms. This module should not be used directly.

Summary
===

Functions
---

* [rejoin(input, opts \\ [])](#rejoin/2): Splits the input and **`rejoins`** it with a given separator. Optionally converts parts to `downcase`, `upcase` or `titlecase`.
* [safe_atom(string_value)](#safe_atom/1): Atomizes a string value. Uses an existing atom if possible.
* [split(input)](#split/1): Splits the input into a **`list`**. Utility function.
Recase.HeaderCase
===

Module to convert strings to `Header-Case`.

This module should not be used directly.

Examples
---

```
iex> Recase.to_header "foo_barBaz-λambdaΛambda-привет-Мир"
"Foo-Bar-Baz-Λambda-Λambda-Привет-Мир"
```

Header case is the case suggested in [section 3.4.7 of RFC 822](https://tools.ietf.org/html/rfc822#section-3.4.7) to be used in the message-creation process.

Summary
===

Functions
---

* [convert(value)](#convert/1)

Recase.KebabCase
===

Module to convert strings to `kebab-case`.

This module should not be used directly.

Examples
---

```
iex> Recase.to_kebab "foo_barBaz-λambdaΛambda-привет-Мир"
"foo-bar-baz-λambda-λambda-привет-мир"
```

Read about `kebab-case` here: <https://en.wikipedia.org/wiki/Kebab_case>

Summary
===

Functions
---

* [convert(value)](#convert/1)

Recase.NameCase
===

Module to convert strings to `Name Case`.

This module should not be used directly.

Examples
---

```
iex> Recase.to_name "mccarthy o'donnell"
"<NAME>"
```

Read about `Name Case` here: <https://metacpan.org/pod/Lingua::EN::NameCase>

Summary
===

Functions
---

* [convert(value)](#convert/1)

Recase.PascalCase
===

Module to convert strings to `PascalCase` aka `UpperCase`.

This module should not be used directly.

Examples
---

```
iex> Recase.to_pascal "foo_barBaz-λambdaΛambda-привет-Мир"
"FooBarBazΛambdaΛambdaПриветМир"
```

Read about `PascalCase` here: <https://en.wikipedia.org/wiki/PascalCase>

Changed
---

This name was introduced in version `0.2.0`; it was named `UpperCase` before. But `UpperCase` was not clear enough. What is `uppercase`?

1. THIS IS UPPERCASE
2. ThisIsAlsoUpperCase

So, it was decided to rename this module into `PascalCase`. For other details see: <https://github.com/sobolevn/recase/issues/2>

Summary
===

Functions
---

* [convert(value)](#convert/1)

Recase.PathCase
===

Module to convert strings to `path/case`.

This module should not be used directly.

Path case preserves the original case, but inserts a path separator in appropriate places. By default it uses `/` as the path separator.

Examples
---

```
iex> Recase.to_path "foo_barBaz-λambdaΛambda-привет-Мир"
"foo/bar/Baz/λambda/Λambda/привет/Мир"
```

Summary
===

Functions
---

* [convert(value, separator \\ "/")](#convert/2)

Recase.Replace
===

Helper module to pipe and replace values easily.

Summary
===

Functions
---

* [replace(value, regex, new_value)](#replace/3): Replaces `value` with `new_value` if it matches `regex`.

Recase.SentenceCase
===

Module to convert strings to `Sentence case`.

This module should not be used directly.
Examples
---

```
iex> Recase.to_sentence "foo_barBaz-λambdaΛambda-привет-Мир"
"Foo bar baz λambda λambda привет мир"
```

Read about `Sentence case` here: <https://en.wikipedia.org/wiki/Letter_case#Sentence_case>

Summary
===

Functions
---

* [convert(value)](#convert/1)

Recase.SnakeCase
===

Module to convert strings to `snake_case`.

This module should not be used directly.

Examples
---

```
iex> Recase.to_snake "foo_barBaz-λambdaΛambda-привет-Мир"
"foo_bar_baz_λambda_λambda_привет_мир"
iex> Recase.underscore "foo_barBaz-λambdaΛambda-привет-Мир"
"foo_bar_baz_λambda_λambda_привет_мир"
```

Read about `snake_case` here: <https://en.wikipedia.org/wiki/Snake_case>

Summary
===

Functions
---

* [convert(value)](#convert/1)

Recase.TitleCase
===

Module to convert strings to `Title Case`.

This module should not be used directly.

**NB!** At the moment it has no stop words: it titleizes everything.

Examples
---

```
iex> Recase.to_title "foo_barBaz-λambdaΛambda-привет-Мир"
"Foo Bar Baz Λambda Λambda Привет Мир"
```

Read about `Title Case` here: <https://en.wikipedia.org/wiki/Letter_case#Title_case>

Summary
===

Functions
---

* [convert(value)](#convert/1)

API Reference
===

Modules
---

* [Recase](Recase.html): Recase allows you to convert a string from any case to any case.
* [Recase.CamelCase](Recase.CamelCase.html): Module to convert strings to `camelCase`.
* [Recase.ConstantCase](Recase.ConstantCase.html): Module to convert strings to `CONSTANT_CASE`.
* [Recase.DotCase](Recase.DotCase.html): Module to convert strings to `dot.case`.
* [Recase.Enumerable](Recase.Enumerable.html): Helper module to convert enumerable keys recursively.
* [Recase.Generic](Recase.Generic.html): Generic module to split and join strings back, or convert strings to atoms. This module should not be used directly.
* [Recase.HeaderCase](Recase.HeaderCase.html): Module to convert strings to `Header-Case`.
* [Recase.KebabCase](Recase.KebabCase.html): Module to convert strings to `kebab-case`.
* [Recase.NameCase](Recase.NameCase.html): Module to convert strings to `Name Case`.
* [Recase.PascalCase](Recase.PascalCase.html): Module to convert strings to `PascalCase` aka `UpperCase`.
* [Recase.PathCase](Recase.PathCase.html): Module to convert strings to `path/case`.
* [Recase.Replace](Recase.Replace.html): Helper module to pipe and replace values easily.
* [Recase.SentenceCase](Recase.SentenceCase.html): Module to convert strings to `Sentence case`.
* [Recase.SnakeCase](Recase.SnakeCase.html): Module to convert strings to `snake_case`.
* [Recase.TitleCase](Recase.TitleCase.html): Module to convert strings to `Title Case`.
gTests
cran
R
Package ‘gTests’ October 13, 2022

Version 0.2
Date 2017-12-6
Title Graph-Based Two-Sample Tests
Author <NAME> and <NAME>
Maintainer <NAME> <<EMAIL>>
Depends R (>= 3.0.1)
Imports ade4
Description Four graph-based tests are provided for testing whether two samples are from the same distribution. They work for both continuous data and discrete data.
License GPL (>= 2)
NeedsCompilation no
Repository CRAN
Date/Publication 2017-12-06 22:04:04 UTC

R topics documented:
counts1, counts2, counts3, dfs, ds1, ds2, ds3, E1, E2, E3, g.tests, g.tests_discrete, getComdist, getGraph, getMV_discrete, getR1R2, getR1R2_discrete, gTests, nnlink, nnlink_Com, nnlink_K, permute_discrete

counts1 A matrix representing counts in the distinct values for the two samples

Description
This is a K by 2 matrix, where K is the number of distinct values. It specifies the counts in the K distinct values for the two samples. The data is generated from two samples with mean shift.

counts2 A matrix representing counts in the distinct values for the two samples

Description
This is a K by 2 matrix, where K is the number of distinct values. It specifies the counts in the K distinct values for the two samples. The data is generated from two samples with spread difference.

counts3 A matrix representing counts in the distinct values for the two samples

Description
This is a K by 2 matrix, where K is the number of distinct values. It specifies the counts in the K distinct values for the two samples. The data is generated from two samples with mean shift and spread difference.

dfs Depth-first search

Description
One starts at the root and explores as far as possible along each branch before backtracking.

Usage
dfs(s, visited, adj)

Arguments
s The root node.
visited N by 1 vector, where N is the number of nodes. This vector records whether nodes have been visited, with 1 if visited and 0 otherwise.
adj N by N adjacency matrix.

See Also
getGraph

ds1 A distance matrix on the distinct values

Description
This is a K by K matrix, which is the distance matrix on the distinct values for counts1.

ds2 A distance matrix on the distinct values

Description
This is a K by K matrix, which is the distance matrix on the distinct values for counts2.

ds3 A distance matrix on the distinct values

Description
This is a K by K matrix, which is the distance matrix on the distinct values for counts3.

E1 An edge matrix representing a similarity graph

Description
This is a matrix with the number of rows being the number of edges in the similarity graph and 2 columns. Each row records the subject indices of the two ends of an edge in the similarity graph. The subject indices of sample 1 are 1:100, and the subject indices of sample 2 are 101:250.

E2 An edge matrix representing a similarity graph

Description
This is a matrix with the number of rows being the number of edges in the similarity graph and 2 columns. Each row records the subject indices of the two ends of an edge in the similarity graph. The subject indices of sample 1 are 1:100, and the subject indices of sample 2 are 101:250.

E3 An edge matrix representing a similarity graph

Description
This is a matrix with the number of rows being the number of edges in the similarity graph and 2 columns. Each row records the subject indices of the two ends of an edge in the similarity graph. The subject indices of sample 1 are 1:100, and the subject indices of sample 2 are 101:250.
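To make the layout of these example objects concrete, here is a minimal inspection sketch; it assumes the package is attached, so that data(example) and data(example_discrete) are available as in the examples further below.

library(gTests)
data(example)           # provides E1, E2, E3
dim(E1)                 # number of edges x 2
head(E1)                # each row: the indices of the two ends of an edge
data(example_discrete)  # provides counts1-counts3 and ds1-ds3
dim(counts1)            # K x 2: per-value counts for the two samples
dim(ds1)                # K x K: distances between the K distinct values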
g.tests Graph-based two-sample tests

Description
This function provides four graph-based two-sample tests.

Usage
g.tests(E, sample1ID, sample2ID, test.type="all", maxtype.kappa = 1.14, perm=0)

Arguments
E An edge matrix representing a similarity graph, with the number of edges in the similarity graph being the number of rows and 2 columns. Each row records the subject indices of the two ends of an edge in the similarity graph.
sample1ID The subject indices of sample 1.
sample2ID The subject indices of sample 2.
test.type The default value is "all", which means all four tests are performed: the original edge-count test (Friedman and Rafsky (1979)), the generalized edge-count test (Chen and Friedman (2016)), the weighted edge-count test (Chen, Chen and Su (2016)) and the maxtype edge-count tests (Zhang and Chen (2017)). Set this value to "original" or "o" to perform only the original edge-count test; set this value to "generalized" or "g" to perform only the generalized edge-count test; set this value to "weighted" or "w" to perform only the weighted edge-count test; and set this value to "maxtype" or "m" to perform only the maxtype edge-count tests.
maxtype.kappa The value of the parameter (kappa) in the maxtype edge-count tests. The default value is 1.14.
perm The number of permutations performed to calculate the p-value of the test. The default value is 0, which means the permutation is not performed and only an approximate p-value based on asymptotic theory is provided. Doing permutation could be time consuming, so be cautious if you want to set this value to be larger than 10,000.

Value
test.statistic The test statistic.
pval.approx The approximated p-value based on asymptotic theory.
pval.perm The permutation p-value when argument ‘perm‘ is positive.

References
<NAME>. and <NAME>. Multivariate generalizations of the Wald-Wolfowitz and Smirnov two-sample tests. The Annals of Statistics, 7(4):697-717, 1979.
<NAME>. and <NAME>. A new graph-based two-sample test for multivariate and object data. Journal of the American Statistical Association, 2016.
<NAME>., <NAME>. and <NAME>. A weighted edge-count two sample test for multivariate and object data. Journal of the American Statistical Association, 2017.
<NAME>. and <NAME>. Graph-based two-sample tests for discrete data.

Examples
# the "example" data contains three similarity graphs represented in the matrix form: E1, E2, E3.
data(example)
# E1 is an edge matrix representing a similarity graph.
# It is constructed on two samples with mean difference.
# Sample 1 indices: 1:100; sample 2 indices: 101:250.
g.tests(E1, 1:100, 101:250)
# E2 is an edge matrix representing a similarity graph.
# It is constructed on two samples with variance difference.
# Sample 1 indices: 1:100; sample 2 indices: 101:250.
g.tests(E2, 1:100, 101:250)
# E3 is an edge matrix representing a similarity graph.
# It is constructed on two samples with mean and variance difference.
# Sample 1 indices: 1:100; sample 2 indices: 101:250.
g.tests(E3, 1:100, 101:250)
## Uncomment the following line to get permutation p-value with 200 permutations.
# g.tests(E1, 1:100, 101:250, perm=200)

g.tests_discrete Graph-based two-sample tests for discrete data

Description
This function provides four graph-based two-sample tests for discrete data.

Usage
g.tests_discrete(E, counts, test.type = "all", maxtype.kappa = 1.14, perm = 0)

Arguments
E An edge matrix representing a similarity graph on the distinct values, with the number of edges in the similarity graph being the number of rows and 2 columns.
Each row records the subject indices of the two ends of an edge in the similarity graph.
counts A K by 2 matrix, where K is the number of distinct values. It specifies the counts in the K distinct values for the two samples.
test.type The default value is "all", which means all four tests are performed: the original edge-count test (Chen and Zhang (2013)), the extension of the generalized edge-count test (Chen and Friedman (2016)), the extension of the weighted edge-count test (Chen, Chen and Su (2016)) and the extension of the maxtype edge-count tests (Zhang and Chen (2017)). Set this value to "original" or "o" to perform only the original edge-count test; set this value to "generalized" or "g" to perform only the extension of the generalized edge-count test; set this value to "weighted" or "w" to perform only the extension of the weighted edge-count test; and set this value to "maxtype" or "m" to perform only the extension of the maxtype edge-count tests.
maxtype.kappa The value of the parameter (kappa) in the extension of the maxtype edge-count tests. The default value is 1.14.
perm The number of permutations performed to calculate the p-value of the test. The default value is 0, which means the permutation is not performed and only an approximate p-value based on asymptotic theory is provided. Doing permutation could be time consuming, so be cautious if you want to set this value to be larger than 10,000.

Value
test.statistic_a The test statistic using the ‘average‘ method to construct the graph.
test.statistic_u The test statistic using the ‘union‘ method to construct the graph.
pval.approx_a Using the ‘average‘ method to construct the graph, the approximated p-value based on asymptotic theory.
pval.approx_u Using the ‘union‘ method to construct the graph, the approximated p-value based on asymptotic theory.
pval.perm_a Using the ‘average‘ method to construct the graph, the permutation p-value when argument ‘perm‘ is positive.
pval.perm_u Using the ‘union‘ method to construct the graph, the permutation p-value when argument ‘perm‘ is positive.

References
<NAME>. and <NAME>. Multivariate generalizations of the Wald-Wolfowitz and Smirnov two-sample tests. The Annals of Statistics, 7(4):697-717, 1979.
<NAME>. and <NAME>. Graph-based tests for two-sample comparisons of categorical data. Statistica Sinica, 2013.
<NAME>. and <NAME>. A new graph-based two-sample test for multivariate and object data. Journal of the American Statistical Association, 2016.
<NAME>., <NAME>. and <NAME>. A weighted edge-count two sample test for multivariate and object data. Journal of the American Statistical Association, 2017.
<NAME>. and <NAME>. Graph-based two-sample tests for discrete data.

Examples
# the "example_discrete" data contains three two-sample counts data
# represented in the matrix form: counts1, counts2, counts3
# and the corresponding distance matrices on the distinct values: ds1, ds2, ds3.
data(example_discrete)
# counts1 is a K by 2 matrix, where K is the number of distinct values.
# It specifies the counts in the K distinct values for the two samples.
# ds1 is the corresponding distance matrix on the distinct values.
# The data is generated from two samples with mean shift.
Knnl = 3
E1 = getGraph(counts1, ds1, Knnl, graph = "nnlink")
g.tests_discrete(E1, counts1)
# counts2 is a K by 2 matrix, where K is the number of distinct values.
# It specifies the counts in the K distinct values for the two samples.
# ds2 is the corresponding distance matrix on the distinct values.
# The data is generated from two samples with spread difference.
Kmst = 6
E2 = getGraph(counts2, ds2, Kmst, graph = "mstree")
g.tests_discrete(E2, counts2)
# counts3 is a K by 2 matrix, where K is the number of distinct values.
# It specifies the counts in the K distinct values for the two samples.
# ds3 is the corresponding distance matrix on the distinct values.
# The data is generated from two samples with mean shift and spread difference.
Knnl = 3
E3 = getGraph(counts3, ds3, Knnl, graph = "nnlink")
g.tests_discrete(E3, counts3)
## Uncomment the following lines to get permutation p-value with 300 permutations.
# Knnl = 3
# E1 = getGraph(counts1, ds1, Knnl, graph = "nnlink")
# g.tests_discrete(E1, counts1, test.type = "all", maxtype.kappa = 1.31, perm = 300)

getComdist Get distance between two components

Description
This function calculates the distance between two components.

Usage
getComdist(g1, g2, distance)

Arguments
g1 The distinct values in Component 1.
g2 The distinct values in Component 2.
distance A K by K matrix, which is the distance matrix on the distinct values, where K is the number of distinct values with at least one observation in either group.

See Also
getGraph

getGraph Construct similarity graph

Description
This function provides two methods to construct the similarity graph.

Usage
getGraph(counts, mydist, K, graph.type = "mstree")

Arguments
counts A K by 2 matrix, where K is the number of distinct values. It specifies the counts in the K distinct values for the two samples.
mydist A K by K matrix, which is the distance matrix on the distinct values.
K Set the value of k in "k-MST" or "k-NNL" to construct the similarity graph.
graph.type Specifies the type of graph to construct. The default value is "mstree", which means constructing the minimal spanning tree as the similarity graph. Set this value to "nnlink" to construct the similarity graph by the nearest neighbor link method.

Value
E An edge matrix representing a similarity graph on the distinct values, with the number of edges in the similarity graph being the number of rows and 2 columns. Each row records the subject indices of the two ends of an edge in the similarity graph.

See Also
g.tests_discrete

getMV_discrete Get intermediate results for g.tests_discrete function

Description
This function calculates means and variances of the R1 and R2 quantities using the ‘average‘ method and the ‘union‘ method to construct the graph.

Usage
getMV_discrete(E, vmat)

Arguments
E An edge matrix representing a similarity graph on the distinct values, with the number of edges in the similarity graph being the number of rows and 2 columns. Each row records the subject indices of the two ends of an edge in the similarity graph.
vmat A K by 2 matrix, where K is the number of distinct values with at least one observation in either group. It specifies the counts in the K distinct values for the two samples.

See Also
g.tests_discrete

getR1R2 Get intermediate results for g.tests function

Description
This function calculates the R1 and R2 quantities.

Usage
getR1R2(E, G1)

Arguments
E A matrix with the number of rows being the number of edges in the similarity graph and 2 columns. Each row records the subject indices of the two ends of an edge in the similarity graph.
G1 The subject indices of sample 1.

See Also
g.tests

getR1R2_discrete Get intermediate results for g.tests_discrete function

Description
This function calculates the R1 and R2 quantities using the ‘average‘ method and the ‘union‘ method to construct the graph.
Usage
getR1R2_discrete(E, vmat)

Arguments
E An edge matrix representing a similarity graph on the distinct values, with the number of edges in the similarity graph being the number of rows and 2 columns. Each row records the subject indices of the two ends of an edge in the similarity graph.
vmat A K by 2 matrix, where K is the number of distinct values with at least one observation in either group. It specifies the counts in the K distinct values for the two samples.

See Also
g.tests_discrete

gTests Graph-Based Two-Sample Tests

Description
This package includes four graph-based two-sample tests under the continuous setting and the discrete setting.

Author(s)
<NAME> and <NAME>
Maintainer: <NAME> (<EMAIL>)

References
<NAME>. and <NAME>. (1979). Multivariate generalizations of the Wald-Wolfowitz and Smirnov two-sample tests. The Annals of Statistics 7(4):697-717.
<NAME>. and <NAME>. (2013). Graph-based tests for two-sample comparisons of categorical data. Statistica Sinica 23:1479-1503.
<NAME>. and <NAME>. (2017). A new graph-based two-sample test for multivariate and object data. Journal of the American Statistical Association, 112:517, 397-409.
<NAME>., <NAME>. and <NAME>. (2017). A weighted edge-count two sample test for multivariate and object data. Journal of the American Statistical Association.
<NAME>. and <NAME>. (2017). Graph-based two-sample tests for discrete data. arXiv:1711.04349

See Also
g.tests g.tests_discrete getGraph

nnlink Construct similarity graph by 1-NNL

Description
This function provides the edges of the similarity graph constructed by 1-NNL.

Usage
nnlink(distance)

Arguments
distance A K by K matrix, which is the distance matrix on the distinct values, where K is the number of distinct values with at least one observation in either group.

Value
E An edge matrix representing a similarity graph on the distinct values, with the number of edges in the similarity graph being the number of rows and 2 columns. Each row records the subject indices of the two ends of an edge in the similarity graph.

See Also
getGraph

nnlink_Com Get components by nearest neighbor link algorithm

Description
This function obtains components based on the nearest neighbor link algorithm.

Usage
nnlink_Com(distance)

Arguments
distance A K by K matrix, which is the distance matrix on the distinct values, where K is the number of distinct values with at least one observation in either group.

See Also
getGraph

nnlink_K Construct similarity graph by k-NNL

Description
This function provides the edges of the similarity graph constructed by k-NNL.

Usage
nnlink_K(distance, K)

Arguments
distance A K by K matrix, which is the distance matrix on the distinct values, where K is the number of distinct values with at least one observation in either group.
K Set the value of k in "k-NNL" to construct the similarity graph.

Value
E An edge matrix representing a similarity graph on the distinct values, with the number of edges in the similarity graph being the number of rows and 2 columns. Each row records the subject indices of the two ends of an edge in the similarity graph.

See Also
getGraph

permute_discrete Generate a permutation for two discrete data groups

Description
This function permutes the observations while keeping the two sample sizes unchanged.

Usage
permute_discrete(vmat)

Arguments
vmat A K by 2 matrix, where K is the number of distinct values with at least one observation in either group. It specifies the counts in the K distinct values for the two samples.

See Also
g.tests_discrete
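As a hedged sketch of how these helpers fit together (assuming the example_discrete data from the examples above): nnlink_K() builds a k-NNL edge matrix directly from a distance matrix, the resulting edges feed into g.tests_discrete(), and permute_discrete() draws one permuted counts matrix; the See Also entries suggest nnlink_K() underlies the "nnlink" option of getGraph(), but this is an inference from the documentation, not a stated fact.

library(gTests)
data(example_discrete)
E <- nnlink_K(ds1, 3)         # k-NNL graph on the distinct values (k = 3)
g.tests_discrete(E, counts1)  # same pattern as getGraph(..., graph = "nnlink") above
permute_discrete(counts1)     # one random permutation of the two-sample counts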
github.com/knadh/koanf/maps
go
Go
Documentation
---

### Overview

Package maps provides reusable functions for manipulating nested map[string]interface{} maps, which are common unmarshal products from various serializers such as json, yaml etc.

### Index

* [func Copy(mp map[string]interface{}) map[string]interface{}](#Copy)
* [func Delete(mp map[string]interface{}, path []string)](#Delete)
* [func Flatten(m map[string]interface{}, keys []string, delim string) (map[string]interface{}, map[string][]string)](#Flatten)
* [func Int64SliceToLookupMap(s []int64) map[int64]bool](#Int64SliceToLookupMap)
* [func IntfaceKeysToStrings(mp map[string]interface{})](#IntfaceKeysToStrings)
* [func Merge(a, b map[string]interface{})](#Merge)
* [func MergeStrict(a, b map[string]interface{}) error](#MergeStrict)
* [func Search(mp map[string]interface{}, path []string) interface{}](#Search)
* [func StringSliceToLookupMap(s []string) map[string]bool](#StringSliceToLookupMap)
* [func Unflatten(m map[string]interface{}, delim string) map[string]interface{}](#Unflatten)

### Constants

This section is empty.

### Variables

This section is empty.

### Functions

#### func [Copy](https://github.com/knadh/koanf/blob/maps/v0.1.1/maps/maps.go#L239)

```
func Copy(mp map[string]interface{}) map[string]interface{}
```

Copy returns a deep copy of a conf map.

It's important to note that all nested maps should be map[string]interface{} and not map[interface{}]interface{}. Use IntfaceKeysToStrings() to convert if necessary.

#### func [Delete](https://github.com/knadh/koanf/blob/maps/v0.1.1/maps/maps.go#L191)

```
func Delete(mp map[string]interface{}, path []string)
```

Delete removes the entry present at a given path, from the map. The path is the key map slice, for eg:, parent.child.key -> [parent child key]. Any empty, nested map on the path, is recursively deleted.

It's important to note that all nested maps should be map[string]interface{} and not map[interface{}]interface{}. Use IntfaceKeysToStrings() to convert if necessary.

#### func [Flatten](https://github.com/knadh/koanf/blob/maps/v0.1.1/maps/maps.go#L26)

```
func Flatten(m map[string]interface{}, keys []string, delim string) (map[string]interface{}, map[string][]string)
```

Flatten takes a map[string]interface{} and traverses it and flattens nested children into keys delimited by delim.

It's important to note that all nested maps should be map[string]interface{} and not map[interface{}]interface{}. Use IntfaceKeysToStrings() to convert if necessary.

eg: `{ "parent": { "child": 123 }}` becomes `{ "parent.child": 123 }`

In addition, it keeps track of and returns a map of the delimited keypaths with a slice of key parts, for eg: { "parent.child": ["parent", "child"] }. This parts list is used to remember the key path's original structure to unflatten later.

#### func [Int64SliceToLookupMap](https://github.com/knadh/koanf/blob/maps/v0.1.1/maps/maps.go#L291)

```
func Int64SliceToLookupMap(s []int64) map[int64]bool
```

Int64SliceToLookupMap takes a slice of int64s and returns a lookup map with the slice values as keys with true values.
#### func [IntfaceKeysToStrings](https://github.com/knadh/koanf/blob/maps/v0.1.1/maps/maps.go#L249)

```
func IntfaceKeysToStrings(mp map[string]interface{})
```

IntfaceKeysToStrings recursively converts map[interface{}]interface{} to map[string]interface{}. Some parsers, such as YAML, return this from unmarshalling.

#### func [Merge](https://github.com/knadh/koanf/blob/maps/v0.1.1/maps/maps.go#L108)

```
func Merge(a, b map[string]interface{})
```

Merge recursively merges map a into b (left to right), mutating and expanding map b. Note that there's no copying involved, so map b will retain references to map a.

It's important to note that all nested maps should be map[string]interface{} and not map[interface{}]interface{}. Use IntfaceKeysToStrings() to convert if necessary.

#### func [MergeStrict](https://github.com/knadh/koanf/blob/maps/v0.1.1/maps/maps.go#L142)

```
func MergeStrict(a, b map[string]interface{}) error
```

MergeStrict recursively merges map a into b (left to right), mutating and expanding map b. Note that there's no copying involved, so map b will retain references to map a. If an equal key in either of the maps has a different value type, it will return the first error.

It's important to note that all nested maps should be map[string]interface{} and not map[interface{}]interface{}. Use IntfaceKeysToStrings() to convert if necessary.

#### func [Search](https://github.com/knadh/koanf/blob/maps/v0.1.1/maps/maps.go#L215)

```
func Search(mp map[string]interface{}, path []string) interface{}
```

Search recursively searches a map for a given path. The path is the key map slice, for eg:, parent.child.key -> [parent child key].

It's important to note that all nested maps should be map[string]interface{} and not map[interface{}]interface{}. Use IntfaceKeysToStrings() to convert if necessary.

#### func [StringSliceToLookupMap](https://github.com/knadh/koanf/blob/maps/v0.1.1/maps/maps.go#L281)

```
func StringSliceToLookupMap(s []string) map[string]bool
```

StringSliceToLookupMap takes a slice of strings and returns a lookup map with the slice values as keys with true values.

#### func [Unflatten](https://github.com/knadh/koanf/blob/maps/v0.1.1/maps/maps.go#L71)

```
func Unflatten(m map[string]interface{}, delim string) map[string]interface{}
```

Unflatten takes a flattened key:value map (non-nested with delimited keys) and returns a nested map where the keys are split into hierarchies by the given delimiter. For instance, `parent.child.key: 1` to `{parent: {child: {key: 1}}}`

It's important to note that all nested maps should be map[string]interface{} and not map[interface{}]interface{}. Use IntfaceKeysToStrings() to convert if necessary.

### Types

This section is empty.
wpp2012
cran
R
Package ‘wpp2012’ October 12, 2022

Version 2.2-1
Date 2014-8-21
Title World Population Prospects 2012
Author <NAME> (<EMAIL>), <NAME> (<EMAIL>), <NAME> (<EMAIL>), <NAME> (<EMAIL>), <NAME> (<EMAIL>), <NAME> (<EMAIL>)
Maintainer <NAME> <<EMAIL>>
Depends R (>= 2.14.2)
Description Data from the United Nations' World Population Prospects 2012
License GPL (>= 2)
URL http://esa.un.org/wpp, http://esa.un.org/unpd/ppp
NeedsCompilation no
Repository CRAN
Date/Publication 2014-08-22 07:14:07

R topics documented:
wpp2012-package, e0, migration, mx, percentASFR, pop, sexRatio, tfr, UNlocations

wpp2012-package World Population Prospects 2012

Description
Data from the United Nations World Population Prospects 2012.

Details
Package: wpp2012
Version: 2.2-1
Date: 2014-8-21
Depends: R (>= 2.14.2)
License: GPL (>= 2)
URL: http://esa.un.org/wpp, http://esa.un.org/unpd/ppp

The package contains the following datasets:
• tfr, tfr_supplemental, tfrprojMed, tfrproj80u, tfrproj80l, tfrproj95u, tfrproj95l, tfrprojHigh, tfrprojLow: estimates and projections of the total fertility rate.
• e0F, e0M, e0X_supplemental, e0Xproj, e0Xproj80u, e0Xproj80l, e0Xproj95u, e0Xproj95l: sex-specific estimates and projections of life expectancy with X=“F” and “M”.
• popF, popM, popXprojMed, popXprojHigh, popXprojLow: age- and sex-specific population estimates and projections with X=“F” and “M”.
• popproj80l, popproj80u, popproj95l, popproj95u, popprojLow, popprojHigh: lower and upper bounds of 80 and 95% probability intervals of total population projections, as well as +-1/2 child variants.
• mxF, mxM: age- and sex-specific mortality rates.
• migrationF, migrationM: age- and sex-specific net migration (see note below).
• sexRatio: sex ratio at birth as a ratio of female to male.
• percentASFR: distribution of age-specific fertility rates.
• UNlocations: location dataset.

Note
Distributions of net migrants by age and sex are provided for illustrative purposes only. Migration figures are based on intercensal net residuals and official statistics, population distribution by age and sex or simplified versions of Rogers-Castro migration age patterns, and incorporate statistical adjustment errors.

Author(s)
<NAME> (<EMAIL>), <NAME> (<EMAIL>), <NAME> (<EMAIL>), <NAME> (<EMAIL>), <NAME> (<EMAIL>), <NAME> (<EMAIL>)
Maintainer: <NAME> <<EMAIL>>

Source
These datasets are based on estimates and projections of United Nations, Department of Economic and Social Affairs, Population Division (2013). The probabilistic projections were produced with the method of Raftery et al. (2012).

References
World Population Prospects: The 2012 Revision. (http://esa.un.org/unpd/wpp) Special Tabulations.
Probabilistic projections: http://esa.un.org/unpd/ppp
<NAME>, <NAME>, <NAME>, <NAME>, <NAME> (2012). Bayesian probabilistic population projections for all countries. Proceedings of the National Academy of Sciences 109:13915-13921.

e0 United Nations Time Series of Life Expectancy

Description
Datasets containing the United Nations time series of the life expectancy (e0) for all countries of the world as available in 2012. Datasets e0F and e0F_supplemental contain estimates for female historical e0; e0M and e0M_supplemental contain estimates for male historical e0. The *_supplemental datasets contain a subset of countries for which data prior to 1950 are available. Datasets e0Mproj and e0Fproj contain projections of male and female e0, respectively.
Datasets *80l and *95l are the lower bounds of the 80 and 95% probability intervals; *80u and *95u are the corresponding upper bounds.

Usage
data(e0F)
data(e0M)
data(e0F_supplemental)
data(e0M_supplemental)
data(e0Fproj)
data(e0Mproj)
data(e0Fproj80l)
data(e0Fproj80u)
data(e0Mproj80l)
data(e0Mproj80u)
data(e0Fproj95l)
data(e0Fproj95u)
data(e0Mproj95l)
data(e0Mproj95u)

Format
The datasets contain one record per country or region. They contain the following variables:
country Name of country or region (following ISO 3166 official short names in English - see http://www.iso.org/iso/country_codes/iso_3166_code_lists/english_country_names_and_code_elements.htm and the United Nations Multilingual Terminology Database - see http://unterm.un.org).
country_code Numerical Location Code (3-digit codes following the ISO 3166-1 numeric standard) - see http://en.wikipedia.org/wiki/ISO_3166-1_numeric.
1950-1955, 1955-1960, . . . Life expectancy in various five-year time intervals.
last.observed The year of the last observation for each country.

The e0*proj datasets start at 2010-2015. The e0*_supplemental datasets start at 1750-1755. Missing data have NA values.

Source
These datasets are based on estimates and projections of United Nations, Department of Economic and Social Affairs, Population Division (2013).

References
World Population Prospects: The 2012 Revision. (http://esa.un.org/unpd/wpp) Special Tabulations.

Examples
data(e0M)
head(e0M)
data(e0Fproj)
str(e0Fproj)

migration Datasets on Migration

Description
Estimates and projections of male and female age-specific net migration.

Usage
data(migrationM)
data(migrationF)

Format
Data frames with one row per country and age group. For each country there are 21 age groups. They contain the following variables:
country Country name.
country_code Numerical Location Code (3-digit codes following the ISO 3166-1 numeric standard) - see http://en.wikipedia.org/wiki/ISO_3166-1_numeric.
age A character string representing an age interval. For each country there are 21 values: “0-4”, “5-9”, “10-14”, “15-19”, “20-24”, “25-29”, “30-34”, “35-39”, “40-44”, “45-49”, “50-54”, “55-59”, “60-64”, “65-69”, “70-74”, “75-79”, “80-84”, “85-89”, “90-94”, “95-99”, and “100+” in that order.
1990-1995, 1995-2000, 2000-2005, . . . Net migration for the specific time period. Data that are not available are represented by an empty string.

Note
Distributions of net migrants by age and sex are provided for illustrative purposes only. Migration figures are based on intercensal net residuals and official statistics, population distribution by age and sex or simplified versions of Rogers-Castro migration age patterns, and incorporate statistical adjustment errors.

Source
These datasets are based on estimates and projections of United Nations, Department of Economic and Social Affairs, Population Division (2013).

References
World Population Prospects: The 2012 Revision. (http://esa.un.org/unpd/wpp) Special Tabulations.

Examples
data(migrationM)
str(migrationM)

mx Age-specific Mortality Data

Description
Age-specific data on mortality for males (mxM) and females (mxF).

Usage
data(mxM)
data(mxF)

Format
Data frames with one row per country and age group. For each country there are 22 or more age groups (i.e., up to age 100+ or 110+). They contain the following variables:
country Country name.
country_code Numerical Location Code (3-digit codes following the ISO 3166-1 numeric standard) - see http://en.wikipedia.org/wiki/ISO_3166-1_numeric.
age A character string representing an age interval (given by the starting age of the interval). For each country there are 22 values: “0”, “1”, “5”, “10”, “15”, “20”, “25”, “30”, “35”, “40”, “45”, “50”, “55”, “60”, “65”, “70”, “75”, “80”, “85”, “90”, “95”, and “100+” in that order.
1950-1955, 1955-1960, . . . Mortality rate for the given time period. Data that are not available are represented by an empty string.

Source
This dataset is based on estimates and projections of United Nations, Department of Economic and Social Affairs, Population Division (2013).

References
World Population Prospects: The 2012 Revision. (http://esa.un.org/unpd/wpp) Special Tabulations.

Examples
data(mxF)
str(mxF)

percentASFR Datasets on Age-specific Distribution of Fertility Rates

Description
Datasets giving the percentage of fertility rates over ages 15-50.

Usage
data(percentASFR)

Format
A data frame with one row per country and age group. For each country there are seven age groups. It contains columns country, country_code, age, and one column per time interval.

Source
This dataset is based on estimates and projections of United Nations, Department of Economic and Social Affairs, Population Division (2013).

References
World Population Prospects: The 2012 Revision. (http://esa.un.org/unpd/wpp) Special Tabulations.

Examples
data(percentASFR)
str(percentASFR)

pop Estimates and Projections of Population Counts

Description
Datasets with age-specific male and female historical population estimates and projections. Datasets popM (popF) contain estimates of the historical population counts for males (females). Datasets popXprojMed, popXprojHigh and popXprojLow contain median, high and low projections, respectively, with X=M for male and X=F for female. Datasets popproj80l, popproj80u, popproj95l, and popproj95u are the lower (l) and upper (u) bounds of the 80 and 95% probability intervals of the total population, i.e. aggregated over sex and age. Datasets popprojHigh and popprojLow contain the upper and lower bounds of the total population defined as +-1/2 child.

Usage
data(popM)
data(popF)
data(popMprojMed)
data(popFprojMed)
data(popMprojHigh)
data(popFprojHigh)
data(popMprojLow)
data(popFprojLow)
data(popproj80l)
data(popproj80u)
data(popproj95l)
data(popproj95u)
data(popprojHigh)
data(popprojLow)

Format
Data frames with one row per country and age group. For each country there are 21 age groups. They contain the following variables:
country Country name.
country_code Numerical Location Code (3-digit codes following the ISO 3166-1 numeric standard) - see http://en.wikipedia.org/wiki/ISO_3166-1_numeric.
age A character string representing an age interval. For each country there are 21 values: “0-4”, “5-9”, “10-14”, “15-19”, “20-24”, “25-29”, “30-34”, “35-39”, “40-44”, “45-49”, “50-54”, “55-59”, “60-64”, “65-69”, “70-74”, “75-79”, “80-84”, “85-89”, “90-94”, “95-99”, and “100+” in that order.
1950, 1955, . . . Population estimate or projection for the given time (mid-year).

Datasets popproj80l, popproj80u, popproj95l, popproj95u, popprojHigh, and popprojLow contain one row per country.

Source
These datasets are based on estimates and projections of United Nations, Department of Economic and Social Affairs, Population Division (2013).

References
World Population Prospects: The 2012 Revision. (http://esa.un.org/unpd/wpp) Special Tabulations.
Probabilistic projections: http://esa.un.org/unpd/ppp

Examples
data(popM)
str(popM)

sexRatio Sex Ratio at Birth

Description
Estimates and projections of the sex ratio at birth, derived as the number of females divided by the number of males.

Usage
data(sexRatio)

Format
A data frame with one record per country. It contains columns country, country_code, and one column per time interval.

Source
This dataset is based on estimates and projections of United Nations, Department of Economic and Social Affairs, Population Division (2013).

References
World Population Prospects: The 2012 Revision. (http://esa.un.org/unpd/wpp) Special Tabulations.

Examples
data(sexRatio)
str(sexRatio)

tfr United Nations Time Series of Total Fertility Rate

Description
Datasets containing the United Nations time series of the total fertility rate (TFR) for all countries of the world as available in 2012. Dataset tfr contains estimates of the historical TFR starting with 1950; tfr_supplemental contains a subset of countries for which data prior to 1950 are available. Dataset tfrprojMed contains the median projections. Datasets tfrproj80l, tfrproj80u, tfrproj95l, and tfrproj95u are the lower (l) and upper (u) bounds of the 80 and 95% probability intervals, respectively. Datasets tfrprojHigh and tfrprojLow contain the high and low variants, respectively, defined as +-1/2 child.

Usage
data(tfr)
data(tfr_supplemental)
data(tfrprojMed)
data(tfrproj80l)
data(tfrproj80u)
data(tfrproj95l)
data(tfrproj95u)
data(tfrprojHigh)
data(tfrprojLow)

Format
The datasets contain one record per country or region. They contain the following variables:
country Name of country or region (following ISO 3166 official short names in English - see http://www.iso.org/iso/country_codes/iso_3166_code_lists/english_country_names_and_code_elements.htm and the United Nations Multilingual Terminology Database - see http://unterm.un.org).
country_code Numerical Location Code (3-digit codes following the ISO 3166-1 numeric standard) - see http://en.wikipedia.org/wiki/ISO_3166-1_numeric.
1950-1955, 1955-1960, . . . TFR in various five-year time intervals.
last.observed The year of the last observation for each country.

The tfrproj* datasets start at 2010-2015. The tfr_supplemental datasets start at 1740-1745. Missing data have NA values.

Source
These datasets are based on estimates and projections of United Nations, Department of Economic and Social Affairs, Population Division (2013).

References
World Population Prospects: The 2012 Revision. (http://esa.un.org/unpd/wpp) Special Tabulations.

Examples
data(tfr)
head(tfr)
data(tfrprojMed)
str(tfrprojMed)

UNlocations United Nations Table of Locations

Description
United Nations table of locations, including regions, as available in 2012.

Usage
data(UNlocations)

Format
A data frame with one observation per country or region. It contains the following seven variables:
name Name of country or region (following ISO 3166 official short names in English - see http://www.iso.org/iso/country_codes/iso_3166_code_lists/english_country_names_and_code_elements.htm and the United Nations Multilingual Terminology Database - see http://unterm.un.org).
country_code Numerical Location Code (3-digit codes following the ISO 3166-1 numeric standard) - see http://en.wikipedia.org/wiki/ISO_3166-1_numeric.
reg_code Code of the region.
reg_name Name of the region.
area_code Area code.
area_name Area name, such as Africa, Asia, Europe, Latin America and the Caribbean, Northern America, Oceania, World.
location_type Code giving the type of the observation (0=World, 2=Major Area, 3=Region, 4=Country/Area, 5=Development group, 12=Special groupings).

Source
Data provided by the United Nations Population Division.

Examples
data(UNlocations)
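A minimal sketch of how the location table can be combined with the other datasets, assuming the package is installed: the documented location_type codes can be used to restrict, say, popM to country-level records.

data(UNlocations)
data(popM)
# country-level records only (location_type 4 = Country/Area, see above)
countries <- UNlocations$country_code[UNlocations$location_type == 4]
popM.countries <- popM[popM$country_code %in% countries, ]
str(popM.countries)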
iced_futures
rust
Rust
Crate iced_futures
===

Asynchronous tasks for GUI programming, inspired by Elm.

![The foundations of the Iced ecosystem](https://github.com/iced-rs/iced/blob/0525d76ff94e828b7b21634fa94a747022001c83/docs/graphs/foundations.png?raw=true)

Re-exports
---

* `pub use executor::Executor;`
* `pub use subscription::Subscription;`
* `pub use futures;`
* `pub use iced_core as core;`

Modules
---

* backend: The underlying implementations of the `iced_futures` contract!
* executor: Choose your preferred executor to power a runtime.
* subscription: Listen to external events in your application.

Structs
---

* Runtime: A batteries-included runtime of commands and subscriptions.

Traits
---

* MaybeSend (non-WebAssembly): An extension trait that enforces `Send` only on native platforms.

Functions
---

* boxed_stream (non-WebAssembly): Boxes a stream.

Type Definitions
---

* BoxFuture (non-WebAssembly): A boxed static future.
* BoxStream (non-WebAssembly): A boxed static stream.

Trait iced_futures::executor::Executor
===

```
pub trait Executor: Sized {
    // Required methods
    fn new() -> Result<Self, Error>
    where
        Self: Sized;
    fn spawn(&self, future: impl Future<Output = ()> + MaybeSend + 'static);

    // Provided method
    fn enter<R>(&self, f: impl FnOnce() -> R) -> R { ... }
}
```

A type that can run futures.

Required Methods
---

#### fn new() -> Result<Self, Error> where Self: Sized

Creates a new `Executor`.

#### fn spawn(&self, future: impl Future<Output = ()> + MaybeSend + 'static)

Spawns a future in the `Executor`.

Provided Methods
---

#### fn enter<R>(&self, f: impl FnOnce() -> R) -> R

Runs the given closure inside the `Executor`.

Some executors, like `tokio`, require some global state to be in place before creating futures. This method can be leveraged to set up this global state, call a function, restore the state, and obtain the result of the call.

Implementors
---

* impl Executor for iced_futures::backend::native::async_std::Executor (available on crate feature `async-std` and non-WebAssembly only)
* impl Executor for iced_futures::backend::native::smol::Executor (available on crate feature `smol` and non-WebAssembly only)
* impl Executor for iced_futures::backend::null::Executor
* impl Executor for iced_futures::backend::native::thread_pool::Executor (available on crate feature `thread-pool` and non-WebAssembly only)
* impl Executor for iced_futures::backend::native::tokio::Executor (available on crate feature `tokio` and non-WebAssembly only)
Struct iced_futures::subscription::Subscription
===

```
pub struct Subscription<Message> { /* private fields */ }
```

A request to listen to external events.

Besides performing async actions on demand with `Command`, most applications also need to listen to external events passively.

A `Subscription` is normally provided to some runtime, like a `Command`, and it will generate events as long as the user keeps requesting it.

For instance, you can use a `Subscription` to listen to a WebSocket connection, keyboard presses, mouse events, time ticks, etc.

Implementations
---

### impl<Message> Subscription<Message>

#### pub fn none() -> Self

Returns an empty `Subscription` that will not produce any output.

#### pub fn from_recipe(recipe: impl Recipe<Output = Message> + 'static) -> Self

Creates a `Subscription` from a `Recipe` describing it.

#### pub fn batch(subscriptions: impl IntoIterator<Item = Subscription<Message>>) -> Self

Batches all the provided subscriptions and returns the resulting `Subscription`.

#### pub fn into_recipes(self) -> Vec<Box<dyn Recipe<Output = Message>>>

Returns the different recipes of the `Subscription`.

#### pub fn with<T>(self, value: T) -> Subscription<(T, Message)> where Message: 'static, T: Hash + Clone + Send + Sync + 'static

Adds a value to the `Subscription` context. The value will be part of the identity of a `Subscription`.

#### pub fn map<A>(self, f: fn(_: Message) -> A) -> Subscription<A> where Message: 'static, A: 'static

Transforms the `Subscription` output with the given function.

Trait Implementations
---

### impl<Message> Debug for Subscription<Message>

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

Auto Trait Implementations
---

* `impl<Message> !RefUnwindSafe for Subscription<Message>`
* `impl<Message> !Send for Subscription<Message>`
* `impl<Message> !Sync for Subscription<Message>`
* `impl<Message> Unpin for Subscription<Message>`
* `impl<Message> !UnwindSafe for Subscription<Message>`

Blanket Implementations
---

The standard rustdoc blanket implementations apply: `Any`, `Borrow<T>`, `BorrowMut<T>`, `From<T>`, `Into<U>`, `TryFrom<U>`, and `TryInto<U>`.
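To make the composition API above concrete, here is a small sketch that merges two subscriptions with `batch` and lifts one into a shared message type with `map`; the `tick` and `key_presses` arguments are hypothetical inputs, not part of the crate:

```
// Minimal sketch of composing subscriptions. Only `batch` and `map`
// come from the crate; the two arguments are assumed to exist.
use iced_futures::subscription::Subscription;

#[derive(Debug, Clone)]
enum Message {
    Tick,        // assumed to be produced by the `tick` subscription
    Input(char),
}

fn subscriptions(
    tick: Subscription<Message>,
    key_presses: Subscription<char>,
) -> Subscription<Message> {
    Subscription::batch([
        tick,
        // `map` takes a plain `fn` pointer; a tuple-variant constructor
        // like `Message::Input` coerces to `fn(char) -> Message`.
        key_presses.map(Message::Input),
    ])
}
```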
Module iced_futures::backend
===

The underlying implementations of the `iced_futures` contract!

Modules
---

* `default`: A default, cross-platform backend.
* `native` (non-WebAssembly): Backends that are only available in native platforms: Windows, macOS, or Linux.
* `null`: A backend that does nothing!

Module iced_futures::executor
===

Choose your preferred executor to power a runtime.

Traits
---

* `Executor`: A type that can run futures.

Module iced_futures::subscription
===

Listen to external events in your application.

Structs
---

* `Subscription`: A request to listen to external events.
* `Tracker`: A registry of subscription streams.

Traits
---

* `Recipe`: The description of a `Subscription`.

Functions
---

* `channel`: Creates a `Subscription` that publishes the events sent from a `Future` to an [`mpsc::Sender`] with the given bounds.
* `events`: Returns a `Subscription` to all the ignored runtime events.
* `events_with`: Returns a `Subscription` that filters all the runtime events with the provided function, producing messages accordingly.
* `raw_events`: Returns a `Subscription` that produces a message for every runtime event, including the redraw request events.
* `run`: Returns a `Subscription` that will call the given function to create and asynchronously run the given [`Stream`].
* `run_with_id`: Returns a `Subscription` that will create and asynchronously run the given [`Stream`].
* `unfold`: Returns a `Subscription` that will create and asynchronously run a [`Stream`] that will call the provided closure to produce every `Message`.

Type Definitions
---

* `EventStream`: A stream of runtime events.

Struct iced_futures::Runtime
===

```
pub struct Runtime<Executor, Sender, Message> { /* private fields */ }
```

A batteries-included runtime of commands and subscriptions.

If you have an `Executor`, a `Runtime` can be leveraged to run any `Command` or [`Subscription`] and get notified of the results!

Implementations
---

### impl<Executor, Sender, Message> Runtime<Executor, Sender, Message> where Executor: Executor, Sender: Sink<Message, Error = SendError> + Unpin + MaybeSend + Clone + 'static, Message: MaybeSend + 'static

#### pub fn new(executor: Executor, sender: Sender) -> Self

Creates a new empty `Runtime`.

You need to provide:

* an `Executor` to spawn futures
* a `Sender` implementing `Sink` to receive the results

#### pub fn enter<R>(&self, f: impl FnOnce() -> R) -> R

Runs the given closure inside the `Executor` of the `Runtime`. See `Executor::enter` to learn more.

#### pub fn spawn(&mut self, future: BoxFuture<Message>)

Spawns a `Future` in the `Runtime`. The resulting `Message` will be forwarded to the `Sender` of the `Runtime`.

#### pub fn track(&mut self, recipes: impl IntoIterator<Item = Box<dyn Recipe<Output = Message>>>)

Tracks a [`Subscription`] in the `Runtime`. It will spawn new streams or close old ones as necessary! See `Tracker::update` to learn more about this!

#### pub fn broadcast(&mut self, event: Event, status: Status)

Broadcasts an event to all the subscriptions currently alive in the `Runtime`. See `Tracker::broadcast` to learn more.

Trait Implementations
---

### impl<Executor: Debug, Sender: Debug, Message: Debug> Debug for Runtime<Executor, Sender, Message>

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

Auto Trait Implementations
---

* `impl<Executor, Sender, Message> !RefUnwindSafe for Runtime<Executor, Sender, Message>`
* `impl<Executor, Sender, Message> Send for Runtime<Executor, Sender, Message> where Executor: Send, Message: Send, Sender: Send`
* `impl<Executor, Sender, Message> Sync for Runtime<Executor, Sender, Message> where Executor: Sync, Message: Sync, Sender: Sync`
* `impl<Executor, Sender, Message> Unpin for Runtime<Executor, Sender, Message> where Executor: Unpin, Message: Unpin, Sender: Unpin`
* `impl<Executor, Sender, Message> !UnwindSafe for Runtime<Executor, Sender, Message>`

Blanket Implementations
---

The standard rustdoc blanket implementations apply: `Any`, `Borrow<T>`, `BorrowMut<T>`, `From<T>`, `Into<U>`, `TryFrom<U>`, `TryInto<U>`, and `MaybeSend` (for any `T: Send`).
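Putting the pieces together, here is a minimal sketch of wiring up a `Runtime` from a `tokio`-backed executor and a `futures` mpsc channel, whose `Sender` satisfies the `Sink` bound above; the channel capacity and the `Message` type are arbitrary choices for illustration:

```
// Minimal sketch, assuming the `tokio` feature of `iced_futures` is enabled.
use futures::channel::mpsc;
use iced_futures::backend::native::tokio::Executor as TokioExecutor;
use iced_futures::{Executor, Runtime};

#[derive(Debug)]
enum Message {
    Done,
}

fn main() {
    // The Sender half implements `Sink<Message, Error = SendError>`,
    // as `Runtime` requires; results arrive on the Receiver half.
    let (sender, _receiver) = mpsc::channel::<Message>(100);

    let executor = TokioExecutor::new().expect("failed to create executor");
    let mut runtime: Runtime<_, _, Message> = Runtime::new(executor, sender);

    // Spawn a boxed future; its output `Message` is forwarded to `sender`.
    runtime.spawn(Box::pin(async { Message::Done }));
}
```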
Trait iced_futures::MaybeSend
===

```
pub trait MaybeSend: Send { }
```

Available on **non-WebAssembly** only.

An extension trait that enforces `Send` only on native platforms.

Useful to write cross-platform async code!

Implementors
---

* `impl<T> MaybeSend for T where T: Send`

Function iced_futures::boxed_stream
===

```
pub fn boxed_stream<T, S>(stream: S) -> BoxStream<T>
where
    S: Stream<Item = T> + Send + 'static,
```

Available on **non-WebAssembly** only.

Boxes a stream.

* On native platforms, it needs a `Send` requirement.
* On the Web platform, it does not need a `Send` requirement.

Type Definition iced_futures::BoxFuture
===

```
pub type BoxFuture<T> = BoxFuture<'static, T>;
```

Available on **non-WebAssembly** only.

A boxed static future.

* On native platforms, it needs a `Send` requirement.
* On the Web platform, it does not need a `Send` requirement.

Type Definition iced_futures::BoxStream
===

```
pub type BoxStream<T> = BoxStream<'static, T>;
```

Available on **non-WebAssembly** only.

A boxed static stream.

* On native platforms, it needs a `Send` requirement.
* On the Web platform, it does not need a `Send` requirement.
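For illustration, a minimal sketch of erasing a concrete stream type behind `BoxStream` with `boxed_stream`; the `numbers` function and its element type are arbitrary:

```
// Minimal sketch: box a concrete stream into a `BoxStream`.
use futures::stream::{self, StreamExt};
use iced_futures::{boxed_stream, BoxStream};

// On native targets the input stream must be `Send`;
// `stream::iter` over a `Vec` satisfies that.
fn numbers() -> BoxStream<u32> {
    boxed_stream(stream::iter(vec![1, 2, 3]))
}

fn main() {
    // Drive the boxed stream with a simple blocking executor.
    futures::executor::block_on(async {
        let mut s = numbers();
        while let Some(n) = s.next().await {
            println!("{n}");
        }
    });
}
```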